id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2308.16848 | Accurate Computation of Quantum Excited States with Neural Networks | We present a variational Monte Carlo algorithm for estimating the lowest
excited states of a quantum system which is a natural generalization of the
estimation of ground states. The method has no free parameters and requires no
explicit orthogonalization of the different states, instead transforming the
problem of finding excited states of a given system into that of finding the
ground state of an expanded system. Expected values of arbitrary observables
can be calculated, including off-diagonal expectations between different states
such as the transition dipole moment. Although the method is entirely general,
it works particularly well in conjunction with recent work on using neural
networks as variational Ans\"atze for many-electron systems, and we show that
by combining this method with the FermiNet and Psiformer Ans\"atze we can
accurately recover vertical excitation energies and oscillator strengths on a
range of molecules. Our method is the first deep learning approach to achieve
accurate vertical excitation energies, including challenging double
excitations, on benzene-scale molecules. Beyond the chemistry examples here, we
expect this technique will be of great interest for applications to atomic,
nuclear and condensed matter physics. | David Pfau, Simon Axelrod, Halvard Sutterud, Ingrid von Glehn, James S. Spencer | 2023-08-31T16:27:08Z | http://arxiv.org/abs/2308.16848v3 | # Natural Quantum Monte Carlo Computation of Excited States
###### Abstract
We present a variational Monte Carlo algorithm for estimating the lowest excited states of a quantum system which is a natural generalization of the estimation of ground states. The method has no free parameters and requires no explicit orthogonalization of the different states, instead transforming the problem of finding excited states of a given system into that of finding the ground state of an expanded system. Expected values of arbitrary observables can be calculated, including off-diagonal expectations between different states such as the transition dipole moment. Although the method is entirely general, it works particularly well in conjunction with recent work on using neural networks as variational Ansatze for many-electron systems, and we show that by combining this method with the FermiNet and Psiformer Ansatze we can accurately recover vertical excitation energies and oscillator strengths on molecules as large as benzene. Beyond the examples on molecules presented here, we expect this technique will be of great interest for applications of variational quantum Monte Carlo to atomic, nuclear and condensed matter physics.
## I Introduction
The computation of excited state properties of quantum systems is a fundamental challenge in chemistry and many branches of physics. Understanding electronic excitations is critical for predicting photochemical phenomena such as fluorescence and conformational changes in the presence of light [1; 2]. In condensed matter physics, excitations determine the optical band gap of semiconductors, which is critical for predicting the behavior of solar cells, photosensors, LEDs and lasers [3]. Excited states are also relevant to understanding nuclear phenomena like metastable isomers and electron capture [4]. Ultimately, the dynamics of quantum systems when stimulated cannot be understood without taking excited states into account. Despite the importance of excited states for quantum phenomena, a full computational account of excited states remains challenging.
Quantum Monte Carlo (QMC) methods [5; 6] are an appealing class of algorithms for computing the behavior of quantum systems due to the favorable scaling with the number of particles, typically \(\mathcal{O}(N^{3})-\mathcal{O}(N^{4})\), and wide applicability. Variational quantum Monte Carlo (VMC) in particular is quite conceptually simple, and consists of finding an explicit functional form for a wavefunction which minimizes a variational bound, but historically was not considered accurate enough on its own for many demanding applications. Recent work using neural networks as a wavefunction Ansatz has reinvigorated interest in VMC [7; 8], and has demonstrated that VMC can be competitive with state-of-the-art methods for ground state calculations.
In this paper, we focus on computing excited states of quantum systems by VMC. When used to optimize ground states, there are only two variational principles for QMC - energy minimization and variance minimization. Innovations in ground state VMC primarily focus on the choice of trial wavefunction [9; 10], or optimization method used to achieve the variational bound [11; 12], but the choice of objective to optimize is well-established. The same cannot be said for variational optimization of excited states.
Approaches for computing excited states by VMC can be broken down into several categories. Most methods are either state-_targeting_, in that they aim to find a single excited state, or state-_averaging_, in that they aim to find the lowest-lying excited states by minimizing the total weighted energy of many states simultaneously. Among state-targeting methods, there are methods which target specific energy ranges [13; 14], specific symmetries of the system [15], or a specific ordering of the roots (i.e. the \(k\)-th lowest state) [16]. For state-averaging approaches, the different states must be kept orthogonal, which can be achieved by including a penalty term in the variational bound which pushes the states apart [17; 15; 18], or by explicitly constructing orthogonal Ansatze, sometimes repeatedly re-orthogonalizing during optimization [19; 20; 21; 22].
All of these approaches have drawbacks and limitations. Targeting specific symmetries or energy ranges requires prior knowledge about the states of interest which may not be available, and state-targeting by variance minimization can lose track of the desired state [21]. Root-targeting methods are prone to root-flipping, whether they are used for QMC or other computational paradigms [23; 24]. Some methods require solving a generalized eigenvalue problem from stochastic estimates of the Hamiltonian and overlap matrices, which introduces biases into the gradients [16; 25]. Penalty methods often have problems with multiple Ansatze collapsing onto the same state, or have biased gradients [18], and the
strength of the penalty term is a free parameter which must be chosen. Constructing orthogonal Ansatze is usually only possible when the Ansatz is a linear combination of basis set functions [26; 27], which rules out many recently-developed Ansatze based on deep neural networks [28; 29; 30; 7]. Heuristics such as variance matching may be required to achieve good numerical results for all approaches. Despite almost four decades of work on QMC methods for excited states [26; 31], no single variational principle has emerged which has no free parameters, has convergence guarantees when optimizing with noisy Monte Carlo estimates, and is applicable to all possible Ansatze and all excited states, regardless of symmetry.
Here we present a new variational principle for computing the lowest excited states of a quantum system by Monte Carlo which does not suffer from any of these limitations. Our method can be seen as a state-averaging approach with a particular choice of sampling distribution which does not require the states to be orthogonal. This choice of sampling distribution is equivalent to reformulating the problem of finding \(K\) excited states of an \(N\) particle system into the problem of finding the ground state of a \(K\)-fermion system where each fermion is equivalent to \(N\) particles in the original system. Instead of orthogonalizing the states, the local energy is promoted from a scalar to a matrix, which gives unbiased estimates of a matrix whose eigenvalues are the energies of orthogonal states. Because wavefunction optimization can be done by stochastic gradient descent from unbiased noisy estimates of the total energy, the procedure is guaranteed to converge to a local minimum of the total energy over states. Due to the many desirable mathematical properties which follow from the choice of sampling distribution, we refer to our proposed approach as _natural excited states_ for VMC (NES-VMC).
## II Method
### Variational Monte Carlo
First we briefly review ground-state VMC and establish some notation. We will stick to the notation of first quantization and consider a system of \(N\) particles with states \(\mathbf{x}=\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\), although everything we discuss could be applied to variational Ansatze represented in second quantization as well. We aim to find the lowest eigenfunction of a Hamiltonian operator \(\hat{H}\). This can be done by reformulating the eigenfunction problem in variational form, as one of finding the minimum of the Rayleigh quotient:
\[\psi^{*}=\arg\min_{\psi}\frac{\langle\psi\hat{H}\psi\rangle}{\langle\psi^{2} \rangle} \tag{1}\]
where the Ansatz \(\psi\) is not necessarily normalized. Computing this quotient involves taking high-dimensional integrals over all possible particle states \(\mathbf{x}\), and can be approximated by Monte Carlo integration. Many choices of Monte Carlo sampling distribution \(p(\mathbf{x})\) are possible, but if \(p(\mathbf{x})\propto\psi^{2}(\mathbf{x})\), then the Rayleigh quotient takes a simple form that allows for unbiased empirical estimation of the energy and gradients of the energy:
\[\frac{\langle\psi\hat{H}\psi\rangle}{\langle\psi^{2}\rangle}=\mathbb{E}_{ \mathbf{x}\sim\psi^{2}}\left[\psi^{-1}(\mathbf{x})\hat{H}\psi(\mathbf{x})\right] \tag{2}\]
For this reason, \(\psi^{2}\) is the natural choice of sampling distribution for ground state estimation. The scalar \(E_{L}(\mathbf{x})\triangleq\psi^{-1}(\mathbf{x})\hat{H}\psi(\mathbf{x})\) that appears inside the expectation is the _local energy_, and at any eigenfunction of \(\hat{H}\) it will be constant if \(\hat{H}\) is a local operator.
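To make the estimator in Eq. (2) concrete, the following is a minimal illustrative sketch (not taken from the paper) of ground-state VMC for a 1D harmonic oscillator, using a Gaussian trial wavefunction with a single variational width parameter: positions are sampled from \(\psi^{2}\) with Metropolis moves and the energy is estimated as the average local energy.

```python
import numpy as np

# Toy ground-state VMC: 1D harmonic oscillator, H = -1/2 d^2/dx^2 + 1/2 x^2.
# Trial wavefunction psi(x) = exp(-alpha * x^2); exact ground state at alpha = 0.5.

def log_psi(x, alpha):
    return -alpha * x**2

def local_energy(x, alpha):
    # E_L = psi^{-1} H psi = -1/2 (psi''/psi) + V,
    # with psi''/psi = 4 alpha^2 x^2 - 2 alpha for the Gaussian above.
    return -0.5 * (4 * alpha**2 * x**2 - 2 * alpha) + 0.5 * x**2

def metropolis_psi2(alpha, n_samples=20000, step=1.0, seed=0):
    """Sample x ~ psi^2 with a simple random-walk Metropolis chain."""
    rng = np.random.default_rng(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        x_new = x + step * rng.normal()
        # Acceptance ratio psi^2(x_new)/psi^2(x) = exp(2*(log_psi(x_new) - log_psi(x)))
        if rng.random() < np.exp(2 * (log_psi(x_new, alpha) - log_psi(x, alpha))):
            x = x_new
        samples.append(x)
    return np.array(samples[1000:])  # drop burn-in

for alpha in (0.3, 0.5, 0.7):
    xs = metropolis_psi2(alpha)
    print(f"alpha={alpha:.1f}  <E_L> = {local_energy(xs, alpha).mean():.4f}")  # minimum (0.5) at alpha = 0.5
```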
### Natural Excited States
Going from ground states to excited states, we aim to find the lowest \(K\) eigenfunctions of \(\hat{H}\). We refer to a single set of \(N\) particle states as a _particle set_, and denote different particle sets with an upper index, so that \(\mathbf{x}^{i}\) denotes a set of \(N\) particles \(\mathbf{x}^{i}_{1},\ldots,\mathbf{x}^{i}_{N}\). For the remainder of the article, we will use \(\mathbf{x}\) to denote the complete state of all particle sets \(\mathbf{x}^{1},\ldots,\mathbf{x}^{K}\). Let \(\psi_{i}\) denote a (possibly unnormalized) N-particle wavefunction, then we are trying to find wavefunctions \(\psi_{1},\ldots,\psi_{K}\) which approximate the lowest excited states. Let \(\mathbf{\Psi}(\mathbf{x})\in\mathbb{R}^{K\times K}\) denote the matrix combining all electron sets with all wavefunctions:
\[\mathbf{\Psi}(\mathbf{x})\overset{\triangle}{\equiv}\begin{pmatrix}\psi_{1}( \mathbf{x}^{1})&\ldots&\psi_{K}(\mathbf{x}^{1})\\ \vdots&&\vdots\\ \psi_{1}(\mathbf{x}^{K})&\ldots&\psi_{K}(\mathbf{x}^{K})\end{pmatrix} \tag{3}\]
The determinant of this matrix \(\Psi(\mathbf{x})=\det(\mathbf{\Psi}(\mathbf{x}))\) can be thought of as an unnormalized Slater determinant, except that instead of single-particle orbitals, it is made up of N-particle wavefunctions. We call \(\Psi(\mathbf{x})=\det(\mathbf{\Psi}(\mathbf{x}))\) the _total Ansatz_, while the individual \(\psi_{i}\) are the _single-state Ansatze_.
Rather than optimizing the single-state Ansatze in order from lowest to highest energy, we will only optimize the total Ansatz to minimize the total energy of all states. This is conceptually quite similar to state-averaging approaches in VMC, except that we will not explicitly enforce the orthogonality of the different single-state Ansatze. Note that taking any linear combination of single-state Ansatze \(\psi^{\prime}{}_{i}=\sum_{j}a_{ij}\psi_{j}\) only changes the total Ansatz by a constant factor. Also note that if two single-state Ansatze are the same, the total Ansatz becomes zero. Thus, by representing the total Ansatz as a determinant of single-state Ansatze, we can prevent the collapse of different Ansatze onto the same state, without requiring them to be orthogonal.
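As a small numerical illustration of Eq. (3) and the argument above, here is a toy sketch (with arbitrary made-up single-state functions standing in for neural networks): the total Ansatz is the determinant of the \(K\times K\) matrix of single-state Ansatze evaluated on \(K\) particle sets, and mixing the states by any invertible matrix only rescales it.

```python
import numpy as np

# Toy single-state Ansatze psi_i acting on a full particle set (here N=2 particles in 1D),
# chosen arbitrarily for illustration -- in practice these would be neural networks.
def psi_1(xs): return np.exp(-np.sum(xs**2))
def psi_2(xs): return np.sum(xs) * np.exp(-np.sum(xs**2))
def psi_3(xs): return (np.sum(xs)**2 - 1.0) * np.exp(-np.sum(xs**2))

states = [psi_1, psi_2, psi_3]          # K = 3 single-state Ansatze
K, N = len(states), 2

def total_ansatz(x):
    """x has shape (K, N): K particle sets of N particles. Returns det Psi(x)."""
    Psi = np.array([[psi(x[i]) for psi in states] for i in range(K)])
    return np.linalg.det(Psi)

rng = np.random.default_rng(0)
x = rng.normal(size=(K, N))
print("total Ansatz:", total_ansatz(x))

# Mixing the single-state Ansatze by any invertible matrix A only rescales det Psi by det A,
# so the total Ansatz (and hence the sampling distribution) does not depend on the basis.
A = rng.normal(size=(K, K))
Psi = np.array([[psi(x[i]) for psi in states] for i in range(K)])
print(np.allclose(np.linalg.det(Psi @ A), np.linalg.det(A) * np.linalg.det(Psi)))  # True
```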
For an arbitrary operator \(\hat{\mathcal{O}}\) that acts on \(N\)-particle wavefunctions, let \(\hat{\mathcal{O}}\mathbf{\Psi}(\mathbf{x})\) denote the matrix of all values
of this operator applied to all single-state Ansatze and particle sets:
\[\hat{\mathcal{O}}\mathbf{\Psi}(\mathbf{x})\overset{\triangle}{\equiv}\begin{pmatrix}\hat{\mathcal{O}}\psi_{1}(\mathbf{x}^{1})&\ldots&\hat{\mathcal{O}}\psi_{K}(\mathbf{x}^{1})\\ \vdots&&\vdots\\ \hat{\mathcal{O}}\psi_{1}(\mathbf{x}^{K})&\ldots&\hat{\mathcal{O}}\psi_{K}(\mathbf{x}^{K})\end{pmatrix} \tag{4}\]

In analogy with the scalar local energy, the matrix local energy \(\mathbf{E}_{L}(\mathbf{x})\overset{\triangle}{\equiv}\mathbf{\Psi}^{-1}(\mathbf{x})\hat{H}\mathbf{\Psi}(\mathbf{x})\) takes the place of \(E_{L}(\mathbf{x})\), and its expectation under \(\Psi^{2}\) is a matrix whose eigenvalues are the energies of the \(K\) lowest states.
Not only is diagonalizing \(\mathbb{E}_{\Psi^{2}}[\mathbf{E}_{L}(\mathbf{x})]\) sufficient to recover the energies - it also provides us with the necessary change of basis to evaluate other observables \(\hat{\mathcal{O}}\), even off-diagonal observables \(\langle\psi_{i}\hat{\mathcal{O}}\psi_{j}\rangle\) between states. This can be seen due to the identity \(\mathbb{E}_{\Psi^{2}}[\mathbf{\Psi}^{-1}\hat{\mathcal{O}}\mathbf{\Psi}]= \mathbf{S}^{-1}\hat{\mathbf{O}}\), and for single-state Ansatze which are a linear combination of eigenfunctions, \(\mathbf{S}^{-1}\hat{\mathbf{O}}=\mathbf{A}^{-1}\hat{\mathbf{O}}^{*}\mathbf{A}\). So if we accumulate and diagonalize \(\mathbb{E}_{\Psi^{2}}[\mathbf{E}_{L}(\mathbf{x})]\) and use the resulting eigenvectors to compute \(\mathbf{U}^{-1}\mathbb{E}_{\Psi^{2}}[\mathbf{\Psi}^{-1}\hat{\mathcal{O}} \mathbf{\Psi}]\mathbf{U}\), then in the vicinity of the true ground state of the total Ansatz the result will be approximately \(\mathbf{\Sigma}^{-1}\hat{\mathbf{O}}^{*}\mathbf{\Sigma}\). Along the diagonal, this gives exactly the expectations \(\langle\psi_{i}^{*}\hat{\mathcal{O}}\psi_{i}^{*}\rangle\). Off the diagonal, this yields \(\frac{\sigma_{i}}{\sigma_{j}}\langle\psi_{i}^{*}\hat{\mathcal{O}}\psi_{j}^{*}\rangle\). If we multiply the matrix elementwise by its transpose, the \(\sigma_{i}\) terms cancel out, and we recover \(\langle\psi_{i}^{*}\hat{\mathcal{O}}\psi_{j}^{*}\rangle^{2}\), which gives the expectation up to a sign factor. This sign factor is not physically observable however, and in practice for computing quantities like the oscillator strength, only the expectation squared is needed.
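The post-processing described here can be sketched in a few lines. In the toy example below (again an illustration, not the authors' code), synthetic matrices stand in for the converged Monte Carlo averages: the accumulated matrix local energy is diagonalized, the accumulated observable matrix is rotated into that eigenbasis, and the elementwise product with its transpose recovers the squared matrix elements.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4

# Synthetic "exact" spectrum and observable in the (unknown) orthonormal eigenbasis.
E_exact = np.diag(np.sort(rng.uniform(-2.0, 0.0, size=K)))
O_exact = rng.normal(size=(K, K)); O_exact = 0.5 * (O_exact + O_exact.T)

# Single-state Ansatze = arbitrary non-orthogonal mixtures of the exact states,
# standing in for the converged Monte Carlo averages E[Psi^-1 H Psi] = S^-1 H, etc.
A = rng.normal(size=(K, K))
S = A @ A.T                      # overlap matrix      <psi_i psi_j>
H = A @ E_exact @ A.T            # Hamiltonian matrix  <psi_i H psi_j>
O = A @ O_exact @ A.T            # observable matrix   <psi_i O psi_j>

EL_mean = np.linalg.solve(S, H)  # E_{Psi^2}[E_L(x)]  (generally non-symmetric)
energies, U = np.linalg.eig(EL_mean)
energies, U = energies.real, U.real
order = np.argsort(energies)
energies, U = energies[order], U[:, order]
print(np.allclose(energies, np.diag(E_exact)))           # True: eigenvalues are the energies

O_rot = np.linalg.solve(U, np.linalg.solve(S, O)) @ U     # ~ Sigma^-1 O* Sigma
O_sq = O_rot * O_rot.T                                    # squared matrix elements <psi_i O psi_j>^2
print(np.allclose(O_sq, O_exact**2))                      # True, up to numerical noise
```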
### Neural Network Ansatze
Historically, variational Monte Carlo was used for ground state calculations mainly to find a trial wavefunction for more accurate projector QMC methods like diffusion Monte Carlo [5] or auxiliary field Monte Carlo [33]. However, in recent years, advances in deep neural networks have led to their use as accurate Ansatze for studying spin systems [7], electronic structure [8] and nuclear systems [34], often reaching levels of accuracy rivaling projector QMC methods. This has led to a renewed interest in VMC as a standalone method. While a variety of different neural network architectures can be used depending on the problem, such as restricted Boltzmann machines [7], convolutional neural networks [35], and autoregressive models [36], a number of custom architectures have been developed specifically for many-body electronic structure problems in first quantization [28; 29; 30; 37; 38; 39; 40; 41]. Most of these Ansatze start from a linear combination of Slater determinants:
\[\psi(\mathbf{x})=\sum_{k}\omega_{k}\text{det}\begin{pmatrix}\phi_{1}^{k}( \mathbf{x}_{1})&\ldots&\phi_{N}^{k}(\mathbf{x}_{1})\\ \vdots&&\vdots\\ \phi_{1}^{k}(\mathbf{x}_{N})&\ldots&\phi_{N}^{k}(\mathbf{x}_{N})\end{pmatrix} \tag{11}\]
It has long been recognized [42] that the single-particle orbitals in a Slater determinant can be generalized to depend on _all_ particles, so long as they depend on all but one in a permutation-independent manner:
\[\psi(\mathbf{x})=\sum_{k}\omega_{k}\text{det}\begin{pmatrix}\phi_{1}^{k}( \mathbf{x}_{1};\{\mathbf{x}_{/1}\})&\ldots&\phi_{N}^{k}(\mathbf{x}_{1};\{ \mathbf{x}_{/1}\})\\ \vdots&&\vdots\\ \phi_{1}^{k}(\mathbf{x}_{N};\{\mathbf{x}_{/N}\})&\ldots&\phi_{N}^{k}(\mathbf{x }_{N};\{\mathbf{x}_{/N}\})\end{pmatrix} \tag{12}\]
where \(\{\mathbf{x}_{/i}\}\) denotes the set of all particles _except_ \(\mathbf{x}_{i}\). In the event that the particles are spin-assigned, the orbitals can also be expressed as \(\phi_{i}^{k}(\mathbf{x}_{j}^{\uparrow};\{\mathbf{x}_{/j}^{\uparrow}\},\{\mathbf{x}^{\downarrow}\})\) where the function is only invariant to changing the order of particles of the same spin. Most neural network Ansatze for electrons in real space implement this idea by using permutation-equivariant deep neural networks to represent the orbitals, sometimes with a multiplicative Jastrow factor to account for pairwise interactions [29; 30; 38].
Extending these Ansatze to represent multiple states is quite straightforward. Each state is still expressed as a sum of determinants of generalized neural network orbitals; there are simply more orbitals:
\[\psi_{i}(\mathbf{x})=\sum_{k}\omega_{ik}\text{det}\begin{pmatrix}\phi_{1}^{ik}(\mathbf{x}_{1};\{\mathbf{x}_{/1}\})&\ldots&\phi_{N}^{ik}(\mathbf{x}_{1};\{\mathbf{x}_{/1}\})\\ \vdots&&\vdots\\ \phi_{1}^{ik}(\mathbf{x}_{N};\{\mathbf{x}_{/N}\})&\ldots&\phi_{N}^{ik}(\mathbf{x}_{N};\{\mathbf{x}_{/N}\})\end{pmatrix} \tag{13}\]
Nothing is changed about the neural network architecture itself, just the number of orbitals is increased proportionally to the number of states.
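The structure of Eq. (13) can be sketched with toy permutation-equivariant "orbitals" (a hypothetical stand-in for the neural-network orbitals; the functional forms below are made up purely for illustration): each orbital sees its own particle plus a symmetric summary of the others, and each state gets its own weighted sum of determinants.

```python
import numpy as np

N, K_STATES, N_DET = 3, 2, 2   # particles (1D here), excited states, determinants per state
rng = np.random.default_rng(0)
# Random parameters of the toy orbitals; real Ansatze would use deep networks here.
W = rng.normal(size=(K_STATES, N_DET, N, 2))
omega = rng.normal(size=(K_STATES, N_DET))

def orbital(i, k, j, xj, others):
    """Toy generalized orbital phi_j^{ik}(x_j; {x_/j}): depends on x_j and a
    permutation-invariant summary (here the mean) of all other particles."""
    w = W[i, k, j]
    return np.tanh(w[0] * xj + w[1] * others.mean())

def psi_state(i, x):
    """Single-state Ansatz psi_i(x) for one particle set x of shape (N,)."""
    total = 0.0
    for k in range(N_DET):
        mat = np.empty((N, N))
        for a in range(N):                      # row: particle a
            others = np.delete(x, a)
            for j in range(N):                  # column: orbital j
                mat[a, j] = orbital(i, k, j, x[a], others)
        total += omega[i, k] * np.linalg.det(mat)
    return total

x = rng.normal(size=N)
print([psi_state(i, x) for i in range(K_STATES)])
# Antisymmetry check: swapping two particles flips the sign of each state.
x_swapped = x.copy(); x_swapped[[0, 1]] = x_swapped[[1, 0]]
print(np.allclose(psi_state(0, x_swapped), -psi_state(0, x)))   # True
```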
Neural network Ansatze differ from classic Ansatze like the Slater-Jastrow-backflow Ansatz [9] in important ways which make it difficult to apply existing excited state methods. Many methods assume that the Ansatz is a linear combination of orthogonal basis functions like Slater determinants, a necessary assumption for maintaining the orthogonality of states, either through explicit construction or a diagonalization step [19]. Classic Ansatze are usually optimized through a small number of gradient steps, where each gradient step is accumulated over a large number of MCMC steps, so that the gradients are nearly deterministic. Most modern deep neural networks, by contrast, are optimized by stochastic gradient descent using a large number of small, noisy steps [43]. This means bias in the gradients becomes a more significant concern.
Existing work on excited state calculations with neural networks has focused on penalty methods [18; 15], but these still require choosing a free parameter trading off total energy and penalty strength, and may not exactly satisfy orthogonality in the states. Some of these methods also have biased gradients in the penalty term [18] due to nonlinearities meant to push states apart more strongly. By contrast, the NES-VMC method has no free parameters to tune, can be optimized by unbiased gradients that have the same form as for ground state calculations, does not require the states to be orthogonal, and makes no assumption on the functional form of the Ansatz. Thus, while NES-VMC is generally applicable to _all_ excited state VMC calculations, it is particularly well-tailored for use with recently developed neural network Ansatze.
## III Results
While the natural excited states method is fully general and can be applied to any quantum Hamiltonian, our experimental validation is focused on electronic structure in atoms and molecules, due to the abundant experimental and computational literature to compare against. For
all experiments, we are solving the Schrodinger equation in the Born-Oppenheimer approximation [44]:
\[\hat{H}= -\frac{1}{2}\sum_{i}\nabla_{i}^{2}+\sum_{i>j}\frac{1}{|\mathbf{r}_ {i}-\mathbf{r}_{j}|}\] \[-\sum_{iI}\frac{Z_{I}}{|\mathbf{r}_{i}-\mathbf{R}_{I}|}+\sum_{I>J }\frac{Z_{I}Z_{J}}{|\mathbf{R}_{I}-\mathbf{R}_{J}|} \tag{14}\]
where the indices \(i\) and \(j\) are over electrons and \(I\) and \(J\) are over atomic nuclei with fixed locations.
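For concreteness, the potential-energy part of Eq. (14) in atomic units is just a set of pairwise Coulomb sums; the sketch below (not tied to any particular code used in the paper) evaluates it for a given electron and nuclear configuration.

```python
import numpy as np

def coulomb_potential(r_elec, R_nuc, Z):
    """Potential terms of the Born-Oppenheimer Hamiltonian (atomic units).

    r_elec: (n_elec, 3) electron positions
    R_nuc:  (n_nuc, 3)  fixed nuclear positions
    Z:      (n_nuc,)    nuclear charges
    """
    v = 0.0
    n_elec, n_nuc = len(r_elec), len(R_nuc)
    for i in range(n_elec):                      # electron-electron repulsion
        for j in range(i + 1, n_elec):
            v += 1.0 / np.linalg.norm(r_elec[i] - r_elec[j])
    for i in range(n_elec):                      # electron-nucleus attraction
        for I in range(n_nuc):
            v -= Z[I] / np.linalg.norm(r_elec[i] - R_nuc[I])
    for I in range(n_nuc):                       # nucleus-nucleus repulsion
        for J in range(I + 1, n_nuc):
            v += Z[I] * Z[J] / np.linalg.norm(R_nuc[I] - R_nuc[J])
    return v

# Example: H2 at bond length 1.4 bohr with two electrons placed near the nuclei.
R = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.4]])
r = np.array([[0.1, 0.0, 0.1], [0.0, 0.1, 1.3]])
print(coulomb_potential(r, R, Z=np.array([1.0, 1.0])))
```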
To try to disentangle the effect that the choice of Ansatz has on performance, we investigated two different neural network architectures: the Fermionic Neural Network (FermiNet) [29] and the Wavefunction Transformer (Psiformer) [38]. While the Psiformer has generally been found to be more accurate on large systems, it is also slower, and for ground state calculations up to approximately 15 electrons, no appreciable difference in accuracy between the two has been found.
### Atomic Spectra
As an initial check of the correctness of our method, we investigate the excited states of first-row atoms, from lithium to neon. Atomic spectral lines have been the subject of some of the highest-precision measurements in all of science, and while we do not aim to reach spectroscopic accuracy, we can have high confidence in the accuracy of the measurements, and do not need to worry about effects such as adiabatic relaxation and zero-point vibrational energy which affect molecular measurements. All experimental data was taken from the energy level tables in the NIST Handbook of Basic Atomic Spectroscopic Data [32]. Because we are working with the nonrelativistic Schrodinger equation without spin-orbit corrections, we are not able to compute fine or hyperfine structure. To remove the fine structure, experimental energy levels with different total angular momenta are averaged together weighted by the degeneracy \(2J+1\) and treated as a single level. The hyperfine structure is too small to be of concern here. To investigate the effect of the choice of Ansatz as well as the choice of number of states \(k\) to compute, we ran calculations with the FermiNet with both 5 and 10 states, as well as the Psiformer with 10 states. Results are given in Fig. 1, with numerical results (including error bars) in the Appendix in Table 2.
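Removing the fine structure amounts to a simple degeneracy-weighted average over the \(J\) levels of a term; a small worked example (the level values below are made up purely for illustration) looks like this:

```python
# Collapse experimental fine-structure levels into a single term energy by
# weighting each J level with its degeneracy 2J + 1 (values below are made up).
levels = [  # (J, energy in cm^-1) for a hypothetical 3P term split by spin-orbit coupling
    (0, 0.0),
    (1, 16.4),
    (2, 43.5),
]
weights = [2 * J + 1 for J, _ in levels]
avg = sum(w * E for w, (_, E) in zip(weights, levels)) / sum(weights)
print(f"degeneracy-weighted term energy: {avg:.2f} cm^-1")
```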
For all atoms, NES-VMC gives results closely matching experiment. From lithium up to oxygen, the error relative to experiment is far less than 1 mHa (27.2 meV) for all but the highest excited state, and is often less than 0.1 mHa, an exceedingly high level of accuracy for a deep neural network Ansatz. On lithium, all Ansatze correctly converge to the \({}^{2}S\) and \({}^{2}P^{\circ}\) states, which are missed by the PauliNet penalty method [18]. The method struggles in some cases to get the highest energy state correct, but this seems to be improved by simply computing more states - for instance, the error in the \({}^{4}P\) states of fluorine is cut in half by increasing the number of states from 5 to 10. In rare cases, the highest state
Figure 1: Excited state energies for first row atoms from lithium to neon. Results from natural excited state VMC applied to the FermiNet (10 states, blue, 5 states, red) are shown on top of experimental results [32]. Spectral lines which match computed states are labeled with electron configurations and atomic term symbols (except for the highest levels of F and Ne, where term symbols are omitted for clarity). For all but the largest systems and highest excited states, there is excellent agreement with experiment. The discrepancy between 5 and 10 excited states is minimal except for the highest excited states of F and Ne, where computing more states increases the accuracy of a given state. Complete numerical results are given in Table 2.
seems to converge to the incorrect state, such as boron with the Psiformer, which seems to converge to the \({}^{2}P^{\circ}\) state rather than the last \({}^{2}D\) state. Fluorine and neon both have relatively large errors on the order of 1-2 mHa for low-lying states, but going from the FermiNet to the more accurate Psiformer Ansatz seems to reduce this error in all cases. The largest errors are in the highest states of fluorine and neon, where the error is significant. In this case we suspect the difficulty is due to the large number of different states with similar electron configurations and energies, and hope that by computing even more states or by using even more expressive Ansatze, the effects of individual states can be disentangled. The excellent performance on low-lying states gives us confidence that NES-VMC is mathematically sound.
### Oscillator Strengths
Going beyond results on single atoms and vertical excitation energies, we are interested in the performance of NES-VMC on more complicated molecular systems, as well as observable quantities other than the energy. The QUEST database [45; 46; 48; 49; 50; 51; 52; 53; 54] is an excellent source of well-controlled benchmark vertical excited states calculations using coupled cluster methods on molecules of various sizes, with consistent geometries and basis set extrapolations. Of particular interest is the subset of QUEST for which oscillator strengths have been computed [45], as oscillator strengths provide a strong test of how well an excited state method can perform on experimentally-observable quantities, and especially as oscillator strength and transition probability calculations are known to be highly sensitive to choices of basis set [55].
Oscillator strengths are a measure of the probability of transition between different states occurring as a result of photon emission or absorption. Under the assumption that the wavelength of the photon is much longer than the system under consideration, so the interaction can be approximated by a constant electric field, the transition dipole moment between two states gives a measure of how that transition will interact with light:
\[\mathbf{d}_{ij}=\left\langle\psi_{i}^{\dagger}\sum_{k}q_{k}\mathbf{r}_{k}\psi_ {j}\right\rangle \tag{15}\]
where the sum over \(k\) is taken over all particles in the system with charge \(q_{k}\) and position \(\mathbf{r}_{k}\). For electrons, \(q_{k}=-e\). The transition dipole moments are vector-valued quantities which include a complex phase factor, and are not directly observable. The oscillator strength
Figure 2: Vertical excitation energies and oscillator strengths for small molecules from Chrayteh _et al._[45]. Singlet states are in blue and triplet states are in gray. NES-VMC results are indicated by markers while theoretical best estimates from Chrayteh _et al._[45] or directly from QUEST [46] are given by the lines. When no data from QUEST is available, no TBE is given. Experimental results from Chrayteh _et al._[45] and references therein are given by the dashed lines in green. Where available, energies and oscillator strengths from Entwistle _et al._[18] are provided by the black triangles for comparison, with (pointing left) and without (pointing right) variance matching. In most cases, our results on both energies and oscillator strengths agree closely with theoretical best estimates. Complete numerical results are given in Table 3.
of a particular transition can be computed from the transition dipole moment:
\[f_{ij}=\frac{2}{3}\frac{m}{\hbar^{2}}\left(E_{i}-E_{j}\right)|\mathbf{d}_{ij}|^{2} \tag{16}\]
which reduces the transition dipole moment to a dimensionless positive scalar. In natural excited states, we can compute expectations of operators between different states up to an arbitrary sign factor, and that sign factor goes away in the oscillator strength. Computational details are discussed in more detail in Sec. C.3.
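In Hartree atomic units (\(m=\hbar=e=1\)), Eq. (16) is a one-liner once the transition dipole and state energies are known; a minimal sketch with placeholder numbers:

```python
import numpy as np

def oscillator_strength(E_i, E_j, d_ij):
    """f_ij = 2/3 (E_i - E_j) |d_ij|^2 in Hartree atomic units (m = hbar = 1)."""
    return (2.0 / 3.0) * (E_i - E_j) * np.dot(d_ij, d_ij)

# Placeholder example: an excitation of 0.30 Ha with transition dipole (0.8, 0, 0) a.u.
E_ground, E_excited = -1.00, -0.70
d = np.array([0.8, 0.0, 0.0])   # in practice, obtained from the squared matrix
                                 # elements recovered as described above
print(oscillator_strength(E_excited, E_ground, d))   # ~0.128
```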
We applied NES-VMC to all of the small molecules investigated in Chrayteh _et al._[45], computing the 5 lowest energy states with both the FermiNet and Psiformer. Results are presented in Fig. 2 and Table 3. Wherever possible, we take results from QUEST [45, 46] to be theoretical best estimates (TBEs) for comparison, though for many of the states we converged to, especially triplets, no results exist in QUEST. For molecules with heavier atoms (HCl, H\({}_{2}\)S, H\({}_{2}\)CSi), we found that using pseudopotentials for the heaviest atoms significantly improved the accuracy of the results, likely because the total energy scale was reduced by ignoring core electrons. Where applicable, we also include a comparison against the VMC penalty method of Entwistle _et al._[18]. We omit N\({}_{2}\) because the lowest-lying excited states are all triplets. For all diatomic systems, the \({}^{1}\Pi\) state is doubly-degenerate, and so the baseline oscillator strengths are divided by two to match the computed results.
In almost all cases, both the vertical excitation energies and the oscillator strengths are in excellent agreement with the TBE. The vertical excitation energies are almost all within chemical accuracy (1.6 mHa or 0.04 eV) of the TBE while the oscillator strengths usually diverge from the TBE by at most an amount on the order of 0.001, comparable to the uncertainty in the calculations. The results of Entwistle _et al._, in contrast, often differ noticeably from other theoretical results, even when corrections using variance matching are applied. This is particularly noticeable for the oscillator strengths. We note that we do not use variance matching for any of the NES-VMC calculations.
There are a few cases where NES-VMC behaves oddly. While the FermiNet and Psiformer find nearly identical vertical excitation energies for the \({}^{1}\Pi\) state of HCl, and the FermiNet accurately predicts the oscillator strength, the Psiformer mistakenly finds this to be a dark state. On formaldehyde (CH\({}_{2}\)O), both the FermiNet and Psiformer fail to find the \({}^{3}A_{1}\) state at all, and the oscillator strength for the \({}^{1}B_{2}\) state diverges from the TBE by a significant margin, although the Psiformer halves that margin relative to the FermiNet. Vertical excitation energies for systems with heavier atoms, such as H\({}_{2}\)S, and the highest state of thioformaldehyde (CH\({}_{2}\)S), are not quite
Figure 3: Excited states of the carbon dimer (C\({}_{2}\)). (a) The symmetries of the different states can be identified by evaluating each single state Ansatz at location \(\mathbf{r}\) and \(-\mathbf{r}\) for parity symmetry (u/g, blue) or by flipping \(\mathbf{r}\) across the x-axis for reflection symmetry (+/–, orange). (b) The vertical and adiabatic energies of excited states of C\({}_{2}\). The green line indicates experimental energies [47] and the red line indicates the energy of the \(B^{1}\Delta_{g}\) state from QUEST [48]. Bright transitions are labelled with their oscillator strength and, when available, their names. (c) Visualization of the 8 lowest natural orbitals of C\({}_{2}\). (d) The occupancy of the different natural orbitals for the different excited states of C\({}_{2}\), identified from the density matrix of each state. The \(a^{3}\Pi_{u}\) through \(A^{1}\Pi_{u}\) states are single excitations while the last two states are double excitations. Complete numerical results are given in Table 4.
as accurate as other results, though in the case of thioformaldehyde we are hopeful that, consistent with the atomic results in the previous section, computing more states will reduce the error in the \({}^{3}B_{2}\) state. For nitroxyl (HNO), the FermiNet fails to converge to the \({}^{1}A^{\prime}\) state, but the Psiformer finds it correctly, albeit with a relatively large error in the vertical excitation energy. This suggests that there are occasional difficulties in getting NES-VMC to converge to all low-lying states, but we are hopeful that improvements in optimization methods can improve this in the future. What is clear is that NES-VMC works well in the large majority of cases, and is far more accurate than alternative methods which have been proposed for neural network Ansatze.
Other QMC methods have also been applied to some of these systems. In particular, the QMC-CIPSI method has been successfully applied to computing the vertical excitation energies of the \({}^{1}A_{2}\) state in formaldehyde and thioformaldehyde to within chemical accuracy, using a conventional Slater-Jastrow Ansatz [56]. While the QMC-CIPSI method cannot be applied to neural network Ansatze, this suggests that good results can still be achieved with VMC with a simple Ansatz, and that the benefit of using NES-VMC relative to the penalty method in Entwistle _et al._ is due to the method rather than the choice of Ansatz.
### Carbon Dimer
In addition to computing observable quantities, it is also desirable to be able to say something about the _nature_ of different excited states - whether a state is a valence or Rydberg or charge transfer excitation, what its symmetries are, whether it is a single or double excitation, etc. As a benchmark system for demonstrating the ability of NES-VMC to characterize different states, we study the carbon dimer (C\({}_{2}\)). Despite its small size, the carbon dimer has a complicated electronic structure with a large number of low-lying excited states [60; 47; 59]. Due to the existence of very strong bands in the visible spectrum, the carbon dimer is frequently detected in astrophysical measurements, and can be observed in comets rich in organic materials [61]. The exact bond order of C\({}_{2}\) is still a subject of some controversy - while molecular orbital theory would classify it as a double bond, valence bond calculations suggest it may be better described as a quadruple bond [62]. And the carbon dimer is one of the smallest molecules to have low-lying double excitations, a class of excited state which other methods often struggle with [48]. Correctly reconstructing the potential energy curves for different low-lying states requires correctly disentangling and characterizing these different states at different geometries.
We compute the 8 lowest-lying states of the carbon dimer at several different bond lengths using NES-VMC and the Psiformer Ansatz, and present the results in Figs. 3 and 4. At equilibrium (1.244 A), we classify the different states by computing their spin magnitude and their symmetries - both parity symmetry (u/g) where the electron positions \(\mathbf{r}\) are replaced by \(-\mathbf{r}\) and reflection symmetry (+/-) where the positions are flipped on the x-axis. We do not compute the orbital angular momentum operator, but confirm that we see the expected degeneracy, for instance \(\Pi\) states are doubly degenerate (one of each reflection symmetry). The oscillator strengths show several bright transitions, which we show in Fig. 3b. Due to the degeneracy of the \(\Pi\) states, we add the oscillator strengths together to give the total strength. We correctly identify the Phillips and Ballik-Ramsay systems [63; 64], as well as the unnamed \(B^{1}\Delta_{g}\to A^{1}\Pi_{u}\) transition. We also find that the energy of the \(B^{1}\Delta_{g}\) state closely matches the TBE in QUEST [48]. The \(A^{1}\Pi_{u}\), \(c^{3}\Sigma_{u}^{+}\) and \(b^{3}\Sigma_{g}^{-}\) states all have nearly the same energy, so correctly identifying the oscillator strengths for these transitions is very challenging.
To better understand the nature of each excited state, we compute the occupancy of the different natural orbitals. We first compute the one-electron reduced density matrix (1-RDM) for each single-state Ansatz in a large basis set and then diagonalize these matrices to find the natural orbitals, as described in more detail in Sec. C.2. In this case, using the Hartree-Fock orbitals as the basis set, we find that all 1-RDMs are nearly diagonal, that is the natural orbitals closely match the Hartree-Fock molecular orbitals. We see in Fig. 3d that all states above the ground state involve excitation of electrons into the \(2p_{z}\sigma_{g}\) orbital. The \(\Pi\) states are well-described by single excitations from one of the \(2p\pi_{u}\) orbitals while the \(c^{3}\Sigma_{u}^{+}\) state promotes an electron from the \(2s\sigma_{u}^{*}\) orbital. Finally, both the \(b^{3}\Sigma_{g}^{-}\) and \(B^{1}\Delta_{g}\) states are double excitations of the \(2p\pi_{u}\) electrons into the \(2s\sigma_{u}^{*}\) orbital, as expected. Not only is NES-VMC able to predict double excitation energies correctly, but by having an explicit functional form
Figure 4: Potential energy curves of the low-lying excited states of C\({}_{2}\) which can be uniquely identified from their symmetries. Complete numerical results are given in Table 5.
for the wavefunction Ansatz, we can compute quantities such as the reduced density matrices which allow us to derive insight about the nature of electronic excitations.
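The natural-orbital analysis used here boils down to diagonalizing the one-electron reduced density matrix expressed in some orthonormal orbital basis; a generic sketch (the basis and the 1-RDM values below are placeholders, not the paper's data):

```python
import numpy as np

def natural_orbitals(rdm1):
    """Diagonalize a 1-RDM (in an orthonormal orbital basis) to get natural-orbital
    occupancies and the coefficients of the natural orbitals in that basis."""
    occ, coeff = np.linalg.eigh(rdm1)
    order = np.argsort(occ)[::-1]              # sort by decreasing occupancy
    return occ[order], coeff[:, order]

# Placeholder 1-RDM for a 4-orbital toy system (nearly diagonal, as found for C2
# in the Hartree-Fock basis); its trace is the number of electrons in this block.
rdm1 = np.array([
    [1.98, 0.01, 0.00, 0.00],
    [0.01, 1.95, 0.02, 0.00],
    [0.00, 0.02, 1.10, 0.03],
    [0.00, 0.00, 0.03, 0.12],
])
occ, coeff = natural_orbitals(rdm1)
print("occupancies:", np.round(occ, 3), " total electrons:", occ.sum())
```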
Predicting experimental excitation energies requires computing the energy difference between different states in their respective lowest energy configurations, the so-called _adiabatic_ excitation energy. To compute this for C\({}_{2}\), we repeated the equilibrium calculations at a number of different bond lengths. Wherever possible, we matched the energy levels at different geometries to the appropriate states based on the same symmetries as in Fig. 3a, and for five states we were able to reconstruct enough of the potential energy curve to identify the minimum energy for each. The results are shown in Fig. 4, smoothed by cubic interpolation. Taking the difference between the minimum energies of each state gives an estimate of the adiabatic excitation energy, which we show in purple in Fig. 3b, and in 3 out of 4 cases we matched the experimental energy[47] to within roughly 0.01 eV. We did not estimate the zero-point vibrational energies, but believe this may explain the discrepancy in the \(c^{3}\Sigma_{u}^{+}\) state. This shows that not only can NES-VMC match other theoretical calculations of vertical excitation energies, but can predict experimental results to high accuracy.
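Extracting adiabatic excitation energies in this way amounts to interpolating each state's potential energy curve and differencing the minima; a sketch with synthetic curves (cubic interpolation as in the text; the Morse-like parameters below are made up for illustration):

```python
import numpy as np
from scipy.interpolate import CubicSpline

HARTREE_TO_EV = 27.211386

def curve_minimum(bond_lengths, energies):
    """Cubic-interpolate E(R) and return its minimum on a dense grid."""
    spline = CubicSpline(bond_lengths, energies)
    grid = np.linspace(bond_lengths[0], bond_lengths[-1], 2001)
    values = spline(grid)
    return grid[np.argmin(values)], values.min()

# Synthetic example: two Morse-like states sampled at a handful of bond lengths (angstrom).
R = np.array([1.1, 1.2, 1.25, 1.3, 1.4, 1.5])
def morse(R, Re, De, a, E0): return E0 + De * (1 - np.exp(-a * (R - Re)))**2
E_ground  = morse(R, 1.24, 0.23, 2.0, -75.90)   # Hartree, made-up parameters
E_excited = morse(R, 1.31, 0.20, 2.0, -75.87)

(_, e0), (_, e1) = curve_minimum(R, E_ground), curve_minimum(R, E_excited)
print(f"adiabatic excitation energy: {(e1 - e0) * HARTREE_TO_EV:.3f} eV")
```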
### Twisted Ethylene
The excited states of ethylene (C\({}_{2}\)H\({}_{4}\)) across its potential energy surface present a challenging benchmark problem for many methods. As the torsion of the carbon double bond is increased, an avoided crossing occurs when the torsion angle is \(90^{\circ}\). Even for ground state calculations, DFT and single-reference coupled cluster calculations predict an unphysical cusp at this location[67]. Starting from the \(90^{\circ}\) torsion and bending the hydrogen atoms on one side inward (so-called "pyramidalization"), ethylene undergoes a conical intersection where the ground state transitions from a \(\pi\) to \(\pi^{*}\) highest occupied orbital (the \(N\) and \(V\) states, with term symbols \({}^{1}A_{g}\) and \({}^{1}B_{1u}\)). Modeling this conical intersection requires fundamentally multireference methods, and while time-dependent density functional theory (TD-DFT) struggles with this system[68], multireference configuration interaction (MR-CI) methods describe it well[58].
We compute the excited states of ethylene as the torsion angle is varied from \(0^{\circ}\) to \(90^{\circ}\), followed by variation of the pyramidalization angle from \(0^{\circ}\) to \(120^{\circ}\), enough to include the conical intersection of the \(N\) and \(V\) states. We try to match the geometry from previous MR-CI studies[58] as closely as possible. Results are shown in Fig. 5. There are also several low-lying triplet states of ethene, the \({}^{3}B_{1u}\) and \({}^{3}B_{3u}\) states, and so we calculated \(K=3\) excited states for all geometries, which we found was enough to find two singlet states for all geometries except at equilibrium, where we used \(K=5\) and took the highest state, as the \({}^{1}B_{3u}\) state has lower energy exclusively at equilibrium. We tried both the FermiNet and Psiformer, and did not find any significant difference in the results, so we show the Psiformer results here (though FermiNet results are included in Table 6). For comparison, in addition to TD-DFT[57] and MR-CI, we also compare against the PauliNet penalty method[18]. For consistency, we show the PauliNet penalty method without variance matching, though the difference is not large. All results are normalized so that the ground state energy at
Figure 5: Excited states and conical intersection of ethylene (C\({}_{2}\)H\({}_{4}\)). Our results (blue) are compared against TD-DFT[57] (purple), MR-CI[58] (green) and a penalty method used with the PauliNet[18] (red). The best estimate of the location of the conical intersection of the V and N states for each method is given by the vertical line in Fig. 5b. Our method is in close agreement with MR-CI up to a constant shift, and agrees with the location of the conical intersection better than the PauliNet penalty method. Note that the \(\phi=0\) geometry in Fig. 5b differs slightly from the \(\tau=90\) geometry in Fig. 5a, as in Barbatti _et al.[58]_. Complete numerical results are given in Table 6.
the equilibrium geometry is 0.
Qualitatively, the results from NES-VMC closely match MR-CI. The spurious cusp when the torsion angle is 90\({}^{\circ}\) is avoided, and the error in the ground state relative to MR-CI is smaller than for the PauliNet penalty method across torsion angles. The non-parallelity error in the \(V\) state relative to MR-CI is lower for our method than the PauliNet penalty method, and our predicted location for the conical intersection (\(\sim\)97.5 degrees) is closer to the MR-CI value (\(\sim\)96 degrees) than the predicted PauliNet penalty method value (\(\sim\)100 degrees). There is a nearly constant shift in the energy of the \(V\) state on the order of several tenths of an eV relative to MR-CI, and a shift in the energy of the \(N\) state which grows as the pyramidal angle grows. Increasing the number of excited states and using a different Ansatz did not seem to make a difference. We note that when using the equilibrium geometry for ethylene from QUEST in Sec III.2 as opposed to the geometry from MR-CI, our results agreed with the theoretical best estimates to within chemical accuracy. Experimentally relevant quantities like the location of the conical intersection are in excellent agreement with other highly accurate theoretical studies, and so we are confident that NES-VMC is able to capture the important behavior of this system across the potential energy surface.
### Benzene
Finally, as a demonstration of the ability of our method to scale to larger systems, we applied NES-VMC with both the FermiNet and Psiformer to benzene. Benzene is a common benchmark for excited state methods for medium-sized molecules, so there is abundant data for us to compare against. For VMC, in addition to the penalty method of Entwistle _et al._[18], there is also the penalty method of Pathak _et al._[17], which is used in combination with a traditional Slater-Jastrow Ansatz, and uses a different form of the penalty function which allows for unbiased gradients. On top of VMC results and coupled-cluster-based TBEs from QUEST, we also compare against CASPT2[66] and TD-DFT with the PBE0 functional[65]. Results are shown in Fig. 6, with complete numerical results in Table 7. For our calculations, we used the same geometry as in QUEST[50].
As can be seen in Fig. 6a, NES-VMC with the Psiformer comes very close to reaching the TBE for all computed states. The FermiNet is not quite as accurate, and struggles with the highest energy \({}^{3}B_{2u}\) state. Inspection of the spin magnitude reveals that the highest excited state of the FermiNet converges to a mixture of a triplet and singlet state, which suggests that contamination from the \({}^{1}B_{1u}\) state is affecting the performance. The Psiformer is known to be much more accurate for ground state calculations on systems as large as benzene[38], so it is not surprising that the Psiformer is also better suited for computing the relative energy between states at this
Figure 6: Excited states of benzene. The NES-VMC results (green and blue) are compared against theoretical best estimates from QUEST[50; 54] alongside TD-DFT-PBE0[65], CASPT2[66], DMC with a Slater-Jastrow Ansatz and penalty method[17], and the PauliNet with a penalty method[18]. NES-VMC with the Psiformer Ansatz is competitive with state-of-the-art methods. All excitations are \(\pi\rightarrow\pi^{*}\) excitations, and the orbitals involved are visualized in Fig. 6b. Complete numerical results are given in Table 7.
scale. CASPT2 and TD-DFT methods are less accurate across the board, though this is not surprising as density functional methods are generally less accurate than wavefunction methods, and CASPT2 is generally intermediate in accuracy between DFT and coupled cluster. The penalty method of Entwistle _et al._ in combination with the PauliNet was only applied to the lowest excited state, and even on that, it only reaches CASPT2-level accuracy, even with variance matching (which we do not use in NES-VMC). The penalty method of Pathak _et al._, however, is much more accurate, generally reaching comparable levels of accuracy to NES-VMC with the Psiformer. We suspect this is due to the unbiased gradients in the method of Pathak _et al._. Additionally, the results reported in Pathak _et al._ include a diffusion Monte Carlo correction, which we omit, though this only reduces the root mean squared error by \(\sim\)0.1 eV. We note that NES-VMC with a sufficiently expressive Ansatz not only reaches levels of accuracy near coupled cluster, but does so without any free parameters to tune, unlike penalty methods.
To better understand the nature of the excitations computed, we inspected the density matrices of the respective states, similarly to the analysis of the carbon dimer in Sec III.3 and Fig. 3c. The natural orbitals are well described by the Hartree-Fock orbitals, and so the density matrices in the Hartree-Fock basis are all nearly diagonal. All five excited states for benzene we computed are single excitations from a \(\pi\) to \(\pi^{*}\) orbital, but interestingly, in the natural orbital picture, they are best described by exciting half an electron from two distinct \(\pi_{g}\) orbitals into two distinct \(\pi_{u}^{*}\) orbitals. These orbitals are visualized in Fig 6b. The ability to easily evaluate and analyze properties of wavefunctions other than just the energy is one of the advantages of explicitly computing the functional form of the wavefunction in VMC. Overall, our results on benzene show that NES-VMC can be usefully applied to larger systems and still produce accurate results, so long as the correct Ansatz is used.
## IV Discussion
We have presented a novel method for calculating excited state properties of quantum systems by variational quantum Monte Carlo (VMC), the Natural Excited States method (NES-VMC). NES-VMC has no free parameters to tune, and allows for unbiased estimation of energies and gradients, by reformulating a state-averaging approach as the problem of finding the ground state of an extended system. In much the same way that sampling from \(\psi^{2}\) is the natural way to compute ground state properties by VMC, we believe that NES-VMC is the natural variational principle for computing excited state properties. Additionally, it dovetails well with recent work on neural network Ansatze for many-body systems.
We have demonstrated the effectiveness of NES-VMC on a number of benchmark problems ranging from small atoms and molecules up to the benzene molecule. In all cases, NES-VMC is competitive with theoretical best estimates using coupled cluster methods for estimating energies and oscillator strengths, and can capture the behavior of double excitations and conical intersections. The optimized Ansatz can be used in downstream analyses to characterize the nature of the electronic structure of different excited states. We are confident that NES-VMC is as effective as any other method for computing excited states with QMC, with the added benefit of simplicity and generality.
Neural network Ansatze can be quite computationally expensive, which puts an upper limit on the scale of systems we considered. We believe that recent work on scaling and accelerating neural network Ansatze for many-electron problems [41] can be usefully applied to NES-VMC as well, which could allow these methods to be applied to problems for which no satisfactory solution exists today. While we focused on applications using neural network Ansatze, classic Ansatze like the Slater-Jastrow Ansatz can be scaled to much larger systems [25; 69]. Although our results suggest that more accurate Ansatze are quite important for achieving good performance, we look forward to finding out how well NES-VMC works in conjunction with these classic Ansatze on large problems.
Finally, while our experiments in this paper focused on molecular systems, that is, many-electron problems with open boundary conditions, NES-VMC is fully general and can be applied to _any_ quantum Hamiltonian. Excited state calculations with QMC are an important tool for studying nuclear physics [6], optical band gaps in condensed matter physics [13; 70], many properties of spin systems, as well as time dynamics and finite temperature phenomena. We are excited to see how NES-VMC can be applied to many of the most challenging open problems in many-body quantum mechanics in the future.
###### Acknowledgements.
The authors would like to thank Matthew Foulkes, Denis Jacquemin, Michael Bearpark, Aron Cohen and Alex Gaunt for helpful discussions, and James Kirkpatrick, Annette Obika, Ali Eslami and Pushmeet Kohli for support.
|
2309.03995 | First-principle Study of Multiple Metastable Charge Ordering States in
La$_{1/3}$Sr$_{2/3}$FeO$_{3}$ | La doped SrFeO$_{3}$, La$_{1/3}$Sr$_{2/3}$FeO$_{3}$, exhibits a
metal-to-insulator transition accompanied by both antiferromagnetic and charge
ordering states along with the Fe-O bond disproportionation below a critical
temperature near 200K. Unconventionally slow charge dynamics measured in this
material near the critical temperature shows that its excited charge ordering
states can exhibit novel electronic structures with nontrivial energy profiles.
Here, we reveal possible metastable states of charge ordering structures in
La$_{1/3}$Sr$_{2/3}$FeO$_{3}$ using the first-principle and climbing image
nudged elastic band methods. In the strong correlation regime,
La$_{1/3}$Sr$_{2/3}$FeO$_{3}$ is an antiferromagnetic insulator with a charge
ordering state of the big-small-big pattern, consistent with the experimental
measurement of this material at the low temperature. As the correlation effect
becomes weak, we find at least two possible metastable charge ordering states
with the distinct Fe-O bond disproportionation. Remarkably, a ferroelectric
metallic state emerges with the small energy barrier of $\sim$7 meV, driven by
a metastable CO state of the small-medium-big pattern. The electronic
structures of these metastable charge ordering states are noticeably different
from those of the ground-state. Our results can provide an insightful
explanation to multiple metastable charge ordering states and the slow charge
dynamics of this and related oxide materials. | Nam Nguyen, Alex Taekyung Lee, Vijay Singh, Anh T. Ngo, Hyowon Park | 2023-09-07T19:58:28Z | http://arxiv.org/abs/2309.03995v1 | First-principle study of multiple metastable charge ordering states in La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\)
###### Abstract
La doped SrFeO\({}_{3}\), La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\), exhibits a metal-to-insulator transition accompanied by both antiferromagnetic and charge ordering states along with the Fe-O bond disproportionation below a critical temperature near 200K. Unconventionally slow charge dynamics measured in this material near the critical temperature [Nature Communications, **9** 1799 (2018)] shows that its excited charge ordering states can exhibit novel electronic structures with nontrivial energy profiles. Here, we reveal possible metastable states of charge ordering structures in La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) using the first-principle and climbing image nudged elastic band methods. In the strong correlation regime, La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) is an antiferromagnetic insulator with a charge ordering state of the big-small-big pattern, consistent with the experimental measurement of this material at the low temperature. As the correlation effect becomes weak, we find at least two possible metastable charge ordering states with the distinct Fe-O bond disproportionation. Remarkably, a ferroelectric metallic state emerges with the small energy barrier of \(\sim\)7meV, driven by a metastable CO state of the small-medium-big pattern. The electronic structures of these metastable charge ordering states are noticeably different from those of the ground-state. Our results can provide an insightful explanation to multiple metastable charge ordering states and the slow charge dynamics of this and related oxide materials.
## I Introduction
Charge ordering (CO) or charge density wave (CDW) is an intriguing material property driven by a spontaneous symmetry breaking of the periodicity in crystals. In strongly correlated materials, the charge degree of freedom is typically coupled to other degrees of freedom including spin, orbital, or lattice. While the origin of CDW can be purely electronic and the electronic correlation plays an important role, it is often accompanied by structural distortions such as the bond-order or the Peierls transition, possibly leading to ferroelectricity. Indeed, the combination of CDW, spin density wave (SDW), and the bond order has been proposed as the mechanism of ferroelectricity [1; 2; 3].
La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) (LSFO) is a transition metal oxide with a perovskite structure undergoing a weakly first-order transition at a temperature \(T\)=200K, from a paramagnetic metallic state with the average valence state of Fe\({}^{3.67+}(d^{4.3})\) at a high temperature to an antiferromagnetic (AFM) insulating state with a CO state of Fe\({}^{3+}(d^{5})\): Fe\({}^{5+}(d^{3})\)=2:1 at a low temperature [4]. Structural properties of LSFO with or without CO phases have been characterized experimentally using X-ray diffraction, neutron diffraction, and electron microscopy. The studies of X-ray and neutron diffraction [5; 6; 7; 8; 9; 10; 11] showed that bulk LSFO forms a rhombohedral structure in the space group of \(R\bar{3}c\) (see Fig. 1) with the lattice constants \(a=5.47\) A and \(c=13.35\) A. A sign of the CDW spanning the periodicity of three Fe ions accompanied by SDW with a periodicity of six Fe ions was measured along the pseudocubic [111] direction, but there was no clear evidence of structural distortions. Later, the electron microscopy study by Li _et al._[12] revealed structural distortions along the pseudocubic [111] direction in real space upon the CDW transition. Finally, the neutron diffraction studies by Sabyasachi _et al._[6] and Yang _et al._[8] also showed a possibility of the meta-stable CO state due to multiple neutron peaks below the critical temperature.
Electronic properties of LSFO at the low-temperature CO phase have been characterized by various experiments. The study of optical spectroscopy by Ishikawa _et al._[13] showed the optical gap of LSFO was about 0.13 eV at low temperature. The studies of Mossbauer spectroscopy [5; 6; 7; 8; 9; 11; 14; 15] captured two kinds of Fe ions with different hyperfine fields, confirming the charge disproportionation below the critical temperature. A recent ultrafast X-ray measurement of LSFO by Zhu _et al._ has shown that a noticeable slowdown occurs during the relaxation of CO near the critical temperature [16]. They argued that the photoexcitation due to an ultrafast pump can drive a ground state of La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) into metastable states with different spin/charge orderings, which can be the origin of slowdown in the relaxation process. According to Yamamoto _et al._, [17] these metastable or transient states are the CO in sequence of Fe\({}^{4+}\)Fe\({}^{3+}\)Fe\({}^{4+}\). However, the magnetic moments as well as the spin states, i.e. high spin (HS) or low spin (LS), of these Fe\({}^{4+}\) and Fe\({}^{3+}\) ions were unknown. In general, the slow dynamics of CO can originate from the multiple meta-stable CO states accessible during the relaxation process.
Unlike those various experimental characterizations, theoretical studies of LSFO have been rather limited. The Hartree-Fock study by Matsuno _et al._[18] captured an energy gap of 0.14 eV, which was in good agreement with the experimental gap at low temperature. The first-principles studies using density functional theory plus the Hubbard \(U\) (DFT+\(U\)) by Zhu _et al._[16] and Saha-Dasgupta _et al._[19] verified the presence of structural modulation or oxygen breathing distortions accompanied by CO of Fe ions in a sequence of Fe\({}^{3+}\)Fe\({}^{5+}\)Fe\({}^{3+}\). They also found that another sequence of CO is possible, namely Fe\({}^{4+}\)Fe\({}^{3+}\)Fe\({}^{4+}\). These CO states are strongly coupled to the spin states, as the Fe ion with a larger charge state shows the high-spin state with Fe-O bond elongation. Finally, the possibility of ferroelectricity in La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) was pointed out by Park _et al._ by rearranging the La/Sr layers[20]. Nevertheless, the effect of electronic correlations on the stability of CO states and the emergence of novel metastable states, such as ferroelectricity, which can be accessible in photoexcitation experiments, has not been studied from first principles.
In this work, we study the effect of electron correlations on the structural and electronic properties of LSFO, which exhibits strong charge-spin-lattice coupling, by adopting the first-principles DFT+\(U\) method. In particular, we explore possible metastable CO phases driven by a new pattern of structural distortions by adopting the climbing image nudged elastic band (CINEB) method along with DFT+\(U\). Remarkably, we find a new electronic phase in LSFO exhibiting ferroelectricity driven by a small-medium-big CO pattern and a distinct Fe-O bond disproportionation with small-medium-big magnetic moments. This new metastable phase is nearly degenerate in energy with the previously known CO phases, with a small energy barrier of \(5\sim 7\) meV, implying promising tunability of this material for future electronic devices.
## II Methods
### First-principle calculation
To perform the structural relaxation and the band structure calculations of LSFO, we adopt DFT+\(U\)[21] based on the projector augmented-wave (PAW) method[22] as implemented in the Vienna _ab initio_ simulation package (VASP)[23; 24]. The exchange-correlation energy functional was treated using the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) functional[25]. The cutoff energy for the plane-wave basis was set to 600 eV, and a Gamma-centered 8\(\times\)8\(\times\)2 \(k\)-point mesh[26] was used for all calculations. For structural relaxations, the Hellmann-Feynman force[27] on each atom was required to be smaller than 0.01 eV/Å. To treat the correlation effect of the Fe \(d\) orbitals, we impose the Hubbard \(U\) and the Hund's coupling \(J\) within DFT+\(U\).
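For illustration, the settings above can be reproduced through a scripting interface such as ASE; the following minimal sketch is an assumption-laden example (the structure file name and the AFM initialization step are illustrative), with only the numerical parameters taken from the text.

```python
# Minimal sketch of the DFT+U setup described above (assumes ASE with a working VASP backend).
from ase.io import read
from ase.calculators.vasp import Vasp

atoms = read("LSFO_rhombohedral_POSCAR", format="vasp")  # hypothetical input file (30-atom cell)

calc = Vasp(
    xc="pbe",                    # PBE-GGA exchange-correlation functional
    encut=600,                   # plane-wave cutoff (eV)
    kpts=(8, 8, 2), gamma=True,  # Gamma-centered 8x8x2 k-point mesh
    ispin=2,                     # spin-polarized; AFM order is set via initial magnetic moments
    ldau=True,                   # DFT+U on the Fe d orbitals
    ldau_luj={"Fe": {"L": 2, "U": 5.0, "J": 1.0},   # U=5 eV, J=1 eV (CO1 regime)
              "La": {"L": -1, "U": 0.0, "J": 0.0},
              "Sr": {"L": -1, "U": 0.0, "J": 0.0},
              "O":  {"L": -1, "U": 0.0, "J": 0.0}},
    isif=4, ibrion=2, nsw=200,   # relax ions and cell shape at fixed volume
    ediffg=-0.01,                # stop when all forces are below 0.01 eV/Angstrom
)
atoms.calc = calc
print("Relaxed total energy (eV):", atoms.get_potential_energy())
```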
As noted from the previous study of Ref. [16], two distinct CO structures (CO1 and CO3, see Fig. 2) can be obtained in LSFO by relaxing the crystal structure while imposing different \(U\) values on the Fe ions. For the ground-state CO1 structure, we used \(U\)=5 eV and \(J\)=1 eV, while the distinct CO3 structure is obtained using \(U\)=3 eV and \(J\)=0.6 eV. Both the crystal shape and the ionic positions were relaxed during the structural relaxation, while the crystal volume of LSFO was fixed to 329.24 Å\({}^{3}\). To obtain a metastable CO phase (CO2, see Fig. 2), we adopt the CINEB method along with DFT+\(U\) using \(U\)=3.62 eV (see Sec. III.2). We also explore the effect of the \(U\) values (\(J=0.2U\)) on the stability of the different CO phases (see Sec. III.3).
### Energy calculation along a structural path
To obtain the minimum energy curve along a structural path and explore possible metastable structures, we adopt the CINEB method along with DFT+\(U\). The nudged elastic band (NEB) method is an efficient tool for finding the minimum energy path between two stable structures, i.e. a given initial (reactant) and final (product) state[28; 29]. The CINEB method is a small modification of the NEB method that adds no significant computational cost[30]. The CINEB method yields rigorous convergence to a saddle point, which has a maximum energy along the band but a minimum in all other directions. The other images in the band serve the purpose of defining the one degree of freedom along which the energy of the climbing image is maximized. In this work, we adopt the CINEB method to explore metastable CO states with distinct structural distortions following a computed structural path and to compute the energy barrier along the path. We obtain the structural path by defining two stable CO structures relaxed with different initial conditions and constructing an energy path between the two structures using the CINEB method.
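The band construction just described can be sketched with a generic NEB implementation; the example below uses ASE's climbing-image NEB purely for illustration (the endpoint file names and the `make_dftu_calculator` helper are assumptions standing in for the DFT+\(U\) setup of Sec. II.1).

```python
# Schematic CI-NEB between the two reference CO structures (assumes ASE; not a production script).
from ase.io import read
from ase.neb import NEB
from ase.optimize import FIRE

initial = read("CO1_relaxed_POSCAR", format="vasp")  # reactant: CO1 (assumed file)
final = read("CO3_relaxed_POSCAR", format="vasp")    # product:  CO3 (assumed file)

n_images = 9
images = [initial] + [initial.copy() for _ in range(n_images - 2)] + [final]

neb = NEB(images, climb=True)   # climbing-image variant
neb.interpolate()               # start from the linearly interpolated path

for image in images[1:-1]:
    image.calc = make_dftu_calculator(U=3.62, J=0.724)  # assumed helper wrapping the DFT+U settings

FIRE(neb).run(fmax=0.05)        # relax the band
print([im.get_potential_energy() for im in images])     # energy profile along the path
```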
### Order Parameter
While ferroelectricity is a phenomenon driven by the spontaneous polarization of a material, the calculation of the polarization in a periodic system requires a careful treatment of the formalism[31]. At the same time, the inversion symmetry breaking of a structure is a clear indication of a spontaneous polarization. Since we are not interested in obtaining the quantitative value of the polarization in this work, we instead investigate the displacements of the Fe and O planes in the rhombohedral unit cell along the [111]\({}_{c}\) direction (Figure 1), where the Fe plane distortion occurs below the critical temperature.
The displacements of the Fe and O planes were investigated in the following way. First, we confirm that the CO1 and CO3 structures are centrosymmetric and define the central plane \(C\) as the midway between the Fe1 and Fe6 planes. Next, we generate the other dashed-line planes, which are equidistant and correspond to the Fe and O planes of the undistorted high-temperature structure (see Figure 2). Then, we can quantify how much the Fe and O planes are displaced from the dashed lines. We define the total displacement per unit cell (\(\Delta_{tot}\)) for these Fe and O planes (see Table 1). For the CO2 structure, \(\Delta_{tot}\) is finite due to the inversion symmetry breaking, also implying the emergence of ferroelectricity.
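Because this order parameter is simple bookkeeping on the relaxed coordinates, it can be evaluated with a few lines of numpy; the array below is a placeholder standing in for the actual plane positions read from the relaxed structure.

```python
import numpy as np

# Positions of the Fe planes along [111]_c in the relaxed cell (Angstrom); placeholder values.
relaxed_planes = np.array([0.00, 1.08, 2.22, 3.30, 4.41, 5.52])

# Reference planes: equidistant positions of the undistorted high-temperature structure,
# spanning the same interval between the Fe1 and Fe6 planes.
spacing = (relaxed_planes[-1] - relaxed_planes[0]) / (len(relaxed_planes) - 1)
reference_planes = relaxed_planes[0] + spacing * np.arange(len(relaxed_planes))

displacements = relaxed_planes - reference_planes  # per-plane displacement from the dashed lines
delta_tot = displacements.sum()                    # total displacement per unit cell

print("Per-plane displacements (A):", np.round(displacements, 3))
print("Delta_tot (A):", round(float(delta_tot), 3))  # nonzero => inversion symmetry is broken
```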
## III Results and Discussions
### Structural Relaxations
The neutron diffraction study by Battle _et al._[5] at room temperature showed that bulk LSFO forms a rhombohedral structure of the space group \(R\bar{3}c\) (Figure 1) with lattice constants of \(a=5.47\) Å and \(c=13.35\) Å. The rhombohedral unit cell of LSFO has 30 atoms, including two La, four Sr, six Fe, and eighteen O ions. The c-axis of the rhombohedral unit cell is equivalent to the \([111]_{c}\) direction of the pseudocubic one, which is conventionally adopted in the literature. Thus, the cubic \([111]_{c}\) direction will also be adopted in this paper. Above the CO critical temperature, all Fe ions are equivalent and have the same Fe-O bond lengths. As the temperature is lowered below T\({}_{CO}\), both SDW and CDW orders develop along the \([111]_{c}\) direction. While the CDW order spans the periodicity of three Fe ions, the antiferromagnetic SDW repeats in a unit cell of six Fe ions, which is commensurate with the crystal lattice periodicity. As a result, the space group of the crystal structure is lowered to \(P\bar{3}m1\) (trigonal, No. 164) with the point group symmetry of \(D_{3d}\), while the crystal remains centrosymmetric. Here, we find that three distinct CO phases can be stable in LSFO with the same commensurate modulations of the SDW and CDW, and the stabilities of these CO structures depend on the electronic correlation effect (the Hubbard \(U\) values).
As already noted in Ref. [16], two distinct centrosymmetric CO structures (CO1 and CO3, see Fig. 2) can be obtained by relaxing them with different Hubbard \(U\) values in DFT+\(U\). To explore other metastable CO phases and the energy barriers, these two CO1 and CO3 structures are used as the two reference structures in the CINEB method. The first stable structure CO1 (charge ordering 1) was obtained in a strongly correlated regime with \(U\)=5 eV and \(J\)=1 eV, values which have been used for LSFO in the literature [16; 19; 20]. Another stable structure, CO3, was obtained in a weakly correlated regime [19] with
Figure 2: The schematics of the Fe magnetic moments and the displacements of the Fe/O planes for the CO1, CO2, and CO3 phases along the \([111]_{c}\) direction. The displacements are the changes of the atomic positions from their undistorted structures (grey dashed lines). The central plane C (grey solid line) is midway between the Fe1 and Fe6 planes.
Figure 1: The crystal structure of La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) in a rhombohedral unit cell. The rhombohedral c-axis is equivalent to the \([111]_{c}\) direction of the cubic unit cell.
\(U\)=3 eV and \(J\)=0.6 eV, which was also used by Zhu _et al._[16]. Both the CO1 and CO3 structures exhibit a sixfold (six Fe ions) spin density wave (SDW) along the cubic [111]\({}_{c}\) direction, such that Fe1(\(\uparrow\))Fe2(\(\uparrow\))Fe3(\(\uparrow\))Fe4(\(\downarrow\))Fe5(\(\downarrow\))Fe6(\(\downarrow\)) (Figure 2).
We find that the \(<\)Fe-O\(>\) mean bond lengths are closely related to the magnetic moments. In the CO1 structure, the magnetic moments of Fe1(Fe4) and Fe3(Fe6) are larger than that of Fe2(Fe5), so the charge states of Fe1 and Fe3 should be larger than that of Fe2 (see Section III.4). As a result, the high-spin (HS) Fe-O bond expands and the bond disproportionation occurs. In particular, in the CO1 structure the \(<\)Fe-O\(>\) mean bond lengths of Fe1-Fe2-Fe3 (similarly Fe4-Fe5-Fe6) show the big-small-big pattern, coupled to the big-small-big magnetic moments of the Fe1, Fe2, and Fe3 ions, respectively (Fig. 2). The \(<\)Fe-O\(>\) mean bond lengths of CO1 are 1.92 Å for Fe1 and 1.86 Å for Fe2, respectively.
In the case of CO3, the magnetic moments of the Fe1 (Fe4) and Fe3 (Fe6) ions become smaller than that of the Fe2 (Fe5) ion and the bond-length pattern changes to small-big-small. The bond lengths are 1.88 Å for Fe1 and 1.94 Å for Fe2. As a result of the Fe-O bond-length disproportionation, the displacements of the Fe and O planes are also non-uniform, as shown in Fig. 2.
In Table 1, we list the relaxed unit-cell parameters, the space group, the displacements of the Fe planes, and the total displacement of the Fe and O planes (\(\Delta_{tot}\)) along [111]\({}_{c}\). Both CO1 and CO3 have the space group \(P\bar{3}m1\), implying that CO1 and CO3 are centrosymmetric. Also, based on the displacements of the Fe and O planes, Figure 2 shows that reflecting the CO1 and CO3 supercells (including the Fe1O\({}_{6}\)-Fe6O\({}_{6}\) cells) about the central plane C yields the same supercells. Finally, the total displacements of the Fe and O planes in CO1 and CO3 are also zero (Table 1), meaning that no polarization is induced in CO1 and CO3.
### Energy Path along Multiple Charge Orderings
In this section, we compute the energy path between two energetically degenerate CO phases (CO1 and CO3) to explore possible metastable CO states along the structural path. We first tuned \(U\) (\(J\)=0.2\(U\)) for the CO1 and CO3 phases while relaxing the crystal structures at a fixed volume to investigate their stability, and plot the relative energy \(\Delta\)E\({}_{CO1-CO3}\)(\(=E[\mathrm{CO1}]-E[\mathrm{CO3}]\)) between them as a function of \(U\) in Fig. 3(a). Here, we find that the low-temperature experimental ground-state structure (CO1) is stable when the \(U\) value becomes larger than 3.7 eV (see Fig. 5(a)). While DFT+\(U\) is a zero-temperature theory, we find that the energetics of the different CO phases can be tuned by changing the onsite Coulomb repulsion \(U\), which can mimic the effect of temperature, pressure, or photoinduced excitation. In principle, laser excitation in experiment can modulate the electronic structure away from the ground state by affecting the exchange interaction [32], and may eventually trigger a phenomenon called "photoinduced structural phase transition" [33].
Fig. 3(a) shows that the energy difference between the CO1 and CO3 structures becomes almost zero at \(U_{c}=3.62\) eV (\(J=0.724\) eV). This means that other metastable CO structures could be found in a CINEB calculation near \(U=3.62\) eV. At \(U\)=3.62 eV, CO1 (CO3) still has the big-small-big (small-big-small) bond order (see Fig. 5). Here, we perform a CINEB calculation at \(U=3.62\) eV using both CO1 and CO3 as the two reference structures. Remarkably, Fig. 3(b) shows that the CINEB energy curve calculated with \(U_{c}\)=3.62 eV captures a metastable structure, CO2, whose energy is only 3 meV above the CO1 or CO3 structure, with an energy barrier of \(\sim\)7 meV. This CO2 structure is obtained by the spontaneous
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline CO phase & c (Å) & a (Å) & \(\Delta_{Fe1}\)/\(\Delta_{Fe3}\)/\(\Delta_{Fe2}\) [Å] & \(\Delta_{tot}\) [Å] \\ \hline \hline CO1 & 13.12 & 5.38 & -0.01/0.01/0.00 & 0.00 \\ CO3 & 13.22 & 5.36 & 0.03/-0.03/0.00 & 0.00 \\ \hline CO2 & 13.19 & 5.37 & 0.01/-0.04/-0.03 & -0.22 \\ CO2 & 13.19 & 5.37 & 0.04/-0.01/0.03 & 0.22 \\ \hline \end{tabular}
\end{table}
Table 1: The relaxed cell parameters (\(a\) and \(c\)), the space group, the Fe plane displacements, and the total displacement (\(\Delta_{tot}\)) of LSFO at each CO phase. Both CO1 and CO3 structures (space group: \(P\bar{3}m1\)) have the inversion symmetry, while CO2 and CO2 structures (space group: \(P3m1\)) do not.
Figure 3: (a) The relative energies per formula unit of CO1, CO2, and CO3 phases as a function of the Hubbard \(U\). (b) Comparison of CINEB and Linear Interpolation energies vs Image structures calculated with DFT+U at \(U\)=3.62 eV and \(J\)=0.724 eV.
displacement of the Fe plane, and it cannot be captured by the linear interpolation method, in which the image structures along the path are obtained by linearly interpolating the atomic positions between CO1 and CO3.
The obtained CO2 structure has small-medium-big FeO\({}_{6}\) octahedra (Fe-O bond order) (see Fig. 5(b)), coupled to magnetic moments of 2.2 (small), 3.0 (medium), and 3.4 \(\mu_{B}\) (big). Unlike CO1 and CO3, CO2 has the space group \(P3m1\) (trigonal, No. 156) with the point group symmetry \(C_{3v}\), which belongs to a polar point group [34]. The reflection of the CO2 supercell about the C plane does not yield the same supercell (see Fig. 2), implying broken inversion symmetry in CO2. Also, the total displacement (\(\Delta_{tot}\)) of the Fe and O planes in CO2 is not zero (see Table 1), resulting in a spontaneous polarization. Remarkably, the CO2 phase is metallic, which is not common for ferroelectric materials due to the screening of the polarization [35; 36]. We also find that another metastable CO2 structure, the inversion partner of CO2, can be obtained by applying the inversion operation about the central plane in Fig. 2; it shows the opposite polarization compared to the CO2 case (see Table 1). The polarizations of these two structures are easily switchable due to the very small energy barrier of \(\sim\)7 meV between them. The existence of ferroelectricity in LSFO was first pointed out by Park _et al._[20], but in that work the structure was obtained manually by exchanging the La and Sr layers.
To address the difference between the CINEB and linear interpolation results, we compare the displacement (\(\Delta_{Fe2}\)) of the Fe2 plane, the total displacement (\(\Delta_{tot}\)) of the Fe and O planes, and the bond angle along O1-Fe2-O2 (\(\angle\)O1-Fe2-O2). Figure 4(a) shows that along the linear interpolation path the displacement of the Fe2 plane, \(\Delta_{Fe2}^{LI}\), and the total displacement of the Fe and O planes, \(\Delta_{tot}^{LI}\), remain zero, while along the CINEB path an abrupt change of the Fe2 plane displacement \(\Delta_{Fe2}^{CINEB}\) and of the total displacement \(\Delta_{tot}^{CINEB}\) occurs at image number 5 and reaches a minimum at image 9, where CO2 is captured. This change of the Fe2 displacement along the CINEB path is also accompanied by a sudden change of the bond angle (\(\angle\)O1-Fe2-O2)\({}_{CINEB}\), while the angle along the linear interpolation path, (\(\angle\)O1-Fe2-O2)\({}_{LI}\), remains 180\({}^{\circ}\) (Figure 4(b)). The existence of this CO2 phase might have been captured by Sabyasachi _et al._ and Yang _et al._, whose neutron diffraction shows multiple \(Q\)-plane magnetic reflections with equivalent intensities [6; 8].
### The dependence of structural parameters on \(U\)
Our structural relaxation results show that the stability of the different CO phases depends sensitively on the Hubbard \(U\) values (\(J=0.2U\)). In general, the correlation effect of the Hubbard \(U\) is important for stabilizing the bond/charge disproportionation in many oxides, including nickelates [37; 38], cobaltates [39], ferrites [40], and manganites [41]. This is because only part of the
Figure 4: Comparison between the CINEB and the Linear Interpolation paths: (a) The displacement of Fe2 plane and the total displacement of Fe and O planes. (b) The bond angle \(\angle\)O1-Fe2-O2.
Figure 5: The \(<\)Fe-O\(>\) mean bond lengths vs the Hubbard \(U\) (\(J=0.2U\)) obtained for the (a) CO1, (b) CO2, and (c) CO3 phases. The vertical dashed lines represent the \(U\) values used for stabilizing each CO phase, namely CO1 (\(U=5.0\) eV), CO2 (\(U=3.62\) eV), and CO3 (\(U=3.0\) eV), as shown in Table 1. The white (shaded) regions represent metallic (insulating) phases.
\(M\) sites undergo the spin-state transition to the HS state with \(M\)-O bond elongation, and more HS sites are populated for stronger \(U\) values. Our calculation confirms that the DFT-relaxed structure of LSFO shows no Fe-O bond disproportionation, consistent with the experimental high-temperature structure, and that increasing \(U\) energetically favors the structures with more HS states in a non-trivial way.
Fig. 5(a) shows that the CO1 structure listed in Table 1 is stable only when \(U>3.7\) eV (\(J\)=0.74 eV), and the structural transition to CO3 occurs along with the insulator-metal transition. The CO2 structure listed in Table 1 is metastable in a narrow \(U\) range of \(3.55\leq U\leq 3.7\) eV and evolves into a distorted CO3 phase as \(U\) becomes lower than 3.55 eV. The CO3 structure is stable in a wide range of \(U\) values, although this phase is energetically lower than the CO1 or CO2 phases when \(U\leq 3.62\) eV. Both the CO2 and CO3 structures converge to the high-temperature structure without the Fe-O disproportionation as \(U\) becomes smaller than 2 eV. We find that the insulating phase in LSFO occurs only in the CO1 structure with \(U>3.7\) eV.
### Electronic Structure and magnetism in LSFO
Here, we investigate the electronic structures of LSFO in the different CO states computed using DFT+\(U\). Due to the AFM structure, the Fe1/Fe2/Fe3 density of states (DOS) is equivalent to the Fe4/Fe5/Fe6 one in LSFO once their spins are flipped. For CO1 and CO3, the crystal structures are centrosymmetric and we show only the Fe1 and Fe2 DOS, since Fe1 (Fe4) and Fe3 (Fe6) are equivalent. To distinguish the importance of electronic correlations from the structural effect, we compare the \(U=3.62\) eV (\(J=0.724\) eV) and \(U=4\) eV (\(J=0.8\) eV) DOS at the fixed structure of each CO phase.
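The projected DOS discussed below is obtained from the converged VASP output; as an illustration, a post-processing sketch with pymatgen could look as follows (file names and the choice of which site corresponds to Fe1 are assumptions).

```python
# Sketch: extract the orbital-projected Fe d DOS from a finished VASP run (assumes pymatgen).
from pymatgen.io.vasp.outputs import Vasprun
from pymatgen.electronic_structure.core import Orbital

vasprun = Vasprun("vasprun.xml", parse_projected_eigen=True)
dos = vasprun.complete_dos

fe_sites = [s for s in vasprun.final_structure if s.specie.symbol == "Fe"]
fe1 = fe_sites[0]  # which entry is "Fe1" depends on the site ordering of the structure (assumption)

t2g = [Orbital.dxy, Orbital.dyz, Orbital.dxz]   # t2g manifold
eg = [Orbital.dz2, Orbital.dx2]                 # eg manifold (dx2 = d_{x^2-y^2} in pymatgen)
projected = {orb: dos.get_site_orbital_dos(fe1, orb) for orb in t2g + eg}

print("Estimated spectral gap (eV):", dos.get_gap())  # ~0.12 eV expected for CO1 at U = 4 eV
```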
At \(U\)=4 eV, the CO1 phase is an insulating state with a spectral gap of \(\sim\)120 meV (see Fig. 6(a)), consistent with the optical gap measured in LSFO at low temperature [13]. In the Fe1 ion, both the e\({}_{g}\) and t\({}_{2g}\) bands are half-filled with a gap size comparable to \(U\), behaving as a typical Mott insulator. However, only the t\({}_{2g}\) bands of Fe2 are half-filled, while the e\({}_{g}\) bands are almost empty (see Fig. 6(a)). This is consistent with the high-spin picture of the charge-ordering state between the Fe1 (\(d^{5}\); t\({}_{2g\uparrow}^{3}\)e\({}_{g\uparrow}^{2}\)) and Fe2 (\(d^{3}\); t\({}_{2g\uparrow}^{3}\)e\({}_{g\uparrow}^{0}\)) ions. As the correlation becomes weaker (\(U\)=3.62 eV), the DOS for CO1 becomes metallic, as the Fe1 e\({}_{g}\) (Fe2 t\({}_{2g}\)) states become less (more) occupied and the spectral gap at the Fermi energy closes.
In CO3 at \(U\)=4 eV, the charge-ordering pattern changes to Fe1 (\(d^{4}\); t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{0}\)) and Fe2 (\(d^{5}\); t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{1}\)). The spin state of Fe1 changes to low spin, while the Fe2 spin is close to an intermediate one. This is because the crystal field splitting of the Fe1 ion becomes larger due to the smaller octahedron size compared to Fe1 in the CO1 phase. As a result, both the Fe1 t\({}_{2g}\) and Fe2 t\({}_{2g}\) states are partially filled and the DOS becomes metallic (see Fig. 6(a)). As the correlation becomes weaker (\(U\)=3.62 eV), the CO3 phase remains metallic.
Similar to CO3, CO2 is metallic at both \(U\)=4 eV and 3.62 eV. As the Fe1 \(d\) DOS of CO2 is similar to the Fe1 \(d\) DOS of CO3, and the Fe1-O bond lengths of CO2 and CO3 are similar to each other as well, we expect that the local electronic configuration of Fe1 should similarly be the low-spin \(d^{4}\) (t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{0}\)). Moreover, the Fe-O bond length of the Fe1 ion is the smallest, while those of the Fe2 and Fe3 ions are close to each other, implying a similar electronic structure between Fe2 and Fe3. Nevertheless, evidence of the CO can be found near \(E\approx-1\) eV, where the occupied Fe3 e\({}_{g}\) states have slightly more DOS than the Fe2 ones, while their t\({}_{2g}\) DOS are similar. This implies that the local electronic configurations of the Fe2 and Fe3 ions should be Fe\({}^{(3.5+\delta)+}\) (t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{0.5-\delta}\)) and Fe\({}^{(3.5-\delta)+}\) (t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{0.5+\delta}\)), respectively.
The calculated magnetic moments of the Fe ions (\(m_{Fe1}\), \(m_{Fe2}\), and \(m_{Fe3}\)) are coupled to the above valence states, and their values for CO1, CO2, and CO3 are shown in Table 2. The magnetic moments in CO1 calculated with DFT+\(U\) (\(U\)=4 eV, \(J\)=0.8 eV) are in good agreement with the experimental ones recently obtained by Li _et al._[11] at low temperature. The calculated value of \(m_{Fe1}\) is reduced (screened) relative to the estimate based on the electronic configuration from the DOS, since we expect \(m_{Fe1}\) (t\({}_{2g\uparrow}^{3}\)e\({}_{g\uparrow}^{2}\)) = 5\(\mu_{B}\), while the \(m_{Fe2}\) value is consistent (t\({}_{2g\uparrow}^{3}\)e\({}_{g\uparrow}^{0}\) = 3\(\mu_{B}\)). The expected moments of the Fe1 and Fe2 ions in CO3 are \(m_{Fe1}\) (t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{0}\)) = 2\(\mu_{B}\) and \(m_{Fe2}\) (t\({}_{2g\uparrow}^{3}\)t\({}_{2g\downarrow}^{1}\)e\({}_{g\uparrow}^{1}\)) = 3\(\mu_{B}\), respectively. However, since CO3 is metallic and the magnetic moments calculated with DFT+\(U\) also depend on \(U\) and \(J\), our calculated moments of 2.42\(\mu_{B}\) and 3.52\(\mu_{B}\) at \(U\)=4 eV are larger than these expected values. We confirmed that the magnetic moments of Fe1 and Fe2 are reduced at \(U\)=3 eV to 2.08 and 3.14 \(\mu_{B}\), respectively, closer to the expected values.
Similarly, the magnetic moment of Fe1 in CO2 calculated with \(U\)=4 eV is 2.70 \(\mu_{B}\), which is still large for an LS state of Fe\({}^{4+}\) (2.0 \(\mu_{B}\)). We find that this moment computed using \(U\)=3.62 eV is more consistent, 2.18 \(\mu_{B}\). For CO2, the magnetic moments \(m_{Fe1}\), \(m_{Fe2}\), and \(m_{Fe3}\) show the small-medium-big pattern, which is consistent with the charge-ordering pattern of Fe\({}^{4+}\)-Fe\({}^{(3.5+\delta)+}\)-Fe\({}^{(3.5-\delta)+}\).
## IV Conclusion
In conclusion, we studied the structural and electronic properties of charge-ordered La-doped SrFeO\({}_{3}\), La\({}_{1/3}\)Sr\({}_{2/3}\)FeO\({}_{3}\) (LSFO), systematically using DFT+\(U\) with antiferromagnetic order. We find that metastable structures with distinct CO phases in LSFO can be obtained by relaxing the structures with different \(U\) values, i.e. by varying the correlation effect. The DFT+\(U\) calculation of LSFO with \(U\)=5 eV captures the low-temperature CO phase (CO1 in the main text) of the big-small-big pattern, where the enhanced charge density is accompanied by a large magnetic moment with Fe-O bond elongation. The ground state is insulating, as the spectral function at the Fermi energy opens a Mott gap driven by the high-spin states of the Fe ions.
As the correlation effect becomes weak upon reducing the \(U\) value in DFT+\(U\), we capture other metastable CO phases with distinct Fe-O bond patterns. One CO phase (CO3 in the main text) shows a crystal structure with the same space group as the CO1 phase, while the CO pattern changes to small-big-small. The other metastable CO phase (CO2 in the main text) is obtained by interpolating the structural path between the CO1 and CO3 phases using the CINEB calculation. Remarkably, the CO2 phase stabilizes a lower-symmetry crystal structure with broken inversion symmetry, and it shows a ferroelectric metallic state driven by the small-medium-big CO pattern. This CO2 phase cannot be captured by the linear interpolation method, as it requires the spontaneous displacement of the Fe ions at the symmetric points. The electronic structures of these metastable CO states are notably changed, as both the CO2 and CO3 phases are metallic while the ground-state CO1 phase is insulating. The energy barrier of this CO2 phase along the structural path is only \(\sim\)7 meV, which can be easily accessible in experiments by applying pressure or optical excitation.
Our results suggest that the strong correlation effect plays an important role in stabilizing multiple CO phases of transition metal oxides with mixed valence and metal-oxygen bond disproportionation. The CINEB method combined with first-principles energy and force calculations can capture such metastable CO phases, whose electronic structures are distinct from the ground state. While DFT+\(U\) is an efficient static method to incorporate the correlation effect, it can generally suffer from convergence problems in systems with multiple correlated states [39; 42]. More advanced first-principles methods such as dynamical mean-field theory (DMFT) can be a promising way to study metastable phases in strongly correlated materials driven by both structural distortions and strong correlations, especially when the CINEB method is combined with energy and force calculations within DMFT [43; 44].
## Acknowledgement
We thank Yue Cao for fruitful discussions. NN, AL, AN, and HP acknowledge financial support from the US
Figure 6: The DOS plots of CO1, CO2, and CO3 phases calculated with DFT+\(U\). (a) \(U=4.0\) eV and \(J=0.8\) eV and (b) \(U=3.62\) eV and \(J=0.724\) eV. Schematic energy diagrams of Fe t\({}_{2g}\) and e\({}_{g}\) orbitals are also shown in the insets.
Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Science and Engineering Division. VS was supported by NSF SI2-SSE Grant 1740112. We gratefully acknowledge the computing resources provided on Bebop, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory.
|
2303.17763 | Two-component model description of Bose-Einstein correlations in pp
collisions at 13 TeV measured by the CMS Collaboration at the LHC | Using the two-component model, we analyze Bose-Einstein correlations in pp
collisions at the center-of-mass energy of 13 TeV, measured by the CMS
Collaboration at the LHC, and compare results with the $\tau$-model. We utilize
data described by the double ratios with an average pair transverse momentum
$0\le k_T\le 1.0$ GeV and six intervals described by the reconstructed charged
particle multiplicity as $N_{\rm trk}^{\rm offline}$. The estimated ranges are
1-4 fm for the magnitude of extension of emitting source expressed by the
exponential function $\exp(-RQ)$ and 0.4-0.5 fm for that by the Gaussian
distribution $\exp(-(RQ)^2))$, respectively. Moreover, we estimate the upper
limits of the 3-pion BEC to test the two-component model and investigate the
role of the long-range correlation. | Takuya Mizoguchi, Seiji Matsumoto, Minoru Biyajima | 2023-03-31T01:40:58Z | http://arxiv.org/abs/2303.17763v2 | # Analysis of CMS collaboration Bose-Einstein correlations at 13 TeV using the two-component model
###### Abstract
Using the two-component model, we analyze Bose-Einstein correlations (BEC) at 13 TeV, measured by the CMS collaboration, and compare results with the \(\tau\)-model. We utilize data described by the double ratios with [\(0\leq k_{T}\leq 1.0\) GeV and six intervals for \(N_{\rm trk}^{\rm offline}\)]. The estimated range is 1-4 fm for the exponential form \(R_{1}\) and 0.4-0.5 fm for the Gaussian form \(R_{2}\). We estimate the upper limits of the 3-pion BEC to test the two-component model and investigate the role of the long-range correlation.
## 1 Introduction
This article investigates the Bose-Einstein correlations (BEC) described by double ratios (DRs) at 13 TeV, obtained by the CMS collaboration [1]. However, CMS only reports \(\chi^{2}/\)ndf values obtained using the \(\tau\)-model. Here, we analyze the DRs [\(0\leq k_{T}\leq 1.0\) GeV, and six intervals \(a\leq N_{\rm trk}^{\rm offline}\leq b\)] obtained by the \(\tau\)-model, as illustrated in Fig. 1. The formula used by CMS in their analysis [2] is
\[F_{\tau}=C[1+\lambda\cos((r_{0}Q)^{2}+\tan(\alpha_{\tau}\pi/4)(Qr)^{\alpha_{\tau}})e^{-(Qr)^{\alpha_{\tau}}}]\cdot(1+\delta Q) \tag{1}\]
where \(\lambda\), \(r_{0}\), \(r\), and \(\alpha_{\tau}\) are parameters introduced in the stable distribution based on stochastic theory, namely the degree of coherence, two interaction ranges, and the characteristic index, respectively (see also Refs. [3, 4]). \(Q=\sqrt{-(k_{1}-k_{2})^{2}}\) is the magnitude of the 4-momentum transfer between the two pions.
Our estimated values are presented in Table 1.
Figure 1: Analysis of the BEC at 13 TeV by Eq. (1).
Table 1 shows that the \(\chi^{2}\)/ndf values obtained from our analysis are consistent with those reported by the CMS collaboration [1].
As indicated in Table 1, the interaction ranges of the Levy-type form \((e^{-(Qr)^{\alpha_{\tau}}})\) increase as the \(N_{\rm trk}^{\rm offline}\) interval increases. The estimated values \(r=20\sim 50\) fm appear large for \(p+p\) collisions at 13 TeV.
This paper investigates this issue from a different perspective, focusing on the collision mechanism. Three processes occur in collisions at the LHC [5, 6, 7]: the non-diffractive process, the single-diffractive dissociation, and the double-diffractive dissociation (DD). BEC are related to the chaotic components of particle production. Since the contribution from the DD is Poissonian [7], it has no effect on the BEC. Thus we adopt the following two-component model [7, 8] (see also the empirical Refs. [9, 10, 11]),
\[{\rm CF}_{\rm II}=1+\lambda_{1}E_{\rm BE_{1}}+\lambda_{2}E_{\rm BE_{2}}. \tag{2}\]
For the exchange functions \(E_{\rm BE_{1}}\) and \(E_{\rm BE_{2}}\), we assign the following two functions [12],
\[\exp(-RQ)\quad\mbox{and}\quad\exp\left(-(RQ)^{2}\right). \tag{3}\]
LRC refers to the long-range correlation. To express the LRC of BEC at 13 TeV, we use the Gaussian form, similar to the analysis of the denominator of the single ratio, \(N_{\rm BG}^{(+-)}\) (see [1]).
\[{\rm LRC}_{\rm(Gauss)}=\frac{C}{1.0+\alpha\exp(-\beta Q^{2})}. \tag{4}\]
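As a concrete illustration of how Eqs. (2)-(4) are used, the sketch below fits the product \({\rm CF}_{\rm II}\times{\rm LRC}_{\rm(Gauss)}\) to a double ratio with scipy; the data arrays are placeholders, the minimizer choice is illustrative, and \(R_{1}\), \(R_{2}\) are kept in GeV\({}^{-1}\) (multiply by \(\hbar c\approx 0.1973\) GeV fm to quote them in fm).

```python
import numpy as np
from scipy.optimize import curve_fit

def two_component_dr(Q, lam1, R1, lam2, R2, C, alpha, beta):
    """Eqs. (2)-(4): CF_II times the Gaussian long-range correlation (Q in GeV, R in GeV^-1)."""
    cf = 1.0 + lam1 * np.exp(-R1 * Q) + lam2 * np.exp(-(R2 * Q) ** 2)
    lrc = C / (1.0 + alpha * np.exp(-beta * Q ** 2))
    return cf * lrc

# Placeholder "data": in practice Q and the double-ratio points come from the measured distributions.
Q = np.linspace(0.02, 2.0, 100)
dr = two_component_dr(Q, 0.6, 10.0, 0.3, 2.5, 1.0, 0.05, 1.0) + np.random.normal(0.0, 0.005, Q.size)

p0 = [0.5, 8.0, 0.3, 2.0, 1.0, 0.0, 1.0]               # initial guess
popt, pcov = curve_fit(two_component_dr, Q, dr, p0=p0)
print("lambda1, R1, lambda2, R2, C, alpha, beta =", np.round(popt, 3))
```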
In the second section, we analyze the BEC at 13 TeV using Eqs. (2)-(4). In the third section, we present our predictions for 3-pion BEC using the two-component model. In the final section, we provide concluding remarks. Appendix A presents an analysis of BEC at 13 TeV using the \(\tau\)-model with Eq. (4). In Appendix B, we reanalyze the CMS BEC at 0.9 and 7 TeV utilizing Eq. (4), because in previous works [7, 8], we used \({\rm LRC}_{\rm(linear)}=C(1+\delta Q)\).
## 2 Analysis of BEC at 13 TeV using Eqs. (2)-(4).
Considering the results for the CMS BEC at 7 TeV [7], we assume a combination of an exponential function and a Gaussian distribution, as this combination has been shown to play an important role in the analysis of the CMS BEC at 0.9 and 7 TeV [7]. (Shimoda et al. [12] investigated a number of other distributions.) Our results are presented in Fig. 2 and Table 2. We observe extraordinary behavior of the LRC in the two intervals [\(0\leq N_{\rm trk}^{\rm offline}\leq 4\) and \(10\leq N_{\rm trk}^{\rm offline}\leq 12\)], as shown in Fig. 3.
As indicated by Fig. 2 and Table 2, the two-component model with Eqs. (2)-(4) effectively characterizes three intervals: \(31\leq N_{\rm trk}^{\rm offline}\leq 33\), \(80\leq N_{\rm trk}^{\rm offline}\leq 84\), and \(105\leq N_{\rm trk}^{\rm offline}\leq 109\).
Figure 2: Analysis of the BEC at 13 TeV by Eqs. (2)–(4).
Among the six intervals shown in Fig. 3, the red (solid) line and green (dashed) line appear to be exceptional.
## 3 Test of the two-component model for 3-pion BEC
Here, we investigate the 3-pion BEC using the two-component model. Since there is currently no information from CMS on the multiplicity distribution \(P(n)\) at 13 TeV, it is challenging to determine the ratio between the contributions of the first and the second components. We use the diagrams in Fig. 4.
Figure 3: Our LRCs for six intervals are shown.
The formula that corresponds to the diagrams in Fig. 4[13, 14, 15] is expressed as
\[F_{i}^{(3)}=1.0+3\lambda_{i}E_{\rm BE_{i}}+2(\lambda_{i}E_{\rm BE_{i}})^{3/2} \tag{5}\]
By assuming an equal weight for the first and the second components, \(F_{1}^{(3)}\) and \(F_{2}^{(3)}\), we obtain the following normalized expression
\[E^{(3+:3-)}=1.0+\frac{1}{2}\left(3\lambda_{1}E_{\rm BE_{1}}+2(\lambda_{1}E_{ \rm BE_{1}})^{3/2}\right)+\frac{1}{2}\left(3\lambda_{2}E_{\rm BE_{2}}+2(\lambda _{2}E_{\rm BE_{2}})^{3/2}\right), \tag{6}\]
where \(\lambda_{1}\), \(\lambda_{2}\), \(R_{1}\), and \(R_{2}\) are fixed using the numerical values in Table 2. Typical figures are presented in Fig. 5. We could calculate the actual ratio if the CMS collaboration reported the multiplicity distributions \(P(n)\)[2], as this would allow us to understand the ensemble property of the BEC through the multiplicity distribution. It is worth noting that the ATLAS collaboration has already observed the multiplicity distributions \(P(n)\) and the BEC at 13 TeV [16; 17; 18].
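Evaluating Eq. (6) is then a direct substitution of the two-pion parameters; a short sketch follows, with placeholder values standing in for the fitted entries of Table 2 (\(R\) again in GeV\({}^{-1}\)).

```python
import numpy as np

def three_pion_upper_limit(Q, lam1, R1, lam2, R2):
    """Eq. (6): equal-weight sum of the first and second components (Q in GeV, R in GeV^-1)."""
    E1 = lam1 * np.exp(-R1 * Q)          # lambda_1 * E_BE1 (exponential component)
    E2 = lam2 * np.exp(-(R2 * Q) ** 2)   # lambda_2 * E_BE2 (Gaussian component)
    return 1.0 + 0.5 * (3.0 * E1 + 2.0 * E1 ** 1.5) + 0.5 * (3.0 * E2 + 2.0 * E2 ** 1.5)

Q = np.linspace(0.0, 1.0, 6)
print(three_pion_upper_limit(Q, lam1=0.6, R1=10.0, lam2=0.3, R2=2.5))  # placeholder parameters
# At Q = 0 the intercept is 1 + (3*lam1 + 2*lam1**1.5)/2 + (3*lam2 + 2*lam2**1.5)/2.
```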
Figure 4: Diagrams for the third-order BEC. The matrix indicates the exchange of identical pions.
Figure 5: Prediction of upper limit of the \(3\pi\) BEC at 13 TeV by means of Eq. (6) with Eqs. (2)-(4).
In the near future, we may be able to further test the two-component model when the CMS collaboration analyzes the 3-\(\pi\) BEC. If we observe the same interaction ranges as in Fig. 2, we could conclude that the two-component model is a viable approach.
## 4 Concluding remarks
* Our analysis of CMS BEC at 13 TeV using the \(\tau\)-model with Eq. (1) confirms the applicability of this model. This is evidenced by the values of \(\chi^{2}\) in Table 1.
* As portrayed in Table 1, the interaction ranges \(r\) in the Levy-type expression \(e^{-(Qr)^{\alpha_{\tau}}}\) increase as the range of the interval \(N_{\rm trk}^{\rm offline}\) increases. However, it appears that the interaction ranges from 30 to 50 fm are large in \(pp\) collisions at 13 TeV.
* To gain a better understanding of the results obtained from the \(\tau\)-model, we have analyzed the BEC using the \(\tau\)-model with Eq. (4). This has led to improved estimations, as shown in Appendix A.
* We look forward to future analyses by the CMS collaboration of the multiplicity distributions and the third-order BEC at 13 TeV.
Hereafter, we summarize the results of the two-component model using Eqs. (2)-(4).
* To investigate the remarks mentioned in C2) above using the two-component model, we utilized Eqs. (2)-(4). Our results are presented in Table 2. The large interaction ranges are approximately 4 fm, and they appear to be reasonable.
* Furthermore, to test the validity of the two-component model, we calculated the 3-pion BEC by making use of the estimated values and the diagrams presented in Fig. 4. Interestingly, as \(N_{\rm trk}^{\rm offline}\) increases, the 3-pion BEC decreases rapidly, due to the changes in the interaction range \(R_{1}\) (1 fm to 4 fm). Moreover, the intercepts at \(Q=0.0\) GeV are about 3.0, reflecting the equal-weight assumption.
* To investigate the role of the \({\rm LRC_{(Gauss)}}\), i.e., Eq. (4), we reanalyzed the BEC at 0.9 and 7 TeV, with the results presented in Appendix B. The estimated \(\chi^{2}\) values became smaller than those obtained with \({\rm LRC_{(linear)}}\)[7].
* As portrayed in Table 2, the BEC in the intervals \(0\leq N_{\rm trk}^{\rm offline}\leq 4\) and \(10\leq N_{\rm trk}^{\rm offline}\leq 12\) cannot be analyzed with better \(\chi^{2}\) values. A more complicated model may be necessary.
_Acknowledgments._ One of the authors (M.B.) would like to thank his colleagues at the Department of Physics, Shinshu University.
## Appendix A Analysis of BEC at 13 TeV using the \(\tau\)-model with Eq. (4)
We are interested in the influence of Eq. (4) on the \(\tau\)-model. To investigate this, we reanalyzed the BEC using the following formula
\[E^{(2+:2-)}/N_{\rm BG}=[1+\lambda\cos((r_{0}Q)^{2}+\tan(\alpha_{\tau}\pi/4)(Qr)^{\alpha_{\tau}})e^{-(Qr)^{\alpha_{\tau}}}]\cdot{\rm LRC_{(Gauss)}} \tag{7}\]
Our findings are presented in Fig. 6 and Table 3. It can be seen that the interaction range \(r\) values are smaller than 10 fm; compare them with those in Table 1.
As illustrated in Fig. 7, the LRC for \(0\leq N_{\rm trk}^{\rm offline}\leq 4\) appears to be singular. The LRC\({}_{\rm(Gauss)}\) is thus not favored in combination with Eq. (1) of the \(\tau\)-model.
## Appendix B Reanalysis of CMS BEC at 0.9 and 7 TeV [2] by LRC, expressed by Eq. (4)
We examined the changes in the values of \(\chi^{2}\) when LRC\({}_{\rm(linear)}\) was replaced with Eq. (4) in the reanalysis of BEC at 0.9 and 7 TeV [2]. Our new results obtained using Eq. (4) are presented in Fig. 8 and Table 4 and compared with those obtained elsewhere [7], where the linear form for the LRC \(=C(1+\delta Q)\) was used. These results are also shown in Table 4. We show the LRCs in Fig. 9.
It can be said that the Gaussian form of the LRC in the two-component model is better than the linear form, because the LRC\({}_{\rm(Gauss)}\) converges to 1.0 in the region \(Q\geq 2.0\) GeV.
|
2302.00136 | Learning Topology-Preserving Data Representations | We propose a method for learning topology-preserving data representations
(dimensionality reduction). The method aims to provide topological similarity
between the data manifold and its latent representation via enforcing the
similarity in topological features (clusters, loops, 2D voids, etc.) and their
localization. The core of the method is the minimization of the Representation
Topology Divergence (RTD) between original high-dimensional data and
low-dimensional representation in latent space. RTD minimization provides
closeness in topological features with strong theoretical guarantees. We
develop a scheme for RTD differentiation and apply it as a loss term for the
autoencoder. The proposed method "RTD-AE" better preserves the global structure
and topology of the data manifold than state-of-the-art competitors as measured
by linear correlation, triplet distance ranking accuracy, and Wasserstein
distance between persistence barcodes. | Ilya Trofimov, Daniil Cherniavskii, Eduard Tulchinskii, Nikita Balabin, Evgeny Burnaev, Serguei Barannikov | 2023-01-31T22:55:04Z | http://arxiv.org/abs/2302.00136v2 | # Learning Topology-Preserving Data Representations
###### Abstract
We propose a method for learning topology-preserving data representations (dimensionality reduction). The method aims to provide topological similarity between the data manifold and its latent representation via enforcing the similarity in topological features (clusters, loops, 2D voids, etc.) and their localization. The core of the method is the minimization of the Representation Topology Divergence (RTD) between original high-dimensional data and low-dimensional representation in latent space. RTD minimization provides closeness in topological features with strong theoretical guarantees. We develop a scheme for RTD differentiation and apply it as a loss term for the autoencoder. The proposed method "RTD-AE" better preserves the global structure and topology of the data manifold than state-of-the-art competitors as measured by linear correlation, triplet distance ranking accuracy, and Wasserstein distance between persistence barcodes.
## 1 Introduction
Dimensionality reduction is a useful tool for data visualization, preprocessing, and exploratory data analysis. Clearly, immersion of high-dimensional data into 2D or 3D space is impossible without distortions, which vary across popular methods. Dimensionality reduction methods can be broadly classified into global and local methods. Classical global methods (PCA, MDS) tend to preserve the global structure of a manifold. However, in many practical applications, the produced visualizations are non-informative since they don't capture complex non-linear structures. Local methods (UMAP (McInnes et al., 2018), PaCMAP (Wang et al., 2021), t-SNE (Van der Maaten & Hinton, 2008), Laplacian Eigenmaps (Belkin & Niyogi, 2001), ISOMAP (Tenenbaum et al., 2000)) focus on preserving neighborhood data and local structure at the cost of sacrificing the global structure. The most popular methods, like t-SNE and UMAP, are a good choice for inferring cluster structure but often fail to correctly describe the data manifold's topology. t-SNE and UMAP have hyperparameters that control the size of the neighborhood taken into account. Different hyperparameter values lead to significantly different visualizations, and none of them is the "canonical" one that correctly represents the high-dimensional data.
We take a different perspective on dimensionality reduction. We propose an approach based on _Topological Data Analysis (TDA)_. Topological Data Analysis (Barannikov, 1994; Zomorodian, 2001; Chazal & Michel, 2017) is a field devoted to the numerical description of multi-scale topological properties of data distributions by analyzing point clouds sampled from them. TDA methods naturally capture properties of data manifolds on multiple distance scales and are arguably a good trade-off between local and global approaches.
The state-of-the-art TDA approach of this kind is TopoAE (Moor et al., 2020). However, it has several weaknesses: 1) the loss term is not continuous; 2) nullity of the loss term is only a necessary, but not a sufficient, condition for the coincidence of topology, as measured by persistence barcodes; see more details in Appendix J.
In our paper, we suggest using the Representation Topology Divergence (RTD) (Barannikov et al., 2022) to produce topology-aware dimensionality reduction. RTD measures the topological discrepancy between two point clouds with a one-to-one correspondence between them and enjoys nice theoretical properties (Section 3.2). The major obstacle to incorporating RTD into deep learning is its differentiation. There exist approaches to the differentiation of barcodes and generic barcode-based functions with respect to deformations of the filtration (Carriere et al., 2021), and to TDA differentiation in special cases (Hofer et al., 2019; Poulenard et al., 2018).
In this paper, we make the following contributions:
1. We develop an approach for RTD differentiation. Topological metrics are difficult to differentiate; the differentiability of RTD and its implementation on GPU is a valuable step forward in the TDA context which opens novel possibilities in topological optimizations;
2. We propose a new method for topology-aware dimensionality reduction: an autoencoder enhanced with the differentiable RTD loss: "RTD-AE". Minimization of RTD loss between real and latent spaces forces closeness in topological features and their localization with strong theoretical guarantees;
3. In computational experiments, we show that the proposed RTD-AE outperforms state-of-the-art methods of dimensionality reduction and the vanilla autoencoder in terms of preserving the global structure and topology of a data manifold; we measure this by the linear correlation, the triplet distance ranking accuracy, the Wasserstein distance between persistence barcodes, and RTD. In some cases, the proposed RTD-AE produces more faithful and visually appealing low-dimensional embeddings than state-of-the-art algorithms. We release the RTD-AE source code (github.com/danchern97/RTD_AE).
## 2 Related work
Various dimensionality reduction methods have been proposed to obtain 2D/3D visualizations of high-dimensional data (Tenenbaum et al., 2000; Belkin and Niyogi, 2001; Van der Maaten and Hinton, 2008; McInnes et al., 2018). Natural science researchers often use dimensionality reduction methods for exploratory data analysis or even to guide further experiments (Becht et al., 2019; Kobak and Berens, 2019; Karlov et al., 2019; Andronov et al., 2021; Szubert et al., 2019). The main problem with these methods is inevitable distortions (Chari et al., 2021; Batson et al., 2021; Wang et al., 2021) and incoherent results for different hyperparameters. These distortions can largely affect the global representation structure, such as inter-cluster relationships and pairwise distances. As the interpretation of these quantities in domains such as physics or biology can lead to incorrect conclusions when they are distorted, it is of high importance to preserve them as much as possible. UMAP and t-SNE visualizations are frequently sporadic and cannot be considered a "canonical" representation of high-dimensional data. An often overlooked issue is the initialization, which significantly contributes to the performance of dimensionality reduction methods (Kobak and Linderman, 2021; Wang et al., 2021). Damrich and Hamprecht (2021) revealed that UMAP's true loss function differs from the one purported by its theory because of negative sampling. There are a number of works that try to tackle the distortion problem and preserve as many inter-data relationships as possible. The authors of PHATE (Moon et al., 2019) and ivis (Szubert et al., 2019) claim that their methods are able to capture local as well as global features, but provide no theoretical guarantees for this. Wagner et al. (2021)
Figure 1: Dimensionality reduction (3D \(\rightarrow\) 2D) on the “Mammoth” dataset. The proposed RTD-AE method better captures both global and local structure.
propose DIPOLE, an approach to dimensionality reduction combining techniques of metric geometry and distributed persistent homology.
From a broader view, deep representation learning is also dedicated to obtaining low-dimensional representations of data. The Autoencoder (Hinton & Salakhutdinov, 2006) and Variational Autoencoder (Kingma & Welling, 2013) are mostly used to learn representations of objects useful for solving downstream tasks or for data generation. They are not designed for data visualization and fail to simultaneously preserve local and global structure in 2D/3D spaces. However, their parametric nature makes them scalable and applicable to large datasets, which is why they are used in methods such as parametric UMAP (Sainburg et al., 2021) and ivis (Szubert et al., 2019), as well as in ours.
Moor et al. (2020) proposed TopoAE, including an additional loss for the autoencoder to preserve topological structures of the input space in latent representations. The topological similarity is achieved by retaining similarity in the multi-scale connectivity information. Our approach has a stronger theoretical foundation and outperforms TopoAE in computational experiments.
An approach for differentiation of persistent homology-based functions was proposed by Carriere et al. (2021). Leygonie et al. (2021) systematize different approaches to the differentiation of persistence-barcode-valued functions and define notions of differentiability for maps to and from the space of persistence barcodes. Luo et al. (2021) proposed a topology-preserving dimensionality reduction method based on a graph autoencoder. Kim et al. (2020) proposed a differentiable topological layer for general deep learning models based on persistence landscapes.
## 3 Preliminaries
### Topological data analysis, persistent homology
Topology is often considered to describe the "shape of data", that is, multi-scale properties of datasets. Topological information is generally recognized to be important for various data analysis problems. According to the commonly assumed manifold hypothesis (Goodfellow et al., 2016), datasets are concentrated near low-dimensional manifolds located in high-dimensional ambient spaces. The standard direction is to study topological features of the underlying manifold. The common approach is to cover the manifold via simplices. Given a threshold \(\alpha\), we take sets of points from the dataset \(X\) that are pairwise closer than \(\alpha\). The family of such sets is called the Vietoris-Rips simplicial complex. For further convenience, we introduce the fully-connected weighted graph \(\mathcal{G}\) whose vertices are the points from \(X\) and whose edges have weights given by the distances between the points. Then, the Vietoris-Rips simplicial complex is defined as:
\[\text{VR}_{\alpha}(\mathcal{G})=\left\{\{i_{0},\ldots,i_{k}\},\,i_{m}\in\text{Vert}(\mathcal{G})\mid m_{i_{l},i_{m}}\leq\alpha\;\;\forall\,l,m\right\},\]
where \(m_{i,j}\) is the distance between points, \(\text{Vert}(\mathcal{G})=\{1,\ldots,|X|\}\) is the vertices set of the graph \(\mathcal{G}\).
For each \(\text{VR}_{\alpha}(\mathcal{G})\), we define the vector space \(C_{k}\), which consists of formal linear combinations of all \(k\)-dimensional simplices from \(\text{VR}_{\alpha}(\mathcal{G})\) with modulo-2 arithmetic. The boundary operator \(\partial_{k}:C_{k}\to C_{k-1}\) maps every simplex to the sum of its facets. One can show that \(\partial_{k}\circ\partial_{k+1}=0\) and the chain complex can be created:
\[\ldots\to C_{k+1}\stackrel{{\partial_{k+1}}}{{\to}}C_{k} \stackrel{{\partial_{k}}}{{\to}}C_{k-1}\to\ldots.\]
The quotient vector space \(H_{k}=ker(\partial_{k})/im(\partial_{k+1})\) is called the \(k\)-th homology group, elements of \(H_{k}\) are called homology classes. The dimension \(\beta_{k}=dim(H_{k})\) is called the \(k\)-th Betti number and it approximates the number of basic topological features of the manifold represented by the point cloud \(X\).
The immediate problem here is the selection of an appropriate \(\alpha\), which is not known beforehand. The standard solution is to analyze all \(\alpha>0\). Obviously, if \(\alpha_{1}\leq\alpha_{2}\leq\ldots\leq\alpha_{m}\), then \(\text{VR}_{\alpha_{1}}(\mathcal{G})\subseteq\text{VR}_{\alpha_{2}}(\mathcal{G})\subseteq\ldots\subseteq\text{VR}_{\alpha_{m}}(\mathcal{G})\); the nested sequence is called a filtration. The evolution of cycles across the nested family of simplicial complexes \(S_{\alpha_{i}}\) is canonically decomposed into "births" and "deaths" of basic topological features, so that a basic feature \(c\) appears in \(H_{k}(S_{\alpha})\) at a specific threshold \(\alpha_{c}\) and disappears at a specific threshold \(\beta_{c}\); \(\beta_{c}-\alpha_{c}\) describes the "lifespan" or persistence of the homology class. The set of the corresponding intervals \([\alpha_{c},\beta_{c}]\) for the basic homology classes from \(H_{k}\) is called the _persistence barcode_; the whole theory is dubbed _persistent homology_ (Chazal & Michel, 2017; Barannikov, 1994; Zomorodian, 2001).
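In practice, such barcodes are computed with standard TDA libraries; the sketch below uses the ripser package on a toy point cloud for illustration (GUDHI or giotto-tda would serve equally well).

```python
import numpy as np
from ripser import ripser  # assumes the ripser.py package is installed

# Toy point cloud: a noisy circle, so a single prominent H1 feature (one loop) is expected.
theta = np.random.uniform(0.0, 2.0 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.randn(200, 2)

diagrams = ripser(X, maxdim=1)["dgms"]   # persistence diagrams for H0 and H1
h1 = diagrams[1]                          # (birth, death) pairs = intervals [alpha_c, beta_c]
lifespans = h1[:, 1] - h1[:, 0]           # persistence beta_c - alpha_c of each feature
print("Longest H1 bar (birth, death):", h1[np.argmax(lifespans)])
```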
### Representation Topology Divergence (RTD)
The classic persistent homology is dedicated to the analysis of a single point cloud \(X\). Recently, Representation Topology Divergence (RTD) (Barannikov et al., 2022) was proposed to measure the dissimilarity in the multi-scale topology between two point clouds \(X,\tilde{X}\) of equal size \(N\) with a one-to-one correspondence between clouds. Let \(\mathcal{G}^{w}\), \(\mathcal{G}^{\tilde{w}}\) be graphs with weights on edges equal to pairwise distances of \(X,\tilde{X}\). To provide the comparison, an auxiliary graph \(\hat{\mathcal{G}}^{w,\tilde{w}}\) with a doubled set of vertices and edge weight matrix \(m(w,\tilde{w})\) is created (see details in Appendix B). The persistence barcode of the graph \(\hat{\mathcal{G}}^{w,\tilde{w}}\) is called the _R-Cross-Barcode_ and it tracks the differences in the multi-scale topology of the two point clouds by comparing their \(\alpha\)-neighborhood graphs for all \(\alpha\).
Here we give a simple example of an R-Cross-Barcode, see also (Cherniavskii et al., 2022). Suppose we have two point clouds \(A\) and \(B\), of seven points each, with distances between points as shown in the top row of Figure 2. Consider the R-Cross-Barcode\({}_{1}\)(A, B), it consists of 4 intervals (the bottom row of the figure). The 4 intervals describe the topological discrepancies between connected components of \(\alpha\)-neighborhood graphs of \(A\) and \(B\).
An interval is opened, i.e. a topological discrepancy appears, at threshold \(\alpha=\tilde{w}_{uv}^{B}\) when in the union of \(\alpha\)-neighborhood graph of \(A\) and \(B\), two vertex sets \(C_{1}\) and \(C_{2}\) disjoint at smaller thresholds, are joined into one connected component by the edge \((uv)\) from \(B\). This interval is closed at threshold \(\alpha=w_{uv^{\prime}}^{A}\), when the two vertex sets \(C_{1}\) and \(C_{2}\) are joined into one connected component in the \(\alpha\)-neighborhood graph of \(A\).
For example, a discrepancy appears at the threshold \(\alpha=0.53\) when the vertex sets \(\{4\}\) and \(\{3,6,7\}\) are joined into one connected component in the union of neighborhood graphs of \(A\) and \(B\) by the edge \((4,7)\). We identify the "death" of this R-Cross-Barcode feature at \(\alpha=0.57\), when these two sets are joined into one connected component in the neighborhood graph of cloud A (via the edge \((4,7)\) in Figure 2 becoming grey).
By definition, \(\text{RTD}_{k}(X,\tilde{X})\) is the sum of intervals' lengths in the _R-Cross-Barcode\({}_{k}(X,\tilde{X})\)_ and measures its closeness to an empty set.
**Proposition 1** (Barannikov et al. (2022)).: _If \(\text{RTD}_{k}(X,\tilde{X})=\text{RTD}_{k}(\tilde{X},X)=0\) for all \(k\geq 1\), then the barcodes of the weighted graphs \(\mathcal{G}^{w}\) and \(\mathcal{G}^{\bar{w}}\) are the same in any degree. Moreover, in this case the topological features are located in the same places: the inclusions \(\text{VR}_{\alpha}(\mathcal{G}^{w})\subseteq\text{VR}_{\alpha}(\mathcal{G}^{ \min(w,\bar{w})})\), \(\text{VR}_{\alpha}(\mathcal{G}^{\bar{w}})\subseteq\text{VR}_{\alpha}(\mathcal{ G}^{\min(w,\bar{w})})\) induce homology isomorphisms for any threshold \(\alpha\)._
The Proposition 1 is a strong basis for topology comparison and optimization. Given a fixed data representation \(X\), how to find \(\tilde{X}\) lying in a different space, and having a topology similar to \(X\), in particular, similar persistence barcodes? Proposition 1 states that it is sufficient to minimize
Figure 2: A graphical representation of an R-Cross-Barcode\({}_{1}(A,B)\) for the point clouds \(A\) and \(B\). The pairwise distance matrices for \(A\) and \(B\) are shown in the top raw. Edges present in the \(\alpha\)-neighborhood graphs for \(B\) but not for \(A\) are colored in red. Edges present in the \(\alpha\)-neighborhood graph for \(A\) are colored in grey. The timeline for appearance-disappearance of topological features distinguishing the two graphs is shown. The appearance-disappearance process is illustrated by the underlying bars, connecting the corresponding thresholds.
\(\sum_{i\geq 1}\left(\text{RTD}_{i}(X,\tilde{X})+\text{RTD}_{i}(\tilde{X},X)\right)\). In most of our experiments we minimized \(\text{RTD}_{1}(X,\tilde{X})+\text{RTD}_{1}(\tilde{X},X)\). \(\text{RTD}_{1}\) can be calculated faster than \(\text{RTD}_{2+}\); moreover, \(\text{RTD}_{2+}\) are often close to zero. To simplify notation, we denote \(\text{RTD}(X,\tilde{X}):=\frac{1}{2}(\text{RTD}_{1}(X,\tilde{X})+\text{RTD}_{1}(\tilde{X},X))\).
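Operationally, the quantity being minimized can be written in a few lines of code once the R-Cross-Barcode itself is available; in the sketch below, `r_cross_barcode` is an assumed helper implementing the auxiliary-graph construction of Appendix B (it is not reproduced here), and only the reduction to the loss value is shown.

```python
import numpy as np

def total_bar_length(cross_barcode):
    """RTD_1 value: sum of interval lengths in an R-Cross-Barcode given as (birth, death) pairs."""
    bars = np.asarray(cross_barcode, dtype=float)
    return 0.0 if bars.size == 0 else float(np.sum(bars[:, 1] - bars[:, 0]))

def rtd(X, X_latent, r_cross_barcode):
    """Symmetrized loss RTD(X, X~) = 1/2 (RTD_1(X, X~) + RTD_1(X~, X)).

    `r_cross_barcode(A, B)` is an assumed helper returning the degree-1 R-Cross-Barcode
    of the auxiliary graph built from the two distance matrices (Appendix B).
    """
    return 0.5 * (total_bar_length(r_cross_barcode(X, X_latent))
                  + total_bar_length(r_cross_barcode(X_latent, X)))
```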
**Comparison with TopoAE loss**. TopoAE (Moor et al., 2020) is the state-of-the-art algorithm for topology-preserving dimensionality reduction. The TopoAE topological loss is based on comparison of minimum spanning trees in \(X\) and \(\tilde{X}\) spaces. However, it has several weak spots. First, when the TopoAE loss is zero there is no guarantee that persistence barcodes of \(X\) and \(\tilde{X}\) coincide. Second, the TopoAE loss can be discontinuous in rather standard situations, see Appendix J. At the same time, RTD loss is continuous, and its nullity guarantees the coincidence of persistence barcodes of \(X\) and \(\tilde{X}\). The continuity of the RTD loss follows from the stability of the R-Cross-Barcode\({}_{k}\) (Proposition 2).
**Proposition 2**.: _(a) For any quadruple of edge weights sets \(w_{ij}\), \(\tilde{w}_{ij}\), \(v_{ij}\), \(\tilde{v}_{ij}\) on \(\mathcal{G}\):_
\(d_{B}(\text{R-Cross-Barcode}_{k}(w,\tilde{w}),\text{R-Cross-Barcode}_{k}(v,\tilde{v}))\leq\max(\max_{ij}\lvert v_{ij}-w_{ij}\rvert,\max_{ij}\lvert\tilde{v}_{ij}-\tilde{w}_{ij}\rvert).\)
_(b) For any pair of edge weights sets \(w_{ij}\), \(\tilde{w}_{ij}\) on \(\mathcal{G}\):_
\[\lVert\text{R-Cross-Barcode}_{k}(w,\tilde{w})\rVert_{B}\leq\max_{ij}\lvert w_{ ij}-\tilde{w}_{ij}\rvert.\]
_(c) The expectation for the bottleneck distance between R-Cross-Barcode\({}_{k}(w,\tilde{w})\) and R-Cross-Barcode\({}_{k}(w^{\prime},\tilde{w})\), where \(w_{ij}=w(x_{i},x_{j})\), \(w^{\prime}_{ij}=w^{\prime}(x_{i},x_{j})\), \(\tilde{w}_{ij}=\tilde{w}(x_{i},x_{j})\), \(w,w^{\prime},\tilde{w}\) is a triple of metrics on a measure space \((\mathcal{X},\mu)\), and \(\tilde{X}=\{x_{1},\ldots,x_{n}\}\), \(x_{i}\in\mathcal{X}\) is a sample from \((\mathcal{X},\mu)\), is upper bounded by Gromov-Wasserstein distance between \(w\) and \(w^{\prime}\):_
\[\int_{\mathcal{X}\times\ldots\times\mathcal{X}}d_{B}(\text{R-Cross-Barcode}_{k }(w,\tilde{w}),\text{R-Cross-Barcode}_{k}(w^{\prime},\tilde{w}))d\mu^{\otimes n }\leq n\,GW(w,w^{\prime}).\]
_(d) The expectation for the bottleneck norm of R-Cross-Barcode\({}_{k}(w,\tilde{w})\) for two weighted graphs with edge weights \(w_{ij}=w(x_{i},x_{j})\), \(\tilde{w}_{ij}=\tilde{w}(x_{i},x_{j})\), where \(w,\tilde{w}\) is a pair of metrics on a measure space \((\mathcal{X},\mu)\), and \(X=\{x_{1},\ldots,x_{n}\}\), \(x_{i}\in\mathcal{X}\) is a sample from \((\mathcal{X},\mu)\), is upper bounded by Gromov-Wasserstein distance between \(w\) and \(\tilde{w}\):_
\[\int_{\mathcal{X}\times\ldots\times\mathcal{X}}\lVert\text{R-Cross-Barcode}_{k }(w,\tilde{w})\rVert_{B}d\mu^{\otimes n}\leq n\,GW(w,\tilde{w}).\]
The proofs are given in Appendix K.
## 4 Method
### Differentiation of RTD
We propose to use RTD as a loss in neural networks. Here we describe our approach to RTD differentiation. Denote by \(\Sigma_{k}\) the set of all \(k-\)simplices in the Vietoris-Rips complex of the graph \(\hat{\mathcal{G}}^{w,\tilde{w}}\), and by \(\mathcal{T}_{k}\) the set of all intervals in the _R-Cross-Barcode\({}_{k}(X,\tilde{X})\)_. Fix (an arbitrary) strict order on \(\mathcal{T}_{k}\). There exists a function \(f_{k}:\ \cup_{(b_{i},d_{i})\in\mathcal{T}_{k}}\{b_{i},d_{i}\}\to\Sigma_{k}\) that maps \(b_{i}\) (or \(d_{i}\)) to a simplex \(\sigma\) whose appearance leads to "birth" (or "death") of the corresponding homological class. Let
\[m_{\sigma}=\max_{i,j\in\sigma}m_{i,j}\]
denote the filtration value at which the simplex \(\sigma\) joins the filtration, viewed as a function of the entries \(m_{i,j}\). Since \(\frac{\partial\text{ RTD}_{k}(X,\tilde{X})}{\partial d_{i}}=-\frac{\partial\text{ RTD}_{k}(X,\tilde{X})}{\partial b_{i}}=1\), we obtain the following expression for the subgradient
\[\frac{\partial\text{ RTD}_{k}(X,\tilde{X})}{\partial m_{\sigma}}=\sum_{i\in \mathcal{T}_{k}}\mathbb{I}\{f_{k}(d_{i})=\sigma\}-\sum_{i\in\mathcal{T}_{k}} \mathbb{I}\{f_{k}(b_{i})=\sigma\}.\]
Here, for any \(\sigma\) no more than one term has non-zero indicator. Then
\[\frac{\partial\text{ RTD}_{k}(X,\tilde{X})}{\partial m_{i,j}}=\sum_{\sigma\in \Sigma_{k}}\frac{\partial\text{ RTD}_{k}(X,\tilde{X})}{\partial m_{\sigma}}\frac{\partial m_{\sigma}}{ \partial m_{i,j}}\]
Figure 3: RTD Autoencoder
It remains to obtain the subgradients of \(\text{RTD}(X,\tilde{X})\) with respect to the points of \(X\) and \(\tilde{X}\). Consider an (arbitrary) element \(m_{i,j}\) of the matrix \(m\). There are 4 possible scenarios:
1. \(i,j\leq N\), in other words \(m_{i,j}\) is from the upper-left quadrant of \(m\). Its value is constant and thus \(\forall l:\frac{\partial m_{i,j}}{\partial X_{l}}=\frac{\partial m_{i,j}}{\partial\tilde{X}_{l}}=0\).
2. \(i\leq N<j\), in other words \(m_{i,j}\) is from the upper-right quadrant of \(m\). Its value is computed as a Euclidean distance and thus \(\frac{\partial m_{i,j}}{\partial X_{i}}=\frac{X_{i}-X_{j-N}}{||X_{i}-X_{j-N}||_{2}}\) (similarly for \(X_{j-N}\)).
3. \(j\leq N<i\), similar to the previous case.
4. \(N<i,j\), in other words \(m_{i,j}\) is from the bottom-right quadrant of \(m\). Here we have subgradients like \[\frac{\partial m_{i,j}}{\partial X_{i-N}}=\frac{X_{i-N}-X_{j-N}}{||X_{i-N}-X_{j-N}||_{2}}\mathbb{I}\{w_{i-N,j-N}<\tilde{w}_{i-N,j-N}\}\] Similarly for \(X_{j-N}\), \(\tilde{X}_{i-N}\) and \(\tilde{X}_{j-N}\).
Subgradients \(\frac{\partial\text{RTD}(X,\tilde{X})}{\partial X_{i}}\) and \(\frac{\partial\text{RTD}(X,\tilde{X})}{\partial\tilde{X}_{i}}\) can be derived from the above using the chain rule and the formula for the full (sub)gradient. We can now minimize \(\text{RTD}(X,\tilde{X})\) with (sub)gradient-based optimization methods. We discuss some possible tricks for improving RTD differentiation in Appendix I.
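In practice, the same subgradients can be obtained with automatic differentiation: once the persistence pairing of the R-Cross-Barcode is fixed, each bar length is a difference of two filtration values, each of which is a maximum of entries of \(m\). The sketch below assumes that \(m\) has been assembled from \(X\) and \(Z\) with differentiable operations (the four quadrant cases above) and that the birth/death simplices of each bar are provided by a persistent-homology routine (a hypothetical helper here) and treated as constants during backpropagation.

```python
import torch

def simplex_filtration_value(m, simplex):
    # Filtration value m_sigma = max over vertex pairs of the simplex.
    idx = torch.as_tensor(simplex, dtype=torch.long)
    return m[idx][:, idx].max()

def rtd_k_loss(m, pairs):
    # m: (2N, 2N) tensor of filtration weights built with differentiable ops.
    # pairs: list of (birth_simplex, death_simplex) for the bars of the
    #        R-Cross-Barcode_k, treated as constant during backprop.
    loss = m.new_zeros(())
    for birth, death in pairs:
        loss = loss + simplex_filtration_value(m, death) \
                    - simplex_filtration_value(m, birth)
    return loss  # loss.backward() reproduces the subgradients derived above
```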
### RTD Autoencoder
Given the data \(X=\{x_{i}\}_{i=1}^{n}\), \(x_{i}\in\mathbb{R}^{d}\), in a high-dimensional space, our goal is to find a representation in a low-dimensional space \(Z=\{z_{i}\}\), \(z_{i}\in\mathbb{R}^{p}\). For visualization purposes, \(p=2,3\). Our idea is to find a representation \(Z\) which preserves _persistence barcodes_, that is, the multi-scale topological properties of the point clouds, as much as possible. The straightforward approach is to solve \(\min_{Z}\text{RTD}(X,Z)\), where the optimization is performed over the \(n\) vectors \(z_{i}\in\mathbb{R}^{p}\), in a flavor similar to UMAP and t-SNE. This approach works but is very time-consuming and can be applied only to small datasets, see Appendix F. A practical solution is to learn representations via the encoder network \(E(w,x):X\to Z\), see Figure 3.
**Algorithm**. Initially, we train the autoencoder for \(E_{1}\) epochs with the reconstruction loss \(\frac{1}{2}||X-X_{rec}||^{2}\) only. Then, we train for \(E_{2}\) epochs with the loss \(\frac{1}{2}||X-X_{rec}||^{2}+\text{RTD}(X,Z)\). Both losses are calculated on mini-batches. This two-step procedure speeds up training, since calculating \(\text{RTD}(X,Z)\) for the untrained network is time-consuming.
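A minimal sketch of this training schedule in PyTorch, assuming `rtd_loss(x, z)` is a differentiable RTD implementation as in Section 4.1 and `encoder`/`decoder` are any suitable networks, could look as follows:

```python
import torch

def train_rtd_ae(encoder, decoder, loader, rtd_loss, e1, e2, lr=1e-3):
    """Two-stage schedule: reconstruction-only warm-up for e1 epochs,
    then reconstruction + RTD(X, Z) for e2 epochs, on mini-batches."""
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for epoch in range(e1 + e2):
        for x in loader:                      # x: (batch, d) mini-batch
            z = encoder(x)                    # latent representation Z
            x_rec = decoder(z)
            loss = 0.5 * ((x - x_rec) ** 2).sum(dim=1).mean()
            if epoch >= e1:                   # add the topological term
                loss = loss + rtd_loss(x, z)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder, decoder
```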
Figure 4: Results on dimensionality reduction to 3D-space
## 5 Experiments
In computational experiments, we perform dimensionality reduction both to higher-dimensional (16D) latent spaces and to 2D/3D spaces, the latter for ease of visualization. We compare the original data with the latent representations by (1) linear correlation of pairwise distances, (2) Wasserstein distance (W.D.) between \(H_{0}\) persistence barcodes (Chazal & Michel, 2017), (3) triplet distance ranking accuracy (Wang et al., 2021), and (4) RTD. All of the quality measures are tailored to evaluate how well the manifold's global structure and topology are preserved. We note that RTD, as a quality measure, provides a more precise comparison of topology than the W.D. between \(H_{0}\) persistence barcodes. First, RTD takes into account the localization of topological features, while W.D. does not. Second, W.D. is invariant to permutations of points, whereas we are interested in comparing original data and latent representations, between which a natural one-to-one correspondence holds.
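Two of these measures are simple to compute; the sketch below gives one possible implementation of the linear correlation of pairwise distances and of triplet distance ranking accuracy (random triplets, Euclidean distances), which is not necessarily the exact protocol of the referenced works.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def pairwise_distance_correlation(x, z):
    # Pearson correlation between pairwise distances in the original
    # space and in the latent space (x, z: numpy arrays of points).
    return pearsonr(pdist(x), pdist(z))[0]

def triplet_accuracy(x, z, n_triplets=5000, seed=0):
    # Fraction of random triplets (i, j, k) for which the latent space
    # preserves the ordering of d(i, j) vs. d(i, k) from the original space.
    rng = np.random.default_rng(seed)
    i, j, k = rng.integers(0, len(x), size=(3, n_triplets))
    orig = np.linalg.norm(x[i] - x[j], axis=1) < np.linalg.norm(x[i] - x[k], axis=1)
    lat = np.linalg.norm(z[i] - z[j], axis=1) < np.linalg.norm(z[i] - z[k], axis=1)
    return float((orig == lat).mean())
```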
We compare the proposed RTD-AE with t-SNE (Van der Maaten & Hinton, 2008), UMAP (McInnes et al., 2018), TopoAE (Moor et al., 2020), a vanilla autoencoder (AE), PHATE (Moon et al., 2019), Ivis (Szubert & Drozdov, 2019), and PaCMAP (Wang et al., 2021). The complete description of all the used datasets can be found in Appendix L. See hyperparameters in Appendix H.
### Synthetic datasets
We start with the synthetic dataset "Spheres": eleven 100D spheres in 101D space; no two of them intersect, and one of the spheres contains all the others inside. For visualization, we perform dimensionality reduction to 3D space. Figure 4 shows the results: RTD-AE best preserves the nestedness of the "Spheres" dataset. RTD-AE also outperforms the other methods on the quality measures, see Table 1. We were unable to run MDS on the "Spheres" dataset because it was too large for that method. See more results in Appendix M.
### Real world datasets
We performed experiments with a number of real-world datasets: MNIST (LeCun et al., 1998), F-MNIST (Xiao et al., 2017), COIL-20 (Nene et al., 1996), scRNA mice (Yuan et al., 2017), and scRNA melanoma (Tirosh et al., 2016), with latent dimensions of 16 and 2, see Tables 2, 5. The choice of scRNA datasets was motivated by the increased importance of dimensionality reduction methods in the natural sciences, as mentioned previously. RTD-AE is consistently better than its competitors; moreover, the gap in the metrics for latent dimension 16 is larger than that for latent dimension 2 (see Appendix D). For latent dimension 2, RTD-AE ranks first or second among the methods on the quality measures (see Table 5, Figure 7 in Appendix D). We conclude that the proposed RTD-AE does a good job of preserving the global structure of data manifolds.
| Dataset | Method | L. C. | W. D. \(H_{0}\) | T. A. | RTD |
| --- | --- | --- | --- | --- | --- |
| Spheres 3D | t-SNE | 0.087 | 47.89 ± 2.59 | 0.206 ± 0.01 | 37.32 ± 1.44 |
| | UMAP | 0.049 | 48.31 ± 1.83 | 0.313 ± 0.03 | 44.70 ± 1.47 |
| | PaCMAP | 0.394 | 46.48 ± 1.61 | 0.156 ± 0.02 | 45.88 ± 1.51 |
| | PHATE | 0.302 | 48.78 ± 1.65 | 0.207 ± 0.02 | 44.05 ± 1.42 |
| | PCA | 0.155 | 47.15 ± 1.89 | 0.174 ± 0.02 | 38.96 ± 1.25 |
| | MDS | N.A. | N.A. | N.A. | N.A. |
| | Ivis | 0.257 | 46.32 ± 2.04 | 0.130 ± 0.01 | 41.15 ± 1.28 |
| | AE | 0.441 | **45.07 ± 2.27** | 0.333 ± 0.02 | 39.64 ± 1.45 |
| | TopoAE | 0.424 | 45.89 ± 2.35 | 0.274 ± 0.02 | 38.49 ± 1.59 |
| | RTD-AE | **0.633** | **45.02 ± 2.69** | **0.346 ± 0.02** | **35.80 ± 1.63** |

Table 1: Quality of data manifold global structure preservation at projection from 101D into 3D space.
For the "Mammoth" (Coenen and Pearce, 2019b) dataset (Figure 1) we did dimensionality reduction 3D \(\rightarrow\) 2D. Besides good quality measures, RTD-AE produced an appealing 2D visualization: both large-scale (shape) and low-scale (chest bones, toes, tussk) features are preserved.
### Analysis of distortions
Next, to study the distortions produced by various dimensionality reduction methods, we learn a transformation from 2D to 2D space, see Figure 5. Here, we observe that RTD-AE in general recovers the global structure for all of the datasets. RTD-AE typically does not suffer from the squeezing (or bottleneck) issue, unlike AE, which is noticeable on "Random", "3 Clusters" and "Circle". Whereas t-SNE and UMAP struggle to preserve cluster densities and inter-cluster distances, RTD-AE manages to do so in every case. Unlike t-SNE, it does not cluster random points together. Finally, the overall shape of the representations produced by RTD-AE is consistent: it does not tear apart close points, which UMAP does in some cases, as shown on the "Circle" dataset. The metrics, presented in Table 6 in Appendix E, also confirm the statements above. RTD-AE typically has higher pairwise distance linear correlation and triplet accuracy, which accounts for good multi-scale properties, while having a lower Wasserstein distance between persistence barcodes.
| Dataset | Method | L. C. | W. D. \(H_{0}\) | T. A. | RTD |
| --- | --- | --- | --- | --- | --- |
| F-MNIST | UMAP | 0.602 | 592.0 ± 3.9 | 0.741 ± 0.018 | 12.31 ± 0.44 |
| | PaCMAP | 0.600 | 585.9 ± 3.2 | 0.741 ± 0.013 | 12.72 ± 0.48 |
| | Ivis | 0.582 | 552.6 ± 3.5 | 0.718 ± 0.014 | 10.76 ± 0.30 |
| | PHATE | 0.603 | 576.4 ± 4.4 | 0.756 ± 0.016 | 10.72 ± 0.15 |
| | AE | 0.879 | 320.5 ± 1.9 | 0.850 ± 0.004 | 5.52 ± 0.17 |
| | TopoAE | 0.905 | 190.7 ± 1.2 | 0.867 ± 0.006 | 3.69 ± 0.24 |
| | RTD-AE | **0.960** | **181.2 ± 0.8** | **0.907 ± 0.004** | **3.01 ± 0.13** |
| MNIST | UMAP | 0.427 | 879.1 ± 5.6 | 0.625 ± 0.016 | 17.62 ± 0.73 |
| | PaCMAP | 0.410 | 887.5 ± 6.1 | 0.644 ± 0.012 | 20.07 ± 0.70 |
| | Ivis | 0.423 | 712.6 ± 5.0 | 0.668 ± 0.013 | 12.40 ± 0.32 |
| | PHATE | 0.358 | 819.5 ± 4.0 | 0.626 ± 0.018 | 15.01 ± 0.25 |
| | AE | 0.773 | 391.0 ± 2.9 | 0.771 ± 0.010 | 7.22 ± 0.14 |
| | TopoAE | 0.801 | 367.5 ± 1.9 | 0.796 ± 0.014 | 5.84 ± 0.19 |
| | RTD-AE | **0.879** | **329.6 ± 2.6** | **0.833 ± 0.006** | **4.15 ± 0.18** |
| COIL-20 | UMAP | 0.301 | 274.7 ± 0.0 | 0.574 ± 0.011 | 15.99 ± 0.52 |
| | PaCMAP | 0.230 | 273.5 ± 0.0 | 0.548 ± 0.012 | 15.18 ± 0.35 |
| | Ivis | N.A. | N.A. | N.A. | N.A. |
| | PHATE | 0.396 | 250.7 ± 0.000 | 0.575 ± 0.014 | 13.76 ± 0.78 |
| | AE | 0.834 | 183.6 ± 0.0 | 0.809 ± 0.008 | 8.35 ± 0.15 |
| | TopoAE | 0.910 | 148.0 ± 0.0 | 0.822 ± 0.020 | 6.90 ± 0.19 |
| | RTD-AE | **0.944** | **88.9 ± 0.0** | **0.892 ± 0.007** | **5.78 ± 0.10** |
| scRNA mice | UMAP | 0.560 | 1141.0 ± 0.0 | 0.712 ± 0.010 | 21.30 ± 0.17 |
| | PaCMAP | 0.496 | 1161.3 ± 0.0 | 0.674 ± 0.016 | 21.89 ± 0.13 |
| | Ivis | 0.401 | 1082.6 ± 0.0 | 0.636 ± 0.007 | 22.56 ± 1.13 |
| | PHATE | 0.489 | 1134.6 ± 0.0 | 0.722 ± 0.013 | 21.34 ± 0.32 |
| | AE | 0.710 | 1109.2 ± 0.0 | 0.788 ± 0.013 | 20.80 ± 0.16 |
| | TopoAE | 0.634 | **826.0 ± 0.0** | 0.748 ± 0.010 | **15.37 ± 0.22** |
| | RTD-AE | **0.777** | 932.9 ± 0.0 | **0.802 ± 0.006** | 17.03 ± 0.15 |
| scRNA melanoma | UMAP | 0.474 | 1416.9 ± 9.2 | 0.682 ± 0.013 | 20.02 ± 0.35 |
| | PaCMAP | 0.357 | 1441.8 ± 9.1 | 0.681 ± 0.014 | 20.53 ± 0.36 |
| | Ivis | 0.465 | 1168.0 ± 11.4 | 0.653 ± 0.016 | 16.31 ± 0.28 |
| | PHATE | 0.427 | 1427.5 ± 9.1 | 0.687 ± 0.018 | 20.18 ± 0.41 |
| | AE | 0.458 | 1345.9 ± 11.3 | 0.708 ± 0.016 | 19.50 ± 0.37 |
| | TopoAE | 0.544 | 973.7 ± 11.1 | 0.709 ± 0.011 | 13.41 ± 0.35 |
| | RTD-AE | **0.684** | **769.5 ± 11.5** | **0.728 ± 0.017** | **10.35 ± 0.33** |

Table 2: Quality of data manifold global structure preservation at projection into 16D space.
### Limitations and computational complexity
The main source of complexity is the RTD computation. For batch size \(b\), object dimensionality \(d\) and latent dimensionality \(k\), the complexity is \(O(b^{2}(d+k))\) operations, since all pairwise distances must be calculated. The R-Cross-Barcode computation is at worst cubic in the number of simplices involved. However, the computation is often quite fast for batch sizes \(\leq 256\), since the boundary matrix is typically sparse for real datasets. The selection of simplices whose addition leads to the "birth" or "death" of the corresponding homological class does not take extra time. For RTD calculation and differentiation, we used GPU-optimized software. As the calculation relies heavily on the batch size, the training time of RTD-AE ranges from 1.5x the time of the basic autoencoder at batch size 8 to 4-6x at batch size 512. For COIL-20, it took \(\sim\)10 minutes to train a basic AE and \(\sim\)20 minutes for RTD-AE. Overall, the computation of an R-Cross-Barcode takes a similar amount of time even on datasets of high dimensionality.
### Discussion
Experimental results show that RTD-AE better preserves the data manifold global structure than its competitors. The most interesting comparison is with TopoAE, the state-of-the-art, which uses an alternative topology-preserving loss. The measures of interest for topology comparison are the Wasserstein distances between persistence barcodes. Tables 2, 6, 5 show that RTD-AE is better than TopoAE. RTD minimization has a stronger theoretical foundation than the loss from TopoAE (see Section 3.2).
## 6 Conclusions
In this paper, we have proposed an approach for topology-preserving representation learning (dimensionality reduction). The topological similarity between data points in original and latent spaces is achieved by minimizing the Representation Topology Divergence (RTD) between original data and latent representations. Our approach is theoretically sound: RTD=0 means that persistence barcodes of any degree coincide and the topological features are located in the same places. We proposed how to make RTD differentiable and implemented it as an additional loss to the autoencoder, constructing RTD-autoencoder (RTD-AE). Computational experiments show that the proposed RTD-AE better preserves the global structure of the data manifold (as measured by linear correlation, triplet distance ranking accuracy, Wasserstein distance between persistence barcodes) than popular methods t-SNE and UMAP. Also, we achieve higher topological similarity than the alternative TopoAE method. Of course, the application of RTD loss is not limited to autoencoders and we expect more deep learning applications involving one-to-one correspondence between points. The main limitation is that calculation of persistence barcodes and RTD, in particular, is computationally demanding. We see here another opportunity for further research.
Figure 5: Results on synthetic 2D data. First column: original data. Other columns: results of dimensionality reduction methods.
## Acknowledgements
The work was supported by the Analytical center under the RF Government (subsidy agreement 000000D730321P5Q0002, Grant No. 70-2021-00145 02.11.2021)
## Reproducibility Statement
To provide reproducibility, we release the source code of the proposed RTD-AE, see section 1, for hyperparameters see Appendix H. For other methods, we used either official implementations or implementations from scikit-learn with default hyperparameters. We used public datasets (see Section 5, Appendix L). We generated several synthetic datasets and made the generating code available.
|
2309.12898 | Earthquake-like dynamics in ultrathin magnetic film | We study the motion of a domain wall on an ultrathin magnetic film using the
magneto-optical Kerr effect (MOKE). At tiny magnetic fields, the wall creeps
only via thermal activation over the pinning centers present in the sample. Our
results show that this creep dynamics is highly intermittent and correlated. A
localized instability triggers a cascade, akin to aftershocks following a large
earthquake, where the pinned wall undergoes large reorganizations in a compact
active region for a few seconds. Surprisingly, the size and shape of these
reorganizations display the same scale-free statistics of the depinning
avalanches in agreement with the quenched Kardar-Parisi-Zhang universality
class. | Gianfranco Durin, Vincenzo Maria Schimmenti, Marco Baiesi, Arianna Casiraghi, Alessandro Magni, Liza Herrera-Diez, Dafiné Ravelosona, Laura Foini, Alberto Rosso | 2023-09-22T14:36:55Z | http://arxiv.org/abs/2309.12898v1 | # Earthquake-like dynamics in ultrathin magnetic film
###### Abstract
We study the motion of a domain wall on an ultrathin magnetic film using the magneto-optical Kerr effect (MOKE). At tiny magnetic fields, the wall creeps only via thermal activation over the pinning centers present in the sample. Our results show that this creep dynamics is highly intermittent and correlated. A localized instability triggers a cascade, akin to aftershocks following a large earthquake, where the pinned wall undergoes large reorganizations in a compact active region for a few seconds. Surprisingly, the size and shape of these reorganizations display the same scale-free statistics of the depinning avalanches in agreement with the quenched Kardar-Parisi-Zhang universality class.
pacs: An important class of future spintronic nanoelectronic devices is based on fully controlling magnetic domain walls in ultrathin films [1; 2]. When used as memory devices, for instance, it is fundamental to control their position stability and understand their dynamics under a small perturbation. It is well known that defects naturally present in the nanostructure can pin the domain wall. Consequently, the wall creeps at a finite temperature, with a velocity strongly vanishing with the applied magnetic field. In ultrathin magnetic films, the creep regime holds up to room temperature and well below the depinning field \(H_{\rm dep}\). After an initial transient, the wall moves with a small, steady velocity given by the celebrated _creep formula_:
\[\ln v(H)=-\left(\frac{H}{H_{0}}\right)^{-1/4}+\ln v_{0} \tag{1}\]
Here, \(H_{0}\) and \(v_{0}\) are material- and temperature-dependent parameters. The exponent \(1/4\) is instead universal and is the true hallmark of creep dynamics. It was first predicted in [3], measured in [4] over several decades of velocity, and then confirmed in many experiments [5; 6]. Despite this success, the nature of the creep dynamics remains controversial. In particular, several hypotheses are made about the length scales involved and about the shape of the wall.
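As a quick illustration of how Eq. (1) is used in practice, the \(1/4\) exponent can be checked by fitting \(\ln v\) linearly against \(H^{-1/4}\); the sketch below uses placeholder velocity values, not measured data.

```python
import numpy as np

# Placeholder data (illustrative only): applied fields in mT and the
# corresponding steady creep velocities in micrometers per second.
H = np.array([0.13, 0.14, 0.15, 0.16])
v = np.array([0.010, 0.018, 0.031, 0.050])

# Eq. (1) predicts ln v to be linear in H^(-1/4).
x = H ** (-0.25)
slope, intercept = np.polyfit(x, np.log(v), 1)
H0 = slope ** 4            # slope = -H0^(1/4)
v0 = np.exp(intercept)
```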
The original derivation of the creep formula assumes that the small magnetic field tilts the minima but leaves the landscape locally at thermal equilibrium. Within this picture, one finds a single length \(L_{\rm opt}\), the scale of the reorganization needed to overcome the optimal energy barriers. Under this assumption, one can estimate that \(L_{\rm opt}\sim H^{-3/4}\) and the corresponding energy barriers grow as \(H^{-1/4}\)[3; 7; 8; 9]. Below the scale \(L_{\rm opt}\), the dynamics is thus purely thermal, characterized by an incoherent back-and-forth motion. Above \(L_{\rm opt}\) instead, the wall never comes back and undergoes a novel slow reorganization of size \(L_{\rm opt}\) in a different location [3].
Further studies, based on functional renormalization group (FRG) [10] and numerical simulations at infinitesimal temperature \(T\to 0^{+}\)[11; 12; 9; 13], have proposed a different scenario. The activation over a size \(L_{\rm opt}\) destabilizes the local energy landscape and reduces the size of the energy barriers. Similarly to what is observed in earthquakes, the jump of size \(L_{\rm opt}\) acts as the mainshock that produces a cascade of aftershocks of smaller size [14; 15; 16; 17]. Hence, the region undergoes a much larger reorganization and mimics the depinning avalanches belonging to the quenched Edwards-Wilkinson (qEW) universality class [18; 19]. The scale-free statistics of these avalanches is valid up to a length \(L_{\rm av}\) much more extensive than \(L_{\rm opt}\) and controlled by the finite values of the temperature and the field.
Interestingly, this scenario has strong connections with the thermal facilitation proposed to justify the dynamical heterogeneity in glass-forming liquids [20; 21]. The mechanism is similar: the slow relaxation time is dominated by localized slow events that nucleate large responses on a much larger scale. However, experimental evidence of these large reorganizations is still lacking.
In this paper, we report the full dynamical evolution of a domain wall using the magneto-optical Kerr effect (MOKE) on an ultrathin magnetic thin film. As it is clear from the movie in [22], the dynamics is intermittent and correlated. Our analysis demonstrates that the correlations are on scales much larger than \(L_{\rm opt}\) and that the
destabilization and reorganization are governed by the depinning critical point, displaying scale-free statistics with exponents in agreement with the quenched Kardar-Parisi-Zhang (qKPZ) universality class.
_Experimental setting._ -- Field-driven domain wall dynamics is investigated in a Ta(5)/CoFeB(1)/MgO(2)/Ta(3) (thickness in nm) thin film with perpendicular magnetic anisotropy (PMA) [22]. This material is typically very soft, exhibiting a depinning field of the order of 10 mT. The low density of pinning defects with respect to other PMA systems, such as Co/Pt and Co/Ni multilayers, makes it a good candidate to study domain wall dynamics [23]. The competition between domain wall elasticity and the local disorder results in a thermally activated creep motion for driving fields up to the depinning field [24]. A magnetic bubble domain is initially nucleated with a \(\sim 30\)\(\mu\)m radius in the pre-saturated film through a short field pulse. The subsequent slow expansion occurs under a small continuous perpendicular applied field. Here we use \(H=0.13,0.14,0.15,0.16\) mT, corresponding to \(<2\,\%\) of \(H_{\text{dep}}\). This ultra-slow creep dynamics is captured through MOKE microscopy. MOKE images with a spatial resolution of 400 nm are acquired every 200 ms until the bubble radius has increased to about 100 \(\mu\)m. Even at the lowest applied field, the bubble domain conserves its circular shape and boundary smoothness upon expansion, indicating weak random pinning. The limitations in the spatial resolution and in the acquisition rate do not allow us to detect the fast dynamics of the domain wall at the nanoscale, but we can resolve the motion of the wall by estimating the time at which each pixel changes its gray level (see section 1 of [22] for a detailed description of the procedure). Remarkably, the set of switched pixels between two consecutive images is always connected in space, and we define it as a single _frame event_.
_Analysis of experimental spatiotemporal patterns._ -- The dynamics of the domain observed frame by frame displays two important features:
* The bubble always expands and never comes back. As shown in Fig. 1 (b), the position of the interface \(R(\theta,t)\) along any direction is a non-decreasing function of time. Moreover, after an initial transient, the velocity of the wall decays to its steady value \(\bar{v}\). However, the local velocity (inset) displays strong intermittency in time.
* The motion presents spatial correlations well above the pixel size. Indeed, Fig. 1 (a) shows that each frame event corresponds to a compact spatial region, and events of subsequent frames tend to cluster in space. See the movie in [22] to visualize the full dynamics of the bubble.
Figure 1: (a): The evolution of a portion (in pink) of the wall during 8 sec. The sequence of the frame events is organized in two distinct clusters (in blue and red). The color gradient represents the progression of time; this evolution is much faster than the steady motion at \(\bar{v}\). (b): Time evolution of the wall position \(R(\theta,t)\) along three directions (the gray shadow indicates the total spreading). After an initial transient of \(t\sim 300\,s\), the velocity of the wall decays to its steady value (e.g. \(\bar{v}\sim 0.018\,\mu m/s\) for \(H=0.14\) mT). Inset: Local velocity along three directions. For a given direction \(\theta\) we find the times \(t_{1},t_{2},\dots t_{i}\dots\) where \(R(\theta,t)\) changes value. The velocity \(v(\theta,t)\) for \(t\in[t_{i},t_{i+1}]\) is obtained as \((R(\theta,t_{i+1})-R(\theta,t_{i}))/(t_{i+1}-t_{i})\). The dashed line corresponds to the average steady velocity \(\bar{v}\). Each signal displays intermittency, with instantaneous velocities 100 times larger than \(\bar{v}\).

These two features support the second scenario, in which the initial reorganization of a region of size \(L_{\rm opt}\) is followed by a cascade of frame events on much larger scales. Indeed, simple thermal activation is characterized by an incoherent back-and-forth motion representing the attempts to overcome the energy barrier. Here, instead, we observe a coherent forward motion on time scales much shorter than those set by the steady velocity. This conclusion is also consistent with the estimation of \(L_{\rm opt}\) given in [5; 10]:
\[L_{\rm opt}\sim L_{C}(H_{\rm dep}/H)^{3/4} \tag{2}\]
with \(L_{C}\) the microscopic Larkin length at which the wall fluctuations become of the order of its thickness. In the materials used in this work, the Larkin length is approximately \(L_{C}\sim 100\) nm. Hence, \(L_{\rm opt}\) is \(\sim 380-400\) nm. This scale is just below the single pixel size of \(400\) nm and is too small to be experimentally accessible. To quantify the spatial correlations observed beyond \(L_{\rm opt}\), we construct clusters of frame events close in space and time via a simple algorithm that depends on two parameters \(\Delta t\) and \(\Delta s\). In practice, we start from an initial frame event (the epicenter of the cluster) and include all frame events within a time window \(\Delta t\) and a distance \(\Delta s\). Section 3 of [22] shows that our analysis is robust to variations of \(\Delta t\) and \(\Delta s\). Fig. 2 shows the clusters obtained using this procedure. Each cluster can be characterized by two quantities, namely the size \(S\) (the colored areas in Fig. 2) and the longitudinal length \(\ell\) (see section 4 of [22]). Both quantities display scale-free statistics (Fig. 3 (a) and (b)), with exponents that are incompatible with the equilibrium exponents used to characterize the barriers of the energy landscape up to the scale \(L_{\rm opt}\). It is thus tempting to interpret these clusters as avalanches at the depinning transition, as suggested by the numerical simulations on directed interfaces in [12]. In those simulations, however, avalanches are very fat in the growth direction (i.e., the direction of propagation of the interface), consistent with quenched Edwards-Wilkinson (qEW) depinning. Here, clusters are instead elongated objects, as shown in Figs. 2 and 3 (c), where \(S\sim\ell^{1+\zeta}\) results in a roughness exponent \(\zeta\sim 0.63\). This exponent excludes the possibility of qEW depinning but is consistent with qKPZ depinning. We corroborate this conclusion with an independent study of the roughness of the whole interface. Following the method proposed in [25], we compute the structure factor \(S(q)\) that, as discussed in Ref. [12], displays a \(q^{-(1+2\zeta)}\) dependence at small values of the wave number \(q\). Fig. 4 shows that the interface's roughness exponent \(\zeta\) is consistent with the one characterizing the elongated shape of the clusters. Our results thus prove that the spatial correlations observed beyond the scale \(L_{\rm opt}\) are governed by qKPZ depinning. The qKPZ universality class reveals the presence of anisotropic disorder in our experiment. This feature was not included in previous numerical simulations.
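A minimal sketch of the space-time clustering of frame events, assuming each event is given as a frame index together with the coordinates of its switched pixels (the actual implementation and the handling of border cases may differ), could look as follows:

```python
import numpy as np

def cluster_frame_events(events, dt, ds):
    """Greedy space-time clustering. `events` is a time-ordered list of
    (frame_index, pixels) pairs, where `pixels` is an (n, 2) array of
    switched-pixel coordinates. An event joins a cluster if it occurs
    within `dt` frames of the cluster's last event and within distance
    `ds` (in pixels) of that event's pixels."""
    clusters = []
    for t, px in events:
        for cl in clusters:
            last_t, last_px = cl[-1]
            dists = np.linalg.norm(px[:, None, :] - last_px[None, :, :], axis=-1)
            if (t - last_t) <= dt and dists.min() <= ds:
                cl.append((t, px))
                break
        else:
            clusters.append([(t, px)])  # this event is the epicenter of a new cluster
    return clusters
```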
## Conclusions
The celebrated creep formula (1) rests on the hypothesis that the key feature determining the wall motion is optimized excitations of size \(L_{\rm opt}\). Our work focuses on the intermittently occurring rapid movements along a magnetic wall and unveils their spatial organization, which extends over scales much larger than \(L_{\rm opt}\). Their size and shape display the same statistics as the avalanches recorded at the depinning magnetic field, but with a much slower evolution. In contrast with previous theoretical and experimental studies [26; 9], our experiment shows that the exponents are compatible with the qKPZ instead of the qEW universality class. The emergence of KPZ dynamics at depinning must be sustained by anisotropy in the material, and its origin calls for further understanding.

Figure 2: Sequences of clusters at different applied fields, which start from the initial bubble (the central black sector at the inner corner of the images) and grow until the radius is about \(100\,\mu\)m. See also the movie in [22].
The scenario emerging from our results should be tested in other examples of elastic disordered systems such as ferroelectric domain walls [27; 28; 29] or crack propagation [30]. Interestingly, a similar scenario was recently reported for a different class of disordered systems, such as amorphous solids or glass-forming liquids. Simulations on elastoplastic models have shown how localized excitations can trigger cascades of faster events [20; 21]. Hence, thermally-facilitated avalanches can be pretty generic in disordered systems. They reveal the complex nature of disordered energy landscapes that cannot be described simply by a sequence of uncorrelated elementary excitations.
The results reported here can also have significant consequences in the field of spintronics. The creep dynamics of a bubble domain is, in fact, at the base of one of the most used methods to determine the interfacial Dzyaloshinskii-Moriya interaction (DMI). This is a chiral interaction responsible for the occurrence of topological spin structures, such as chiral domain walls and skyrmions, considered the most promising information carriers in future spintronics technologies [31]. The determination of the DMI constant is based on the asymmetric expansion of the bubble under an in-plane magnetic field, with the domain wall velocity measured by dividing the displacement between two MOKE snapshots by their time interval. Fig. 1 (b) actually suggests that the velocity is constant only at large times/displacements, and thus that this procedure could be misleading. In addition, theoretical expressions to evaluate the DMI field from the velocity curve are primarily phenomenological, and a more accurate description of the domain wall dynamics, such as the qKPZ dynamics reported here, could greatly improve the fits of the data. We hope these considerations will lead to a more accurate determination of the DMI value and help resolve the contradictions with other popular methods, such as Brillouin light scattering.
_Acknowledgements._ -- V.M.S. acknowledges 80Prime CNRS support for the project CorrQuake. M.B. is supported by research grant BAIE_BIRD2021_01 of the University of Padova.
Figure 3: (a) Cluster size \(S\) and (b) longitudinal length \(\ell\) distributions for different magnetic fields. (c) Cluster size versus longitudinal length. The clusters have been obtained for \(\Delta t=8\) frames and \(\Delta s=2\) pixels. The first two panels are compatible with the qEW and qKPZ universality classes but not with the equilibrium exponents. The value of the roughness exponent in (c) is computed using the power-law scaling \(S\sim\ell^{1+\zeta}\). The measured value is compatible with both \(\zeta_{\text{qKPZ}}=0.63\) and \(\zeta_{\text{equilibrium}}=2/3\), but excludes the qEW universality class \(\zeta_{\text{qEW}}=1.25\). Combining these findings leaves the qKPZ universality class as the sole possible candidate for describing the creep motion in our experiment.
2309.10057 | Hierarchy Builder: Organizing Textual Spans into a Hierarchy to
Facilitate Navigation | Information extraction systems often produce hundreds to thousands of strings
on a specific topic. We present a method that facilitates better consumption of
these strings, in an exploratory setting in which a user wants to both get a
broad overview of what's available, and a chance to dive deeper on some
aspects. The system works by grouping similar items together and arranging the
remaining items into a hierarchical navigable DAG structure. We apply the
method to medical information extraction. | Itay Yair, Hillel Taub-Tabib, Yoav Goldberg | 2023-09-18T18:11:24Z | http://arxiv.org/abs/2309.10057v1 | # Hierarchy Builder:
###### Abstract
Information extraction systems often produce hundreds to thousands of strings on a specific topic. We present a method that facilitates better consumption of these strings, in an exploratory setting in which a user wants to both get a broad overview of what's available, and a chance to dive deeper on some aspects. The system works by grouping similar items together, and arranging the remaining items into a hierarchical navigable DAG structure. We apply the method to medical information extraction.
## 1 Introduction
We are dealing with the question of organizing and displaying a large collection of related textual strings. The need arises, for example, in information extraction or text mining applications that extract strings from text. Consider a system that scans the scientific literature and extracts possible causes for a given medical condition. Such a system may extract thousands of different strings, some of which relate to each other in various ways,1 and some are distinct. Users consume the list in an exploratory mode (Agarwal and Sahu, 2021; White and Roth, 2008), in which they do not have a clear picture of what they are looking for, and would like to get an overview of the different facets in the results, as well as to dig deeper into some of them.
Footnote 1: Figure 1 lists the kinds of relations between strings.
For example, distinct strings extracted as causes for sciatica include "_herniated disc_", "_herniated disk_", "_lumbar disk herniation_", "_posterior intervertebral disc herniation_" and "_endometriosis_", among hundreds of others. The user of this system would like to go over the returned list to learn about possible causes, but going over hundreds to thousands of results is mentally taxing, and we would like to reduce this effort. In the current case, we would certainly like to treat the first two items (_herniated disc_ and _herniated disk_) as equivalent and show them as one unified entry. But we would also like to induce an additional hierarchy. For example, it could be useful to separate all the _herniated disc_ related items (or even all the _disc_ related items) in one branch, and the _endometriosis_ case in another. This will allow the user to more efficiently get an overview of the main represented topics (_disc herniation_ and _endometriosis_) and to navigate the results and focus on the cases that interest them in the context of the query (for example, they may feel they know a lot about disc-related causes, and choose to ignore this branch).
An additional complication is that the hierarchy we are considering is often not a tree: a single item may have two different parents, resulting in a directed acyclic graph (DAG). For example, arguably a condition like _leg pain_ should be indexed both under _leg_ (together with other leg-related items) and under _pain_ (together with pain-related items). The hierarchy structure is contextual, and depends on the data: if there are not many other leg-related items, it may not be beneficial to introduce this category into the hierarchy.
Additionally, note that some items in the hierarchy may not directly correspond to input strings: first, for the "_leg pain_" example above, if the input list does not include stand-alone _leg_ or _pain_ items, we may still introduce them in our hierarchy. We may also introduce additional abstraction, for example we may want to group "_heart disease_", "_ischemia_", "_hypotension_", and "_bleeding_" under "_cardiovascular disease_".
In this work we introduce a system that takes such a flat list of related strings, and arranges them in a navigable DAG structure, allowing users to get a high level overview as well as to navigate from general topics or concepts to more specific content by drilling down through the graph. Ideally, the graph would allow the user to:
(1) get a comprehensive overview of the various facets reflected in the results;
(2) quickly get an overview of the main aspects of the results;
(3) efficiently navigate the results, finding items in the sub-graph in which they expect to find them.
At a high level, the system works by finding lexically equivalent terms, arranging them in a DAG structure reflecting the specificity relation between terms, further merging equivalent nodes based on a neural similarity model, adding additional potential intermediary hierarchy nodes based on taxonomic information and other heuristics, and then pruning the graph back into a smaller sub-DAG that contains all the initial nodes (input strings) but only a subset of the additional hierarchy nodes. Finally, we select the top-k "entry points" to this graph: high-level nodes that span as many of the input nodes as possible. This process is described in Section 3. While the DAG extended with potential hierarchies is very permissive and contains a lot of potentially redundant information, the DAG pruning stage aims to ensure the final graph is as compact and informative as possible.
We focus on causes-for-medical-conditions queries and provide a demo in which a user can select a medical condition, and browse its causes in a compact DAG structure.
To evaluate the resulting DAGs, we perform automatic and manual evaluation. The automatic evaluation is based on measuring various graph metrics. The manual evaluation is performed by domain experts. Our results show that the DAG structure is significantly more informative and effective than a frequency-ranked flat list of results.
## 2 Requirements
As discussed in the introduction, our input is a list of strings that reflect answers to a particular question, extracted from a large text collection (we focus in this paper on the biomedical domain, and more specifically on causes for medical conditions). This list can be the output of an Open-IE system (Fader et al., 2011; Stanovsky et al., 2015; Kolluru et al., 2020), the results of running extractive QA (Rajpurkar et al., 2016) with the same question over many paragraphs, or the result of an extractive query in a system like SPIKE (Shlain et al., 2020; Taub Tabib et al., 2020; Ravfogel et al., 2021). The lists we consider typically contain from hundreds to thousands of unique items. We identified a set of relations that can hold between strings in our inputs, which are summarized in Figure 1. We would like to arrange these items in a hierarchical structure to facilitate exploration of the result list by a user, and allow them to effectively consume the results. Concretely, the user needs to:
_a. not see redundant information._
_b. be able to get a high-level overview of the various answers reflected in the results._
_c. be able to get quick access to the main answers._
_d. be able to dig into a specific phenomenon or concept that is of interest to them._
_e. be able to locate concepts they suspect exist._

Figure 1: Kinds of possible relations between input strings
This suggests a hierarchy that respects the following conditions:
_Paraphrased_ spans should be combined into a single group, and _close-meaning_ spans should be combined into the same group; _Elaboration_ relations should be expressed hierarchically; _Co-mention_ spans should be both descendants of the shared concept; _Taxonomic relations_ should (in some cases) be descendants of the taxonomical parent.
Additionally, we would like each node in the hierarchy to have relatively few children (to reduce the need to scan irrelevant items), yet keep the hierarchy relatively shallow (to save expansion clicks if possible). The hierarchical structure should also be informative: we should be able to guess from a given node which kinds of items to expect to find under it, and which kinds of items _not_ to expect to find under it. This means a single item should be locatable in different ways, in case it can be categorized under different keys (we would sometimes like "_brain tumor_" to be listed under _brain_ and sometimes under _tumors_).2
Footnote 2: Arranging information as graphs to facilitate navigation and exploration is, of course, not a novel concept. A notable example is entailment graphs (Kotlerman et al., 2015; Adler et al., 2012).
## 3 Method
Expanding the initial list. We assume that the strings in the initial list are _maximal_, meaning that each string captures the extracted noun-phrase including all of its possible modifiers. We further expand the list by also considering potential sub-strings of each maximal string, reflecting different granularities. For example, from the string "severe pain in the lower right leg" we would extract "pain", "severe pain", "severe pain in the leg", "severe pain in the lower right leg", among others.3 We then consider the union of the initial set of input strings and the set of additional sub-strings. Different users would be interested in different granularities depending on their information need. We rely on the DAG-pruning stage to properly organize these strings and prune away non-informative ones in the context of the entire set.
Footnote 3: This is done using a rule-based algorithm that operates on the parse tree and extracts all the distinct modification spans derived from the head token.
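A highly simplified sketch of this kind of expansion, assuming spaCy and its small English model are available and keeping only the head together with subsets of its direct modifier subtrees (the actual rule-based algorithm is more involved), could look as follows:

```python
import spacy
from itertools import combinations

nlp = spacy.load("en_core_web_sm")

def candidate_subspans(text):
    """Emit sub-spans of `text` formed by the syntactic head plus any subset
    of its direct modifier subtrees (order-preserving)."""
    doc = nlp(text)
    head = next(tok for tok in doc if tok.dep_ == "ROOT")
    mods = [sorted(child.subtree, key=lambda t: t.i) for child in head.children]
    spans = set()
    for r in range(len(mods) + 1):
        for subset in combinations(mods, r):
            toks = sorted([head] + [t for m in subset for t in m], key=lambda t: t.i)
            spans.add(" ".join(t.text for t in toks))
    return spans
```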
Initial grouping into equivalence sets. The input of this stage is a set of strings (the union of the input set and the extended set), and the output is a list of sets, such that the sets are distinct and their union covers the initial set. For example, after this stage, the items "_herniated disk_", "_herniated disc_", "_disc herniation_", "_herniation of the disc_" will be in the same equivalence set.
The grouping in this stage is inspired by (Gashteovski et al., 2017) and is based on considering each string as a bag of lemmas, discarding stop words, modal words, and quantity words, and considering items as equivalent if their bags are equivalent. The lemma matching is relaxed, and allows, beyond exact string match, also matches with small edit distance and matches based on UMLS (Bodenreider, 2004) and WordNet (Miller, 1992) spelling variants and synonyms.
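A simplified sketch of this grouping, using a stand-in lemmatizer and an illustrative stop-word list (the full system additionally uses relaxed edit-distance matching and UMLS/WordNet synonyms), could look as follows:

```python
from collections import defaultdict

STOP_WORDS = {"the", "a", "an", "of", "in", "with", "due", "to"}  # illustrative only

def lemma_bag(span, lemmatize=str.lower):
    # `lemmatize` is a stand-in; the real system uses a proper lemmatizer.
    return frozenset(lemmatize(tok) for tok in span.split()
                     if tok.lower() not in STOP_WORDS)

def group_equivalent(spans):
    # Spans whose bags of lemmas coincide end up in the same equivalence set.
    groups = defaultdict(list)
    for span in spans:
        groups[lemma_bag(span)].append(span)
    return list(groups.values())
```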
Initial DAG construction. We now take the list of sets from the previous stage, and arrange them into a DAG, where each set is a DAG node. We add a directed edge between two nodes A and B if B _is more specific than_ A, and no other node C is more specific than A and less specific than B.
The _specificity relation_ at this stage is determined based on the bags of lemmas that were used to create the equivalence sets: a set B is more specific than a set A if A and B are not equivalent and the bag of B contains the bag of A.
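A sketch of this construction in terms of the lemma bags, adding an edge only when no intermediate node fits between the two bags, could look as follows:

```python
def build_specificity_edges(bags):
    """`bags` maps a node id to its frozenset of lemmas. Node B is more
    specific than node A if bag(A) is a proper subset of bag(B); the edge
    A -> B is kept only when no node C fits strictly between them."""
    edges = set()
    for a, bag_a in bags.items():
        for b, bag_b in bags.items():
            if a != b and bag_a < bag_b:                      # proper subset
                has_intermediate = any(
                    bag_a < bags[c] < bag_b for c in bags if c not in (a, b)
                )
                if not has_intermediate:
                    edges.add((a, b))
    return edges
```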
Adding heads as nodes. For all spans, we take their head-word (either a single adjective or a single noun) and add these head-words as roots of the DAG. We then add an additional root node above them, so that the DAG has a single root. This handles the co-mention relation.
Merging semantically equivalent graph nodes. We now take the DAG and merge equivalent nodes, as determined by a trained statistical model (we use SAP-BERT (Liu et al., 2020))4. For example, this stage will merge "_administration of streptozotocin_" and "_streptomycin injection_". When merging two graph nodes, we handle the corresponding edges in the expected way (the children of the two individual nodes become children of the merged node, and the parents of the individual nodes become the parents of the merged node).5
For a pair of graph nodes A and B, we encode each string in A and in B into a vector using SAP-BERT, and represent each node as the average vector of the strings within it. We go over the nodes in the DAG in DFS order starting from the root nodes, and for each node consider all of its children for potential merging among them. We merge two nodes if the cosine similarity score between their vectors passes the threshold \(t_{1}=0.9\) and their merging does not create a cycle. We then do another pass and merge nodes with their direct child nodes if their similarity score is above \(t_{2}=0.95\), again avoiding the creation of cycles.
After this stage, we attempt to further merge nodes based on the UMLS ontology (Bodenreider, 2004). Two nodes A and B are considered UMLS-equivalent, if there is at least one string in node A that is listed in UMLS as a synonym of at least one string in node B. Such cases are merged.6
Footnote 6: If this merging creates a cycle, this cycle is removed.
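A sketch of the sibling-merging step, assuming an `embed` function that maps a string to a vector (standing in for SAP-BERT) and omitting the cycle checks and the UMLS-based pass, could look as follows:

```python
import numpy as np

def node_vector(strings, embed):
    # A node is represented by the mean of its strings' embedding vectors.
    return np.mean([embed(s) for s in strings], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def merge_similar_siblings(children, embed, t1=0.9):
    """Greedily merge sibling nodes (each a list of strings) whose averaged
    embeddings have cosine similarity of at least t1."""
    merged = []
    for strings in children:
        vec = node_vector(strings, embed)
        for group in merged:
            if cosine(vec, group["vec"]) >= t1:
                group["strings"] += list(strings)
                group["vec"] = node_vector(group["strings"], embed)
                break
        else:
            merged.append({"strings": list(strings), "vec": vec})
    return [g["strings"] for g in merged]
```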
Adding taxonomic nodes. So far the relationships between nodes in the DAG were based solely on lexical relations. In order to enrich the graph, we introduce additional nodes based on taxonomic relations, which do not rely on lexical information. For instance, "heart disease", "ischemia", "hypotension", and "bleeding" are placed under the broader term "cardiovascular disease". We add many nodes here, relying on many of them to be pruned in the next stage.
We map each node to the UMLS hierarchy, and look for UMLS concepts that govern at least two DAG nodes ("descendant DAG nodes"). These are potential abstractions over graph nodes. Each such UMLS concept that is already part of the DAG is connected by an edge to each of its descendant DAG nodes to which a path does not already exist, provided that adding the edge does not create a cycle. UMLS concepts that are not already in the DAG are added as new nodes governing the descendant graph nodes. UMLS concepts have multiple synonyms. When adding them as nodes, we choose the synonym with the highest SAP-BERT cosine similarity to the descendant DAG nodes this concept governs.
DAG Pruning. The DAG at this stage is quite large and messy, containing both nodes corresponding to input strings and additional hierarchy nodes based on linguistically motivated substrings of the input strings and on taxonomic relations. We prune it to create a smaller graph that is more amenable to navigation. The smaller DAG should contain all the nodes corresponding to input strings, and an effective set of additional hierarchy nodes. Some of the hierarchy nodes are more important than others, as they provide a better differential diagnosis among the answers. Our goal is to highlight these and filter out the less important ones. Operatively, we would like each node in the graph to have the minimal number of children such that all the input strings that were reachable from it remain reachable from it. This focuses on hierarchy nodes that are shared among many input concepts.
We first prune graph edges according to this criterion. This process results in nodes that have a single child. Such nodes are removed, and their children are attached to their parent.7
Footnote 7: Selecting the smallest group of concepts at each hierarchy level is important for navigation: users quickly become overwhelmed by too many nodes, making it difficult to orient themselves within the DAG.
Selecting the minimal number of children according to this criterion is NP-hard. As an alternative, we use an approximation algorithm, the greedy set cover algorithm (Johnson, 1973), which works by selecting at each step the node covering the highest number of not-yet-covered answers, marking them as covered, and proceeding. This helps in choosing the most important concepts, i.e., those with the highest differential-diagnosis value.
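A sketch of the greedy set-cover selection of a parent's children, assuming each candidate child is mapped to the set of input strings reachable from it, could look as follows:

```python
def greedy_child_cover(child_to_leaves, target_leaves):
    """Greedy set cover: repeatedly keep the child covering the largest
    number of still-uncovered input strings, until all of `target_leaves`
    reachable through the children are covered."""
    uncovered = set(target_leaves)
    kept = []
    while uncovered:
        best = max(child_to_leaves, key=lambda c: len(child_to_leaves[c] & uncovered))
        gained = child_to_leaves[best] & uncovered
        if not gained:            # remaining leaves are unreachable via any child
            break
        kept.append(best)
        uncovered -= gained
    return kept
```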
Entry-point selection. Finally, we seek \(k\) nodes that will serve as the "entry nodes" to the graph. These should be \(k\) nodes that fulfill the following criteria:
a. allow reaching as many input strings as possible.
b. the semantic affinity between a node and the input strings reachable from it is high.
The users will initially see these nodes as well as an additional "other" node, from which all the other input strings can be reached. The entry node labels provide an overview of the \(k\) main concepts in the list, and allow the user both to get an overview of the results and to drill down into parts that interest them. Criterion (b) is important to ensure that the user not only can reach an input string by navigating from an entry point, but also that they will _expect_ to find this input string there.
This selection is done by a heuristic algorithm which we adapted from the Greedy+ DAG-node-selection algorithm in (Zhu et al., 2020). It first assigns each node C a score that combines the
number of the input nodes reachable from it, and the semantic affinity (based on SAP-BERT cosine similarity) of C to each of these reachable nodes. It then iteratively adds the highest scoring candidate C to the set of entry points, and adjusts the scores of each remaining node N by subtracting from the score of N the affinity scores between C and the input nodes reachable from N. We do this until we reach \(k\) entry points.
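A sketch of this adapted selection procedure, assuming `reachable[c]` gives the input nodes reachable from candidate c and `affinity(c, x)` their semantic affinity (e.g. a cosine similarity), could look as follows:

```python
def select_entry_points(reachable, affinity, k):
    """Greedily pick k entry points; after each pick, the scores of the
    remaining candidates are discounted by the affinities between the
    picked node and the input nodes they can reach."""
    credit = {c: {x: affinity(c, x) for x in reachable[c]} for c in reachable}
    chosen = []
    for _ in range(k):
        scores = {c: sum(credit[c].values()) for c in credit if c not in chosen}
        if not scores:
            break
        best = max(scores, key=scores.get)
        chosen.append(best)
        for c in credit:
            if c == best:
                continue
            for x in credit[c]:
                credit[c][x] = max(0.0, credit[c][x] - affinity(best, x))
    return chosen
```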
Visualization. We are now finally ready to show the DAG to the user. For nodes that correspond to multiple (semantically equivalent but lexically different) input strings, we choose one of them as the representative for display purposes.
## 4 Input-output Example
We demonstrate with a minimal example. Given the set of spans in Figure (2a), representing causes of chest pain, Hierarchy Builder expands the set by adding the spans "rib fracture" (a substring of two existing spans) and "respiratory diseases" (a new taxonomic node). Based on the expanded set of spans in Figure (2b), Hierarchy Builder identifies synonymous spans and merges them into concepts. In Figure (2c) we see these concepts, where each concept includes aliases in parentheses where applicable. Hierarchy Builder then places the entries in a DAG based on a hierarchy of specificity, as depicted in Figure (2d).
## 5 Experiments and Evaluation
Scope. We focus on the medical domain and evaluate our system on etiologies (causes) of two medical symptoms ("_jaundice_" and "_chest pain_"). These symptoms were chosen because they are common and each has many different etiologies mentioned in the literature.
The input lists for the system were the result of running a set of 33 syntactic patterns over PubMed abstracts, looking for patterns such as "COND due to ___" or "patients with COND after ___" where COND is either _jaundice_ or _chest pain_. The results were extracted using the SPIKE system (Shlain et al., 2020; Taub Tabib et al., 2020) and each matched head-word was expanded to the entire syntactic subgraph below it. This resulted in 3389 overall extracted strings and 2623 unique strings for _jaundice_ and 2464 overall and 2037 unique for _chest pain_. After merging strings into synonym sets as described in SS3, we remain with 2227 concepts for _jaundice_ and 1783 for _chest pain_.
For each of the symptoms there are established and widely accepted lists of common etiologies, which we rely on in our evaluation.8 We take 38 established etiologies for jaundice and 33 for chest pain, and check their accessibility in the flat list of extracted strings, as well as in the hierarchical DAG we create.

Figure 2: Input-Output Example. See Section 4.
Coverage and Entry-point Selection. For _jaundice_, our input list contains 28 out of the 38 known etiologies, and for _chest pain_ 26/33. With \(k=50\), 25 of 28 concepts are reachable from an entry point for _jaundice_ and 21/26 for _chest pain_. With \(k=100\) the numbers are 28/28 (_jaundice_) and 24/26 (_chest pain_).
Assessing the contribution of the different components. The different components in our algorithm contribute by adding nodes, combining nodes, adding edges, and removing edges. Table 1 describes the kind of contribution of each component and quantifies its impact, for each of the two tested conditions.
We now look at the case where we select 50 entry-point nodes, and focus on the effect on the top-level nodes. We see that for Chest-pain, a total of 20 of the 50 selected entry-points were not in the original input, but were added by the various components (12 from expanding the initial list, 5 from adding head words, and 3 from taxonomic words). Similarly, for Jaundice, these components added a total of 29 root nodes (out of the selected 50) that were not in the original input (17 from expanding initial list, 5 from head words and 6 from taxonomic nodes).
The "Expanding the initial list" component plays a significant role in shaping the DAG structure. In Chest Pain, 161 out of 224 internal nodes originate from the expanded list (146 from Expanding the initial list and 15 from co-mention). In Jaundice, 347 out of 423 internal nodes stem from the expanded list (333 from Expanding the initial list and 14 from co-mention). This highlights the substantial impact of this component on the DAG's structure.
The number of merges performed indicates the usefulness of the employed merging methods.
Furthermore, the set cover pruning algorithm effectively reduces the number of edges in the DAG.
Qualitative Measures. For _jaundice_, our final DAG contains 2620 nodes overall and has a maximum depth of 11. With \(k=50\), the average number of leaves per entry point is 22.68 (min 0, max 600), and the average depth is 2.86 (min 0, max 9). Most importantly, each internal node has an average of 9.12 children (min 1, max 56, variance 34.91), making them highly browsable.
For _chest pain_, the trends are overall similar: our final DAG contains 2124 nodes overall and has a maximum depth of 9. With \(k=50\) The average number of leaves per entry point is 14.14 (min 1, max 175), and the average depth is 2.8 (min 0, max 7). Each internal node has an average of 4.94 children (min 1, max 53, variance 27.53).
**Human evaluation.** Our main evaluation centers around the effort for an expert9 to locate the known etiologies in the resulting DAG, compared to a flat list sorted by frequency. For each of the etiologies, we ask how many entries need to be considered before finding the etiology. For the flat list, this means how many items are read when scanning the list in order before reaching the etiology. For the DAG, we count the number of clicks (expansions of a node) starting from \(k=50\) entry points (a quantity that aligns with a reasonable threshold of entry nodes perceivable by a user), while also summing the number of items before the expanded node in each level. Note that since we look for common etiologies rather than rare ones, we would assume a frequency-ranked list based on literature mentions would compare favorably in these measures. Nonetheless, we see a clear benefit of the DAG. We compare two conditions: an ideal condition where the user knows exactly which nodes to expand (blue in the graph), and a realistic scenario, in which the user searches for the etiologies by expanding nodes (gray in the graph).
Footnote 9: We use two experts, each evaluating a different condition. The expert evaluating _jaundice_ is an expert MD specializing in children’s medicine. The expert evaluating _chest pain_ is a PhD in biology with 38 years of biomedical research.
We also perform another evaluation in which we ask the experts to rank each path to an etiology based on its quality, given the question "to what extent is this a logical path to follow in order to find the etiology", on a scale of 1 (very bad) to 5 (very good).
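The snippet below gives one plausible formalization of the two effort measures, assuming each DAG node stores its children in display order; the data structure and the exact click-counting convention are our assumptions for illustration, not taken from the hierarchybuilder code.

```python
# Effort in the flat list: items read while scanning in rank order.
def flat_list_effort(ranked_items, target):
    return ranked_items.index(target) + 1

# Effort in the DAG: items scanned before (and including) each expanded node,
# plus one click per expanded node on the path, excluding the target leaf itself.
def dag_effort(children, path):
    effort, level = 0, children["ROOT"]          # ROOT's children are the k entry points
    for node in path:
        effort += level.index(node) + 1          # items scanned at this level
        level = children.get(node, [])
    return effort + (len(path) - 1)              # clicks to expand intermediate nodes

children = {"ROOT": ["infection", "obstruction"], "obstruction": ["gallstones"]}
print(dag_effort(children, ["obstruction", "gallstones"]))   # -> 4
```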
**Results.** Figure 3 shows the main results for the two conditions. Despite the frequency-based ranking, many of the etiologies appear relatively low in the flat list, making them very hard to come by in this condition (orange). On the other hand, when considering the DAG, the vast majority of items are significantly easier to locate, requiring scanning significantly fewer items. Only 3 items for jaundice and 2 for chest pain were significantly harder to locate in the DAG than in the flat list. In terms of the quality of the DAG paths associated with each
etiology, the jaundice annotator ranked 23 out of 25 as 5, 1 as a 2, and 1 as a 1. For chest pain, the numbers are 19 out of 21 ranked as 5, 1 as 2, and 1 as 1. Overall, our hierarchy building algorithm works well for the vast majority of the cases, and offers significant benefits over the flat list.
## 6 Conclusions
We presented an automatic method to organize large lists of extracted terms (here, of medical etiologies) into a navigable, DAG-based hierarchy, where the initial layer provides a good overview of the different facets in the data, and each internal node has relatively few items. The code together with a video and an online demonstration are available at [https://github.com/itayair/hierarchybuilder](https://github.com/itayair/hierarchybuilder).
## 7 Limitations
While our method is aimed at organizing any flat list of extractions, we evaluated it here only on the medical domain, only on a single kind of information need (etiologies), and only for common conditions (jaundice and chest pain). More extensive evaluation over additional conditions is needed in order to establish general-purpose utility. However, we do find the system useful for navigating automatically-extracted etiology lists, and encourage readers to experiment with the system also on other conditions, to assess its utility.
There are also some candidates for improving the method in the biomedical domain which are not currently handled: (a) abstraction over sub-strings, e.g., for the spans "_administration of penicillin_", "_administration of aspirin_", "_administration of augmentin_", it could be useful to introduce a shared parent level of "_administration of antibiotic/drug_". Our system can currently identify _penicillin_, _augmentin_, and _aspirin_ as an _antibiotic/drug_, but cannot handle abstraction over sub-strings. (b) Linking to UMLS currently relies on exact lexical matches, and can be improved.
## 8 Ethical Considerations
We present a system for organizing large result lists into a browsable hierarchy. In general, consuming a hierarchy is more effective than consuming a very
| **Component** | **Contribution** | **Chest-pain** | **Jaundice** |
|---|---|---|---|
| Expanding the initial list (for full DAG) | Add nodes | 504 | 893 |
| Expanding the initial list (for DAG with 50 entry nodes) | Add nodes | 158 (12 top level) | 350 (17 top level) |
| Adding heads as nodes (Full DAG) | Add nodes | 457 | 379 |
| Adding heads as nodes (50 entry nodes) | Add nodes | 20 (5 top level) | 19 (6 top level) |
| Merging semantically equivalent nodes | Merge nodes | 93 (out of 2556) | 266 (out of 3330) |
| UMLS merging of synonym nodes | Merge nodes | 62 (out of 2504) | 99 (out of 3167) |
| UMLS taxonomic nodes (full DAG) | Add nodes | 113 | 169 |
| UMLS taxonomic nodes (50 entry nodes) | Add nodes | 3 | 6 |
| UMLS taxonomic edges | Add edges | 140 (5 top level) | 153 (3 top level) |
| DAG Pruning | Remove edges | 2363 | 3209 |

Table 1: Quantifying the contribution of the different components.
Figure 3: Effort to reach a set of common etiology items using our created DAG vs. a frequency-ranked list. The X axis corresponds to different etiologies sorted by their frequency in the input list, and the Y axis corresponds to the effort. Orange: frequency-ranked flat list. Blue: DAG + oracle locating of items. Gray: DAG + human locating of items.
long list. However, hierarchies can hide items, especially if the items are misplaced in an unexpected branch--which our system sometimes does (albeit rarely). In situations where consuming the entire information is crucial and the cost of missing an item is prohibitive or dangerous, a flat list would be the safer choice.
**Acknowledgements.** This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT).
|
2309.09506 | LayoutNUWA: Revealing the Hidden Layout Expertise of Large Language
Models | Graphic layout generation, a growing research field, plays a significant role
in user engagement and information perception. Existing methods primarily treat
layout generation as a numerical optimization task, focusing on quantitative
aspects while overlooking the semantic information of layout, such as the
relationship between each layout element. In this paper, we propose LayoutNUWA,
the first model that treats layout generation as a code generation task to
enhance semantic information and harness the hidden layout expertise of large
language models~(LLMs). More concretely, we develop a Code Instruct Tuning
(CIT) approach comprising three interconnected modules: 1) the Code
Initialization (CI) module quantifies the numerical conditions and initializes
them as HTML code with strategically placed masks; 2) the Code Completion (CC)
module employs the formatting knowledge of LLMs to fill in the masked portions
within the HTML code; 3) the Code Rendering (CR) module transforms the
completed code into the final layout output, ensuring a highly interpretable
and transparent layout generation procedure that directly maps code to a
visualized layout. We attain significant state-of-the-art performance (even
over 50\% improvements) on multiple datasets, showcasing the strong
capabilities of LayoutNUWA. Our code is available at
https://github.com/ProjectNUWA/LayoutNUWA. | Zecheng Tang, Chenfei Wu, Juntao Li, Nan Duan | 2023-09-18T06:35:10Z | http://arxiv.org/abs/2309.09506v2 | # LayoutNUWA: Revealing the Hidden Layout Expertise of Large Language Models
###### Abstract
Graphic layout generation, a growing research field, plays a significant role in user engagement and information perception. Existing methods primarily treat layout generation as a numerical optimization task, focusing on quantitative aspects while overlooking the semantic information of layout, such as the relationship between each layout element. In this paper, we propose LayoutNUWA, the first model that treats layout generation as a code generation task to enhance semantic information and harnesses the hidden layout expertise of large language models (LLMs). More concretely, we develop a Code Instruct Tuning (CIT) approach comprising three interconnected modules: 1) the Code Initialization (CI) module quantifies the numerical conditions and initializes them as HTML code with strategically placed masks; 2) the Code Completion (CC) module employs the formatting knowledge of LLMs to fill in the masked portions within the HTML code; 3) the Code Rendering (CR) module transforms the completed code into the final layout output, ensuring a highly interpretable and transparent layout generation procedure that directly maps code to a visualized layout. We attain significant state-of-the-art performance (even over 50% improvements) on multiple datasets, showcasing the strong capabilities of LayoutNUWA. Our code is available at [https://github.com/ProjectNUWA/LayoutNUWA](https://github.com/ProjectNUWA/LayoutNUWA).
Figure 1: Overview of LayoutNUWA, in which we view layout generation as a code generation task to enhance the semantic information in layouts as well as naturally harness the hidden layout expertise of large language models. In detail, we propose a Code Instruct Tuning (CIT) approach that consists of three modules: 1) the Code Initialization (CI) module quantifies the numerical conditions and initializes them as an HTML code with masks; 2) the Code Completion (CC) module utilizes the knowledge of large language models to complete the masked portions within the HTML code; 3) the Code Rendering (CR) module directly renders the completed code into the final graphic layout.
## 1 Introduction
Graphic layout, which refers to the organization and positioning of design elements, significantly influences the way users engage with and perceive the presented information (Lee et al., 2020). As a growing research field, layout generation (Li et al., 2019; Yang et al., 2020) aims to create diverse and realistic layouts that streamline the design process and cater to various applications, such as user interfaces (Deka et al., 2017; Jiang et al., 2022), indoor scenes (Di and Yu, 2021; Feng et al., 2023), document layouts (Zheng et al., 2019; Yamaguchi, 2021), presentation slides (Fu et al., 2022), etc.
Current approaches (Jyothi et al., 2019; Li et al., 2019; Arroyo et al., 2021; Zhang et al., 2023a) regard each element in the layout as a numerical tuple \((c,x,y,w,h)\), in which \(c\) indicates the element category, \(x\) and \(y\) represent coordinates, \(w\) and \(h\) correspond to width and height. For example, autoregressive-based methods (Yang et al., 2020; Jiang et al., 2022) view the tuple as a sequence and predict their values sequentially, while diffusion-based methods (Chai et al., 2023; Inoue et al., 2023) consider the tuple as a whole and predict their values through a denoising approach. Despite adopting different generative models, all of these methods fundamentally consider layout generation as a numerical tuple optimization task. However, representing layouts as numerical tuples has its limitations, as it primarily focuses on capturing the quantitative aspects of the layout, such as positions and sizes, while lacking semantic information, e.g., the attribute of each numerical value, which may limit the model's ability to capture more complex and rich layout information.
An insightful question emerges from the limitations of existing methods in layout generation: can we integrate semantic information into the layout generation process to enrich the overall representation and enhance the quality of the generated layouts? Addressing this question brings forth two major benefits: firstly, it bolsters the understanding of relationships among various layout elements, and secondly, it enables us to tap into the semantic capabilities of LLMs (Tang et al., 2023), resulting in more intricate and contextually relevant layouts for a wide range of applications (Jiang et al., 2022). Considering the inherent logical nature of layouts, which involve dependency relationships among layout elements, and the fact that each graphic layout can be represented with a fixed structure sequence, code languages emerge as a promising alternative. Code languages can encompass numerical and semantic information while possessing a strong logical foundation (Chen et al., 2022), which can thus bridge the gap between existing methods and the desired enriched representation.
Based on the above observations, we propose LayoutNUWA, a groundbreaking model that revolutionizes the layout generation task by treating it as a code generation task. Our innovative approach is designed to not only enhance the semantic information within layouts but also seamlessly leverage the expertise of LLMs in the layout generation process. To achieve this, we design a Code Instruct Tuning (CIT) approach comprising three interconnected modules: 1) firstly, the Code Initialization (CI) module quantifies the numerical conditions and initializes them as HTML code with strategically placed masks, paving the way for more meaningful and coherent layouts; 2) secondly, the Code Completion (CC) module employs the formatting knowledge of LLMs to fill in the masked portions within the HTML code, thereby harnessing the power of LLMs to improve the accuracy and consistency of the generated layouts; 3) lastly, the Code Rendering (CR) module transforms the completed code into the final layout output, ensuring a highly interpretable and transparent layout generation procedure that directly maps code to a visualized layout.
Experiments across a variety of conditional layout generation tasks on three datasets, i.e., Rico (Deka et al., 2017), PubLayNet (Zhong et al., 2019) and Magazine (Zheng et al., 2019), highlight the superiority of our method, in which LayoutNUWA can significantly outperform all the baselines and shows comparable results with the task-specific models. Furthermore, LayoutNUWA can achieve at least a 50% improvement in performance compared to the best baseline on the low-resource datasets, e.g., the Magazine dataset. In a nutshell, our contributions can be outlined as follows:
* We introduce LayoutNUWA, the first model that treats the layout generation task as a code generation task, effectively harnessing the hidden layout expertise of LLMs.
* We propose Code Instruct Tuning, which empowers the model to adhere to instructions and enriches the semantic information of layout, resulting in precise and standardized code.
* We attain significant state-of-the-art performance on multiple datasets, showcasing the robust capabilities of LayoutNUWA.
## 2 Related Work
### Layout Generation
Automatic layout generation, an important task for automatic graphical design for various scenarios such as document layouts (Zheng et al., 2019; Zhong et al., 2019; Yamaguchi, 2021; Fu et al., 2022), posters (Yang et al., 2016; Guo et al., 2021; Li et al., 2023) and user interfaces (Deka et al., 2017), has recently been extensively researched. Early approaches for layout generation involve embedding design rules into manually-defined energy functions (O'Donovan et al., 2014; O'Donovan et al., 2015), while other methods have explored generative models such as GANs and VAEs for generating numerical graphic and scene layouts, including LayoutGAN (Li et al., 2019), LayoutVAE (Jyothi et al., 2019), LayoutGAN++ (Kikuchi et al., 2021), NDN (Lee et al., 2020) and READ (Patil et al., 2020). Apart from them, transformer-based approaches utilize self-attention mechanisms to learn numerical contextual relationships between elements and achieve layout completion based on partial layout inputs (Yang et al., 2020; Kong et al., 2022; Feng et al., 2023). Recently, with the prevalence of diffusion models, several works also adopted diffusion models to tackle a broader range of conditional layout generation (Chai et al., 2023; Inoue et al., 2023; Zhang et al., 2023; Hui et al., 2023; Cheng et al., 2023). However, existing methods primarily treat layout generation as a numerical optimization task, focusing on quantitative aspects while overlooking the semantic information of layout, such as the relationship between each layout element. Different from previous works, we convert the layout generation task into the code generation task to directly generate the layout in code language and thus utilize the rich knowledge from LLMs, which can significantly improve the FID by 50% in the Magazine dataset in § 4.2.
### Instruction Tuning
Instruction tuning represents the process of fine-tuning LLMs on the instruction dataset in a supervised fashion, which narrows the gap between the next-word prediction manner of LLMs and the users' objective of having LLMs adhere to human instructions (Zhang et al., 2023c). Early attempts on instruction tuning involve multi-task training with manually-written descriptions about different tasks (Mishra et al., 2021; Wei et al., 2021; Sanh et al., 2021; Xu et al., 2022; Muenninghoff et al., 2022; Iyer et al., 2022) or automatically generated instructions (Wang et al., 2022; Gu et al., 2022; Zhang et al., 2023b; Honovich et al., 2022a;b). Apart from controlling the LLMs through input instruction, Nye et al. (2021) show that LLMs can handle more complex tasks by generating the intermediate steps, and Wei et al. (2022) propose the chain-of-thought technique by enriching the instruction with intermediate reasoning step descriptions, which endows LLMs with better performance (Wang et al., 2022; Zelikman et al., 2022; Wu et al., 2023; Xu et al., 2023). However, the instruction tuning methods mentioned above are primarily intended for text generation tasks and not ideal for layout generation tasks, which involve numerical optimization. Thus, we propose a code instruction tuning method that is specially designed for the layout generation task. Experiments in § 5.1 indicate that the performance significantly drops if the code instruction tuning is not adopted.
## 3 Methodology
### Problem Formulation
The layout generation task aims to generate a well-organized layout \(\mathcal{S}=\{s_{i}\}_{i=1}^{N}\), with \(N\) representing the number of elements in the layout. Each element, \(s_{i}=(c_{i},x_{i},y_{i},w_{i},h_{i})\), consists of the following components: \(c_{i}\) is the category, \(x_{i},y_{i}\) indicate the center location, and \(w_{i},h_{i}\) represent the width and height, respectively. In this study, we focus on the conditional layout generation task, wherein partial components in \(s_{i}\) are masked with \(M\), and the complete layout \(S\) should be predicted by model \(f_{\theta}\) conditioned on the remaining components \(S_{\backslash M}\):
\[\mathcal{S}=f_{\theta}(\mathcal{S}_{\backslash M}) \tag{1}\]
Previous works (Jyothi et al., 2019; Yang et al., 2020; Inoue et al., 2023) regard each element \(s_{i}\) as a sequence of numerical values, e.g., (0, 10, 20, 25, 30), and train a model to directly generate these values. However, this approach overlooks the semantic information of the components, thus limiting the model's understanding of the layout semantics. Based on this observation, we propose
a new problem definition, where we convert the input \(S_{\backslash M}\) and output \(S\) into a code language and view the layout generation task as a code generation task:
\[\mathrm{CODE}(\mathcal{S})=f_{\theta}(\mathrm{CODE}(\mathcal{S}_{\backslash M})) \tag{2}\]
Eq. 2 has the following 3 advantages compared with Eq. 1:
* **Semantic Insights**: By converting the numerical values into code language, the model can better capture the semantic relationships between different components of the layout.
* **LLM Utilization**: By using code language, the model can further leverage the knowledge of Large Language Models (LLMs) and thus enhance the quality of the generated layouts.
* **Model Scalability**: The code language has a stronger expressive capability compared to numerical values, which allows the addition of more attributes for layout elements.
### Code Instruct Tuning
As shown in Fig. 1, we propose Code Instruct Tuning (CIT) with three modules: (1) _Code Initialization_ module converts layout into masked code language with dynamic templates; (2) _Code Completion_ module inputs the masked code to LLMs to generate complete code; (3) _Code Rendering_ module directly renders code to the final graphic layout. We illustrate these modules below.
#### 3.2.1 Code Initialization
**Element Quantization.** We quantize the numerical values of the \(i\)-th element position \(\{x_{i},y_{i}\}\) and size \(\{w_{i},h_{i}\}\) in the layout with the Adaptive Quantization method (Inoue et al., 2023), which applies the \(k\)-Means algorithm (MacQueen et al., 1967) to cluster the position and size information of each element, addressing the highly imbalanced distribution of these values, e.g., elements may overlap or cluster together. Different from previous works (Chai et al., 2023; Zhang et al., 2023a; Inoue et al., 2023), we use absolute positions to represent the coordinates rather than relative positions. This aligns with code language and allows direct rendering of layouts without necessitating coordinate conversion, thereby preventing potential information loss. We maintain precision up to one decimal place and directly convert the clustered results into strings.
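A minimal sketch of this step is shown below, assuming scikit-learn is available; the default number of clusters and the treatment of all values in one pool are illustrative choices, not the settings of the Adaptive Quantization in Inoue et al. (2023).

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_values(values, n_clusters=128):
    """Snap each continuous position/size value to its nearest k-Means centre,
    keeping one decimal place as described in the text."""
    x = np.asarray(values, dtype=float).reshape(-1, 1)
    k = min(n_clusters, len(np.unique(x)))        # cannot have more clusters than distinct values
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(x)
    centres = km.cluster_centers_.ravel()
    return [round(float(centres[label]), 1) for label in km.predict(x)]
```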
Figure 2: The training process of LayoutNUWA, which converts layout generation task to code generation task and utilizes a code instruct tuning to leverage LLM’s capability for layout generation.
**Template Construction.** The overview of template construction is shown in Fig. 2. We construct the templates based on the most common web page layout code, HTML, which contains a wealth of information and is easily accessed by LLMs during the pre-training process (Touvron et al., 2023; Roziere et al., 2023). Specifically, in HTML code, each element is described with a tag that provides information about the content or the element structure. Since the elements in the layout are regular rectangles, we choose the `<rect>` tag as the content tag to describe each element:
```
<rect data-category={c_i} x={x_i} y={y_i} width={w_i} height={h_i}>
```
where \(c_{i}\) is the element category in textual format and \(\{x_{i},y_{i},w_{i},h_{i}\}\) are the quantified position and size of the \(i\)-th element. Then, to combine all the elements into a unified structure, we used an opening tag and a closing tag to define the boundaries of each layout, which can be written as:
```
<html><body><svg width={W} height={H}> ... </svg></body></html>
```
where \(W\) and \(H\) are the background width and height of the layout.
In order to facilitate better learning of layout in various domains and tasks and leverage the instruction-following capabilities of LLMs, we design the following prompts:
```
I want to generate layout in {Domain} style. Please generate the layout according to the {Task Condition} I provide:
```
where {Domain} and {Task Condition} vary according to different domains and tasks. For instance, for the RICO dataset, we set Domain as "mobile UI", and for the layout completion task, we set Task Condition as "remaining values". Afterwards, we prepend the task instruction before the layout code.
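Putting the pieces of the Code Initialization module together, the sketch below serializes a quantized layout into the instruction-prefixed HTML template. The helper names, the mask token string, and the exact surface form of the attributes are assumptions made for illustration, not the released implementation.

```python
MASK = "<M>"   # stand-in for the LLM's mask token

def element_to_rect(c, x, y, w, h):
    # Follows the <rect> content tag described above; quoting and spacing are assumptions.
    return f'<rect data-category={c} x={x} y={y} width={w} height={h}>'

def layout_to_code(elements, W, H, domain="mobile UI", condition="remaining values"):
    instruction = (f"I want to generate layout in {domain} style. "
                   f"Please generate the layout according to the {condition} I provide:\n")
    rects = "".join(element_to_rect(*e) for e in elements)
    return instruction + f"<html><body><svg width={W} height={H}>{rects}</svg></body></html>"

# e.g. a C -> S+P input, where size and position are masked:
print(layout_to_code([("text", MASK, MASK, MASK, MASK)], W=90, H=160))
```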
#### 3.2.2 Code Completion
To construct the conditional input of the layout generation task, we utilize the mask tokens of LLMs to represent the masked values \(M\) and let the model predict the masked values within the HTML code. Different from previous works (Chai et al., 2023; Zhang et al., 2023; Inoue et al., 2023) that applied a customized numerical vocabulary, we employ the LLM's token vocabulary directly. By doing so, we can leverage the knowledge of the numerical tokens inherent in the LLMs. Almost all LLMs generate in an auto-regressive manner, which poses a significant limitation for the layout generation task: the model should predict the same layout under different element orders, even though a layout does not have a naturally defined order (Yang et al., 2020). We therefore design a self-consistency strategy that randomly permutes the order of the input elements in the layout within a mini-batch. Meanwhile, in order to adapt LLMs to different conditional layout generation tasks, we have performed multi-task modeling on the same layout, utilizing various conditions and implementing a joint loss for these tasks. Given the permutation times \(K\) and task numbers \(T\), the joint loss for each layout \(\mathcal{S}\) can be written as:
\[L(\mathcal{S}\mid\theta)=\sum_{t=1}^{T}\sum_{j=1}^{N}\sum_{k=1}^{K}L(s_{j}^{(k) }\backslash M_{j}^{(t)}\mid\theta), \tag{3}\]
where \(\theta\) denotes the model parameters and \(s_{j}\) denotes the \(j\)-th element in the layout \(\mathcal{S}\).
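A compact sketch of how the self-consistency permutation and the multi-task conditions could be combined when building a mini-batch is given below; it reuses the hypothetical `layout_to_code` serializer and `MASK` token from the previous snippet, and the task set and masking rules are illustrative (the completion task's random masking of up to 80% of the values is omitted).

```python
import random

MASKING_RULES = {
    "C -> S+P": lambda e: (e[0], MASK, MASK, MASK, MASK),   # keep category only
    "C+S -> P": lambda e: (e[0], MASK, MASK, e[3], e[4]),   # keep category and size
}

def build_examples(elements, W, H, K=10):
    """One (input, target) pair per task and per random permutation of the elements."""
    examples = []
    for task, mask_fn in MASKING_RULES.items():
        for _ in range(K):
            perm = random.sample(elements, len(elements))    # self-consistency permutation
            masked = [mask_fn(e) for e in perm]
            examples.append((layout_to_code(masked, W, H, condition=task),
                             layout_to_code(perm, W, H, condition=task)))
    return examples
```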
#### 3.2.3 Code Rendering
Most existing works require an extra conversion step to render the graphic layouts (Yang et al., 2020; Chai et al., 2023; Zhang et al., 2023), e.g., converting relative positions to absolute positions, which causes information loss. Different from previous work, LayoutNUWA allows immediate rendering as it generates absolute positions directly. Besides, considering potential output issues such as boundary overflow (Inoue et al., 2023) and format errors, we employ regular expressions to remove mismatched formats and implement clipping operations for elements that exceed the background size.
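The post-processing could look like the sketch below: a regular expression keeps only well-formed `<rect>` tags and the parsed boxes are clipped to the canvas. The exact expressions and clipping rules used by LayoutNUWA are not given in the text, so this is an illustrative stand-in matching the attribute format of the earlier sketch.

```python
import re

RECT_RE = re.compile(
    r"<rect data-category=([\w-]+) x=([\d.]+) y=([\d.]+) "
    r"width=([\d.]+) height=([\d.]+)>")

def render_elements(generated_code, W, H):
    elements = []
    for c, x, y, w, h in RECT_RE.findall(generated_code):
        x, y, w, h = map(float, (x, y, w, h))
        w, h = min(w, W - x), min(h, H - y)      # clip boxes that overflow the canvas
        if w > 0 and h > 0:
            elements.append((c, x, y, w, h))     # ready to draw, no coordinate conversion
    return elements
```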
## 4 Experiment
### Experimental Settings
**Dataset.** We evaluate the model performance on three widely used public datasets. RICO (Deka et al., 2017) is a user interface design dataset for mobile applications containing 25 element categories and 6K+ UI layouts. PubLayNet (Zhong et al., 2019) consists of 360K+ layouts for documents with 5 element categories. Magazine (Zheng et al., 2019) is a low-resource magazine layout dataset containing around 4K annotated layouts and 6 element categories. We follow LayoutDM (Inoue et al., 2023) to view the original validation data as the testing set and pre-process all three datasets by discarding the layouts containing more than 25 elements as well as splitting the filtered data into the training and new validation sets by 95% and 5%.
**Evaluation Metrics.** We employ four metrics to evaluate the generation results comprehensively, including Fréchet Inception Distance (FID), Maximum Intersection over Union (mIoU), Alignment (Align.), and Overlap. Among them, FID compares the distribution of generated and real layouts. Similar to the previous work (Inoue et al., 2023), we utilize an enhanced feature extraction model for layouts (Kikuchi et al., 2021) to compute the FID score. We measure the conditional similarity between generated and real layouts using mIoU, which is done by calculating the maximum IoU between bounding boxes of generated and real layouts with the same type set. Alignment and Overlap scores are calculated following the previous work (Li et al., 2019) to evaluate proper element alignment and overlapping in a generated layout, and it is worth noting that we ignore normal overlaps, e.g., elements on top of the background, and discard the layouts that failed to generate. For reference, we show the evaluation results between the validation set and test set as Real data.
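For reference, the box-level IoU underlying the mIoU metric can be computed as below, with boxes given in the centre format \((x,y,w,h)\) used throughout the paper; the full metric additionally matches generated and real layouts sharing the same category set and takes the maximum over matchings, which this sketch omits.

```python
def iou(box_a, box_b):
    (ax, ay, aw, ah), (bx, by, bw, bh) = box_a, box_b
    # centre format -> corner format
    ax0, ay0, ax1, ay1 = ax - aw / 2, ay - ah / 2, ax + aw / 2, ay + ah / 2
    bx0, by0, bx1, by1 = bx - bw / 2, by - bh / 2, bx + bw / 2, by + bh / 2
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```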
**Tasks and Baselines.** We evaluate LayoutNUWA on three conditional layout generation tasks. These include the Category to Size and Position (C \(\rightarrow\) S+P) task, the Category and Size to Position (C+S \(\rightarrow\) P) task, and the Completion task. More concretely, the C \(\rightarrow\) S+P task requires the model to predict the position and size of the element based on its category. For the C+S \(\rightarrow\) P task, the model predicts the position of the element based on both its size and category. Finally, in the completion task, the element's size and position values are randomly masked up to 80%, and the model predicts the entire layout using the remaining values. We compare LayoutNUWA with six strong baselines, including LayoutTrans (Yang et al., 2020), BLT (Kong et al., 2022), LayoutGAN++ (Li et al., 2019), MaskGIT (Chang et al., 2022), DiffusionLM (Li et al., 2022) and LayoutDM (Inoue et al., 2023).
**Implementation Details.** We implement LayoutNUWA with two 7B LLMs: LLaMA2 (L2) (Touvron et al., 2023) and CodeLLaMA (CL) (Roziere et al., 2023). We train LayoutNUWA with two settings: (1) the Domain-Specific (DS) setting, where the model is trained on distinct datasets, and (2) the Domain-Agnostic (DA) setting, where the model is trained on all three datasets, including RICO, PubLayNet, and Magazine. The default configuration for LayoutNUWA utilizes CodeLLaMA (CL) and the Domain-Agnostic (DA) setting, i.e., LayoutNUWA-CL-DA. We set permutation times \(K=10\) and task numbers \(T=3\). For model training, we use the DeepSpeed Library (Rajbhandari et al., 2020) to run all experiments on 64 NVIDIA V100 GPUs. We apply Top-\(p\) sampling (Holtzman et al., 2019) for inference, where \(p=0.9\) and the temperature is \(0.6\), and set the maximum generation length to 512.
Footnote 2: [https://huggingface.co/meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b)
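With a Hugging Face checkpoint, the decoding configuration stated above could be reproduced roughly as follows; the checkpoint name and the example input string are placeholders, and mapping the 512-token limit to `max_new_tokens` is our assumption rather than a detail given in the text.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "codellama/CodeLlama-7b-hf"          # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

masked_layout_code = "I want to generate layout in mobile UI style. ..."   # output of the CI module
inputs = tokenizer(masked_layout_code, return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, top_p=0.9,
                         temperature=0.6, max_new_tokens=512)
completed_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
```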
### Quantitative Evaluation
We report the model performance on three datasets: the Magazine dataset in Tab. 1, and the RICO and PubLayNet datasets in Tab. 2. For the Magazine dataset, LayoutNUWA demonstrates remarkable performance, significantly surpassing all baselines across all tasks. Moreover, it outperforms the strong baseline LayoutDM by more than 50% when assessed with the FID metric.
The significant improvements in Tab. 1 are due to three aspects: 1) previous approaches generated numerical values, while LayoutNUWA generates code with labels, which greatly benefits the model by utilizing the semantic information of layout attributes such as width, height, position, and category; 2) none of the previous methods used LLMs. However, we have introduced LLMs for the first
time, which has resulted in significant performance enhancements, i.e., performance has improved from \(19.206\) to \(9.741\). Furthermore, when we use CodeLLaMA, which is tuned on code language, the performance improves even further to \(8.985\); 3) since different domains require distinct layout formats, early numerical-based methods could only be trained in a domain-specific manner. However, LayoutNUWA is based on code structure and can be trained in a domain-agnostic manner, allowing for complementarity among data from various domains, thus further improving the FID to \(8.791\).
We have also conducted extensive experiments on two other datasets: RICO and PubLayNet, as shown in Tab. 2. LayoutNUWA notably surpasses all baseline methods in the majority of tasks. Although it does not achieve the best performance in two specific tasks, it still secures at least the second-highest performance in those instances. This shows the strong generalization of LayoutNUWA. It is worth mentioning that our model also achieves Align. and Overlap scores closer to the Real Data compared to the baselines. Although previous work has suggested that refinement and discriminator processes can contribute to improving the Align. and Overlap (Inoue et al., 2023; Li et al., 2019) scores, our method attains better results without employing these steps.
| **Model** | **Layout** | **LLM** | **Domain** | C \(\rightarrow\) S+P mIoU (\(\uparrow\)) | C \(\rightarrow\) S+P FID (\(\downarrow\)) | C+S \(\rightarrow\) P mIoU (\(\uparrow\)) | C+S \(\rightarrow\) P FID (\(\downarrow\)) | Completion mIoU (\(\uparrow\)) | Completion FID (\(\downarrow\)) |
|---|---|---|---|---|---|---|---|---|---|
| LayoutTrans | Numerical | - | Specific | 0.116 | 36.207 | 0.153 | 33.931 | 0.228 | 25.804 |
| BLT | Numerical | - | Specific | 0.087 | 65.372 | 0.126 | 41.089 | 0.103 | 97.142 |
| LayoutGAN++ | Numerical | - | Specific | 0.259 | 16.952 | 0.293 | 11.56 | - | - |

Table 1: Quantitative comparison on Magazine dataset, where the bold font denotes the best result and underline represents the second-best performance.

| **Task** | **Models** | RICO mIoU (\(\uparrow\)) | RICO Align. (\(\rightarrow\)) | RICO Overlap (\(\rightarrow\)) | RICO FID (\(\downarrow\)) | PubLayNet mIoU (\(\uparrow\)) | PubLayNet Align. (\(\rightarrow\)) | PubLayNet Overlap (\(\rightarrow\)) | PubLayNet FID (\(\downarrow\)) |
|---|---|---|---|---|---|---|---|---|---|
| C \(\rightarrow\) S+P | LayoutTrans | 0.219 | 0.014 | 13.012 | 11.237 | 0.271 | 0.016 | 3.229 | 38.910 |
| C \(\rightarrow\) S+P | BLT | 0.203 | 0.013 | 11.743 | 14.260 | 0.232 | 0.009 | 16.742 | 76.499 |
| C \(\rightarrow\) S+P | LayoutGAN++ | 0.263 | 0.016 | 3.544 | 6.824 | 0.354 | 0.011 | 1.713 | 10.129 |
| C \(\rightarrow\) S+P | MaskGIT | 0.267 | 0.001 | 26.665 | 27.470 | 0.320 | 0.004 | 1.857 | 16.988 |
| C \(\rightarrow\) S+P | DiffusionLM | 0.299 | 0.018 | 17.665 | 31.644 | 0.262 | 0.027 | 3.532 | 20.021 |
| C \(\rightarrow\) S+P | LayoutDM | 0.275 | 0.010 | 11.938 | 3.576 | 0.310 | 0.010 | 0.024 | 7.915 |
| C \(\rightarrow\) S+P | LayoutNUWA-L2-DS (ours) | 0.351 | 0.002 | 10.109 | 3.728 | 0.337 | 0.009 | 0.028 | 6.986 |
| C \(\rightarrow\) S+P | LayoutNUWA-L2-DA (ours) | 0.386 | 0.011 | 10.214 | 3.010 | 0.324 | 0.011 | 0.077 | 6.890 |
| C \(\rightarrow\) S+P | LayoutNUWA-CL-DS (ours) | 0.377 | 0.009 | 10.263 | 3.706 | 0.376 | 0.278 | 0.083 | 6.715 |
| C \(\rightarrow\) S+P | LayoutNUWA (ours) | **0.445** | **0.004** | **7.943** | **2.524** | **0.385** | **0.001** | 0.086 | **6.975** |
| C+S \(\rightarrow\) P | LayoutTrans | 0.311 | 0.011 | 11.902 | 9.368 | 0.315 | 0.013 | 2.531 | 31.627 |
| C+S \(\rightarrow\) P | BLT | 0.341 | 0.008 | 13.470 | 4.487 | 0.336 | 0.006 | 5.469 | 8.831 |
| C+S \(\rightarrow\) P | LayoutGAN++ | 0.349 | 0.011 | 26.62 | 6.219 | 0.346 | 0.008 | 2.746 | 9.936 |
| C+S \(\rightarrow\) P | MaskGIT | 0.331 | **0.003** | 26.369 | 12.988 | 0.384 | 0.005 | 1.950 | 5.453 |
| C+S \(\rightarrow\) P | DiffusionLM | 0.278 | 0.020 | 11.884 | 15.931 | 0.324 | 0.014 | 3.990 | 16.407 |
| C+S \(\rightarrow\) P | LayoutDM | 0.391 | 0.009 | 12.072 | **2.285** | 0.381 | 0.010 | 2.041 | 4.175 |
| C+S \(\rightarrow\) P | LayoutNUWA-L2-DS (ours) | 0.462 | 0.008 | 10.456 | 30.05 | 0.426 | 0.010 | 1.752 | 4.105 |
| C+S \(\rightarrow\) P | LayoutNUWA-L2-DA (ours) | 0.464 | 0.007 | 10.117 | 27.037 | 0.464 | 0.009 | 1.984 | 3.993 |
| C+S \(\rightarrow\) P | LayoutNUWA-CL-DS (ours) | 0.469 | 0.007 | 9.856 | 2.984 | **0.466** | 0.009 | 1.610 | 4.012 |
| C+S \(\rightarrow\) P | LayoutNUWA (ours) | **0.564** | 0.007 | **7.968** | 2.820 | **0.433** | **0.002** | **0.106** | **3.697** |
| Completion | LayoutTrans | 0.561 | 0.008 | 10.800 | 3.733 | 0.499 | 0.012 | 2.053 | 8.689 |
| Completion | BLT | 0.471 | **0.007** | 53.658 | 12.110 | 0.157 | **0.002** | 109.483 | 155.157 |
| Completion | MaskGIT | 0.537 | 0.024 | 9.242 | 33.463 | 0.349 | 0.011 | 4.768 | 120.193 |
| Completion | DiffusionLM | 0.218 | 0.021 | **8.681** | 22.220 | 0.332 | 0.012 | 4.406 | 16.576 |
| Completion | LayoutDM | 0.580 | 0.002 | 15.676 | 3.924 | 0.377 | 0.011 | 1.891 | 7.570 |
| Completion | LayoutNUWA-L2-DS (ours) | 0.610 | 0.009 | 7.239 | 8.875 | 0.407 | 0.010 | 1.337 | 7.337 |
| Completion | LayoutNUWA-L2-DA (ours) | 0.624 | **0.007** | 10.457 | 8.724 | 0.477 | 0.012 | 1.383 | 7.149 |
| Completion | LayoutNUWA-CL-DS (ours) | **0.641** | **0.007** | 7.529 | 8.734 | 0.473 | 0.012 | 1.311 | 7.253 |
| Completion | LayoutNUWA (ours) | 0.616 | **0.007** | 8.123 | **7.542** | **0.481** | 0.009 | **1.292** | **6.929** |
| Real Data | - | 0.438 | 0.004 | 8.706 | 6.25 | 0.691 | 0.001 | 0.039 | 1.85 |

Table 2: Quantitative comparison on the RICO and PubLayNet datasets, where the bold font denotes the best result.
### Qualitative Evaluation
We render the generated layout code with the Code Rendering (CR) method, and Fig. 3 shows the sampled rendering results of the PubLayNet dataset. By comparing with other baselines, we can observe that the layouts generated by LayoutNUWA exhibit excellent element alignment, and the proportion of overlap between elements is minimal. Additionally, our results are the most consistent with the Real Design data, i.e., the size and position of the generated element are essentially consistent with the real design, indicating that by treating the layout generation task as a code generation task, LayoutNUWA has successfully learned the distribution of document layouts, thus result in more precise and realistic layouts. More sampled cases can be referred to Fig. 5.
## 5 Ablation Study
We investigate the effectiveness of the CIT tuning method in Sec. 5.1 and compare the impact of different output formats and fine-tuning in Sec. 5.2. More concretely, we set the LayoutNUWA-L2-DS model as the basic setting and conduct the ablation studies on the Magazine dataset.
### Effect of Tuning Methods
We progressively reduce the modules in CIT and fine-tune the model using the corresponding constructed data. Specifically, we first exclude the code template and directly convert the element information into an ordered sequence \(\mathbf{S}\) with a task instruction before it, i.e., the instruction tuning method. Then, we further remove the task instruction and directly fine-tune the model using data from different tasks separately, i.e., the numerical tuning method. As shown in Tab. 3, we can observe that the model performance has declined significantly without the code template, and it can only work in the DS setting since the model can simply generate repetitive and out-of-order results that are inconsistent with the element sequence in the DA setting. Furthermore, the numerical tuning method can only support the DS setting as there is no task instruction for the model to distinguish between different tasks, and the model performance is far inferior compared to those of the CIT as such an approach overlooks the rich semantic information among the elements and can not calibrate the prior code knowledge of LLMs.
Figure 3: Samples generated by LayoutNUWA on the PubLayNet dataset.
### Effect of Output Format and Finetuning
We compare the effects of generating the model output in code format versus numerical format. For the numerical output format, we design a Code Infilling task, which makes the LLM predict only the masked values rather than the entire code sequence. As shown in Tab. 4, generating in numerical format increases the failure ratio of model generations, e.g., the model generates repetitive results, and significantly decreases the model performance. This is because the layout produced by the conditional layout generation task should be logical, while predicting only the masked parts can lead to discrete values that lack logic. Besides, due to the autoregressive manner, where the content generated in the next step depends on the previous history, this can result in a higher failure probability when predicting layouts with more masked values. We also conduct a comparison between LayoutNUWA and GPT-4 (Bubeck et al., 2023). Specifically, we allow GPT-4 to perform inference by constructing the input using the CIT method. Tab. 5 shows that code instruct tuning for the LLM is necessary, as using an LLM in a zero-shot manner leads to a high fail rate (a 100% fail rate for LLaMA2 and around 30% for GPT-4).
## 6 Conclusion
In this paper, we propose LayoutNUWA, a groundbreaking approach that treats layout generation as a code generation task, effectively enriching the semantic information of layouts and leveraging the hidden expertise of LLMs. Extensive experiments on multiple datasets have demonstrated the superiority of our method. This research has the potential to revolutionize the field of layout generation and pave the way for further exploration and development of semantic-aware layout generation approaches in various applications.
| **Task** | **Model** | **Layout Format** | **mIoU (\(\uparrow\))** | **Align. (\(\rightarrow\))** | **Overlap (\(\rightarrow\))** | **FID (\(\downarrow\))** | **Fail (\(\downarrow\))** |
|---|---|---|---|---|---|---|---|
| C \(\rightarrow\) S + P | LayoutNUWA-N | Numerical | 0.000 | 0.000 | 0.867 | - | 78.030 % |
| C \(\rightarrow\) S + P | LayoutNUWA-L2-DS | Code | **0.260** | **0.021** | **2.898** | **9.741** | **0.000 %** |
| C + S \(\rightarrow\) P | LayoutNUWA-N | Numerical | 0.000 | 0.000 | 24.959 | 349.231 | 21.717 % |
| C + S \(\rightarrow\) P | LayoutNUWA-L2-DS | Code | **0.358** | **0.020** | **2.483** | **4.682** | **0.000 %** |
| Completion | LayoutNUWA-N | Numerical | 0.000 | 0.000 | 16.602 | - | 29.293 % |
| Completion | LayoutNUWA-L2-DS | Code | **0.418** | **0.020** | **2.309** | **7.257** | **0.253 %** |
| Real Data | - | - | 0.348 | 0.016 | 1.521 | 6.695 | - |

Table 4: Comparison among different output formats.
| **Task** | **Models** | **Tuning Method** | **mIoU (\(\uparrow\))** | **Align. (\(\rightarrow\))** | **Overlap (\(\rightarrow\))** | **FID (\(\downarrow\))** | **Fail (\(\downarrow\))** |
|---|---|---|---|---|---|---|---|
| C \(\rightarrow\) S + P | LayoutNUWA-L2-DS | CIT | **0.260** | **0.021** | **2.898** | **9.741** | **0.000 %** |
| C \(\rightarrow\) S + P | w/o template | Instruct Tuning (DS) | 0.124 | 0.049 | 3.221 | 16.324 | 1.020 % |
| C \(\rightarrow\) S + P | w/o template | Instruct Tuning (DA) | - | - | - | - | 0.000 % |
| C \(\rightarrow\) S + P | w/o template/instruct | Numerical Tuning | 0.126 | 0.053 | 3.581 | 17.982 | 3.571 % |
| C + S \(\rightarrow\) P | LayoutNUWA-L2-DS | CIT | **0.358** | **0.020** | **2.483** | **4.682** | **0.000 %** |
| C + S \(\rightarrow\) P | w/o template | Instruct Tuning (DS) | 0.182 | 0.021 | 2.673 | 12.432 | 0.000 % |
| C + S \(\rightarrow\) P | w/o template | Instruct Tuning (DA) | - | - | - | - | 0.000 % |
| C + S \(\rightarrow\) P | w/o template/instruct | Numerical Tuning | 0.189 | 0.024 | 2.892 | 14.326 | 0.000 % |
| Completion | LayoutNUWA-L2-DS | CIT | **0.418** | **0.020** | **2.309** | **7.257** | **0.253 %** |
| Completion | w/o template | Instruct Tuning (DS) | 0.206 | 0.017 | 2.882 | 15.732 | 5.102 % |
| Completion | w/o template | Instruct Tuning (DA) | - | - | - | - | 6.633 % |
| Completion | w/o template/instruct | Numerical Tuning | 0.214 | 0.020 | 3.003 | 16.243 | 6.122 % |
| Real Data | - | - | 0.348 | 0.016 | 1.521 | 6.695 | - |

Table 3: Comparison among different tuning methods, where "Fail" is the failure ratio of generation.
| **Model** | C \(\rightarrow\) S + P Fail (\(\downarrow\)) | C + S \(\rightarrow\) P Fail (\(\downarrow\)) | Completion Fail (\(\downarrow\)) |
|---|---|---|---|
| LLaMA2 (Zero-Shot) | 100.0 % | 100.0 % | 100.0 % |
| CodeLLaMA (Zero-Shot) | 100.0 % | 100.0 % | 100.0 % |
| GPT-4 (Zero-Shot) | 34.2 % | 28.8 % | 28.5 % |
| LayoutNUWA | **0.0 %** | **0.0 %** | **0.3 %** |

Table 5: Comparison with LLMs. |
2309.06153 | Optoelectronic and Transport Properties of Vacancy Ordered Double
Perovskite Halides: A First-principles Study | In the search for stable lead (Pb) free perovskites, Vacancy ordered double
perovskite (VODP), A$_2$BX$_6$ has emerged as a promising class of materials
for solar harvesting owing to their nontoxicity, better stability, and unique
optoelectronic properties. Here, we present the stability and the key physical
attributes of few selected compounds in a systematic manner using
state-of-the-art first-principle calculations. A careful structural and
stability analysis via simulating convex hull and compositional phase diagrams
for different structural prototypes discloses 14 stable and 1 metastable
compounds in this class. The electronic structure calculations using hybrid
functional reveals six compounds to acquire band gap in the ideal visible
region. These six compounds, namely Cs$_2$SnI$_6$, Cs$_2$PdI$_6$,
Cs$_2$TeI$_6$, Cs$_2$TiI$_6$, Cs$_2$PtI$_6$, and Cs$_2$PdBr$_6$, show high
optical absorption ($\approx$ 10$^{5}$ cm $^{-1}$) giving rise to high
spectroscopic limited maximum efficiency, SLME (15-23\%) in the thin-film
thickness range. Close inspection of transport properties reveals polar optical
phonon scattering to be the dominant mechanism limiting the overall mobility.
Further analysis of the polaron excitations discloses the possibility of large
polaron formation at low to moderate defect concentrations. At high defect
concentrations, ionized impurity scattering takes over. This suggests that, a
simulation based guided control of defect concentrations during synthesis can
yield a desired candidate for promissing device application. Additionally, few
selected compounds show moderate to high electron mobility values ($\sim$13-63
cm$^2$V$^{-1}$ s$^{-1}$) at room temperature. Overall, the present study paves
an important path to help design VODP as Pb-free potential candidates for
future optoelectronic applications. | Supriti Ghorui, Jiban Kangsabanik, M. Aslam, Aftab Alam | 2023-09-12T11:53:03Z | http://arxiv.org/abs/2309.06153v1 | Optoelectronic and Transport Properties of Vacancy Ordered Double Perovskite Halides: A First-principles Study
###### Abstract
In the search for stable lead (Pb) free perovskites, Vacancy ordered double perovskite (VODP), A\({}_{2}\)BX\({}_{6}\) has emerged as a promising class of materials for solar harvesting owing to their nontoxicity, better stability, and unique optoelectronic properties. Recently, this class has been explored for a wide range of applications such as photovoltaics, photodetectors, photocatalysis, and light-emitting diodes. Here, we present the stability and the key physical attributes of few selected compounds in a systematic manner using state-of-the-art first-principle calculations. A careful structural and stability analysis via simulating convex hull and compositional phase diagrams for different structural prototypes discloses 14 stable and 1 metastable compounds in this class. The electronic structure calculations using hybrid functional reveals six compounds to acquire band gap in the ideal visible region. These six compounds, namely Cs\({}_{2}\)SnI\({}_{6}\), Cs\({}_{2}\)PdI\({}_{6}\), Cs\({}_{2}\)TeI\({}_{6}\), Cs\({}_{2}\)TiI\({}_{6}\), Cs\({}_{2}\)PdI\({}_{6}\), and Cs\({}_{2}\)PdBr\({}_{6}\), show high optical absorption (\(\approx\) 10\({}^{5}\) cm \({}^{-1}\)) giving rise to high spectroscopic limited maximum efficiency, SLME (15-23%) in the thin-film thickness range. Close inspection of transport properties reveals polar optical phonon scattering to be the dominant mechanism limiting the overall mobility. Further analysis of the polaron excitations discloses the possibility of large polaron formation at low to moderate defect concentrations. At high defect concentrations, ionized impurity scattering takes over. This suggests that, a simulation based guided control of defect concentrations during synthesis can yield a desired candidate for promissing device application. Additionally, few selected compounds show moderate to high electron mobility values (\(\sim\)13-63 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\)) at room temperature. Overall, the present study paves an important path to help design VODP as Pb-free potential candidates for future optoelectronic applications.
## I I. Introduction
Lead halide perovskites(LHP) have reignited immense research interest in the Photovoltaics (PV) community due to their remarkable power conversion efficiency(PCE) of 25.6% [1] (till date) and affordable device processability. The rapid rise in PCE (3.8% to 25.6%) in a short period of time (2009-2021) is attributed to its high absorption coefficient, high charge carrier mobility, defect tolerance, and cost-effective flexible synthesis. Because of their suitable optoelectronic properties, they have also been explored as photodetectors (PD)[2; 3], photocatalysts(PC) [4; 5], and light emitting diodes (LED) [6; 7]. Yet, there remains two major challenges in their large scale scalability: (1) Lead (Pb) toxicity and (2) stability in the ambient environment. At present, major research efforts at laboratory scale have been devoted to overcome these issues without losing their original PV performance. [8; 9; 10; 11; 12] This has led to a detailed exploration of the diverse chemical space of halides perovskites (ABX\({}_{3}\))[13] and their derivatives.[14; 15; 16] Among these perovskite derivatives, three major stoichiometric classes have garnered immense research interest. One of the classes namely double perovskites (DP) with stoichiometry A\({}_{2}\)BB\({}^{2}\)X\({}_{6}\) is mainly generated via transmutation of a combination of trivalent and monovalent elements at B-sites.[17] For example, Cs\({}_{2}\)BiAgBr\({}_{6}\),[17; 18; 19] Cs\({}_{2}\)InAgCl\({}_{6}\),[20] etc. belong to DP class which have been extensively explored for various optoelectronic applications. Similarly, A\({}_{3}\)B\({}_{2}\)X\({}_{9}\) (e.g. Cs\({}_{3}\)Bi\({}_{2}\)I\({}_{9}\)[21], Cs\({}_{3}\)Sb\({}_{2}\)I\({}_{9}\)[22] etc.) and A\({}_{2}\)BX\({}_{6}\) (e.g. Cs\({}_{2}\)SnI\({}_{6}\)[23], Cs\({}_{2}\)TiI\({}_{6}\)[24] etc.) structures are constructed by replacing with trivalent and tetravalent atoms respectively and leaving a vacant B-site. Here, A\({}_{2}\)BX\({}_{6}\) is also called vacancy ordered double perovskite where corner shared alternate BX\({}_{6}\) octahedras are removed along all three directions from the unit cell as shown in Figure 1(a).
In the past few years, vacancy-ordered double perovskite family (A\({}_{2}\)BX\({}_{6}\)) has gradually drawn ample attention in a wide range of optoelectronic applications owing to their better environmental durability, tunable optical and electronic properties. For example, Cs\({}_{2}\)SnI\({}_{6}\) has been studied as a potential candidate in PV[25], LED, PD[3; 26], and PC applications due to its direct band gap nature in the visible range( 1.1-1.62 eV), a high absorption coefficient (\(\approx\)10\({}^{5}\) cm\({}^{-1}\))[23], a low to high carrier mobility (\(\approx\)2-510 cm\({}^{2}\) V\({}^{-1}\) s\({}^{-1}\))[27; 28; 29; 30; 31]. The wide range of measured mobilities of Cs\({}_{2}\)SnI\({}_{6}\) can be attributed to variations resulting from different synthesis and characterization methodologies. Additionally, significant discrepancies have been observed between theoretical and experimental results regarding the transport properties of this material.[16; 28; 32; 33] The intrinsic limitations to mobility in Cs\({}_{2}\)SnI\({}_{6}\) are still not fully understood, and the underlying scattering mechanisms governing carrier transport remain elusive. Therefore, a comprehensive and systematic study encompassing both theoretical and experimental investigations is highly desired to unravel the mobility ambiguity in Cs\({}_{2}\)SnI\({}_{6}\) and shed light on its transport characteristics. As of now, this compound
exhibits a PCE of only 2.1%.[25] In contrast, substitutional alloying in Cs\({}_{2}\)SnCl\({}_{6}\) yields high photoluminescence quantum yield (PLQY) of 95.4% making it promising for further exploration in LED applications.[34] Despite considerable investigation into its structural, electronic, and optical properties, the elucidation of charge-carrier dynamics in Cs\({}_{2}\)SnI\({}_{6}\) still poses challenges that hinder the optimization of conversion efficiencies.
Similarly, Cs\({}_{2}\)TiBr\({}_{6}\), Cs\({}_{2}\)TeI\({}_{6}\), and Cs\({}_{2}\)PtI\({}_{6}\) are also studied experimentally for PV absorbers with their band gaps in the ideal visible range: 1.8, 1.5, and 1.4 eV respectively, along with high absorption coefficients (\(\sim\)10\({}^{5}\) cm\({}^{-1}\)).[35; 36] Here, device efficiency for Cs\({}_{2}\)TiBr\({}_{6}\) as PV absorber is reported to be 3.3%.[37] Indirect band gap and material instability are reported to be responsible for poor PCE in this case. In another report, PV device with Cs\({}_{2}\)PtI\({}_{6}\) shows PCE of 13.88%, which is a remarkable improvement on the reported efficiencies among all the materials belonging to this class till date.[36] Contribution of larger carrier lifetimes along with direct band gap in ideal visible range and robust stability help Cs\({}_{2}\)PtI\({}_{6}\) to attain the high PCE. There are reports of synthesizing Pd[38], and Zr[39] based nanomaterials experimentally but not much has been explored in the direction of optoelectronics. These background clearly indicates that A\({}_{2}\)BX\({}_{6}\) class is extremely interesting and fertile from the application perspective, yet a detailed systematic study on their optoelectronic, carrier transport and phonon properties connecting these observations is lacking. Moreover, it is also noticed that substitutional alloying/doping of pure material is an important strategy to improve optoelectronic properties, which again necessitates an in-depth understanding of the pure materials themselves.
In this communication, we present a detailed and systematic study on the A\({}_{2}\)BX\({}_{6}\) class of materials by using highly accurate ab-initio calculations. First, we have performed a thorough stability analysis which includes choice of different structural prototypes, thermodynamical stability via chemical potential phase diagram and convex hull analysis, and lattice dynamics simulation. Next, we have studied the electronic properties of the stable compounds using hybrid (HSE06) functional, which is known to predict reasonably accurate electronic structure information. Optical absorption and PV device parameters are calculated on the promising set of systems showing band gaps in the ideal visible region. Finally, carrier transport properties of these compounds are studied by considering the important scattering mechanisms. The importance of electron-phonon interactions, calculated within the temperature-dependent Feynman polaron model, is also discussed in some detail. We believe that such an in-depth study not only provides a solid physical basis on these class of semiconductors but will also be immensely beneficial for researchers working on their device application in the field of PV, LED, PD, and PC.
## II II. Structural properties and stability
Vacancy ordered double perovskite, A\({}_{2}\)BX\({}_{6}\), is a class of compounds where alternate BX\({}_{6}\) octahedra are removed from the ABX\({}_{3}\) unit cell, as shown in Figure 1(a). In other words, 50% of the B cations are missing compared to the close-packed A\({}_{2}\)BB\({}^{\prime}\)X\({}_{6}\) perovskite structure. Here, A possesses a +1 oxidation state, B has a +4 oxidation state, and X is a halide anion with a -1 oxidation state. In general, for perovskites, different crystal structures are possible depending on the ionic radii of the constituent elements. These structures are roughly dictated by a few important geometrical factors, as defined below:
* Goldschmidt's tolerance factor: \(t=\left(\mathrm{r_{A}}+\mathrm{r_{X}}\right)/\left[\sqrt{2}\left(\mathrm{r_{B}}+\mathrm{r_{X}}\right)\right]\)
* Octahedral factor: \(\mu=\mathrm{r_{B}}/\mathrm{r_{X}}\)
* Radius ratio: \(\mathrm{r_{A}}/\left(\mathrm{D_{XX}}-\mathrm{r_{X}}\right)\)
In the above expressions, \(\mathrm{r_{A}}\), \(\mathrm{r_{B}}\), \(\mathrm{r_{X}}\) and \(\mathrm{D_{XX}}\) are the empirical ionic radii of the constituent elements A, B, X and the nearest-neighbour X-X bond length, respectively, in the A\({}_{2}\)BX\({}_{6}\) structure. All the calculated parameters are tabulated in Table S1 of the supplementary information (SI).[40] The calculated Goldschmidt tolerance factors predict the formation of cubic structures, consistent with our stability analysis (discussed later) and with experimental observations for a few of the compounds reported in the literature.[24; 28; 29; 36; 38; 41]
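For concreteness, the sketch below evaluates the three geometrical descriptors from a set of ionic radii; the radii used here are illustrative placeholders rather than the values tabulated in Table S1.

```python
# Minimal sketch: geometrical descriptors for an A2BX6 composition.
# The ionic radii below (in Angstrom) are illustrative placeholders.
from math import sqrt

def tolerance_factor(r_a, r_b, r_x):
    """Goldschmidt tolerance factor t = (r_A + r_X) / [sqrt(2) * (r_B + r_X)]."""
    return (r_a + r_x) / (sqrt(2.0) * (r_b + r_x))

def octahedral_factor(r_b, r_x):
    """Octahedral factor mu = r_B / r_X."""
    return r_b / r_x

def radius_ratio(r_a, r_x, d_xx):
    """Radius ratio r_A / (D_XX - r_X), with D_XX the nearest-neighbour X-X distance."""
    return r_a / (d_xx - r_x)

# Hypothetical numbers (Angstrom) for a Cs2BI6-type compound:
r_A, r_B, r_X, D_XX = 1.88, 0.69, 2.20, 4.20
print(tolerance_factor(r_A, r_B, r_X))   # t close to 1 favours the cubic phase
print(octahedral_factor(r_B, r_X))
print(radius_ratio(r_A, r_X, D_XX))
```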
In this work, we have investigated the following A\({}_{2}\)BX\({}_{6}\) compounds: A = Cs; B = Ge, Te, Se, Sn, Pd, Pt, Ti, Zr, Hf; X = I, Br. For each compound, we have considered the seven most common structural prototypes (as reported in the International Crystal Structure Database (ICSD))[42; 43] for the A\({}_{2}\)BX\({}_{6}\) class of compounds. The space groups of these seven structures are Fm-3m (cubic), I4/m (tetragonal), I4/mmm (tetragonal), P-3m1 (hexagonal), Pnma (orthorhombic), P4/mnc (monoclinic), and P12\({}_{1}\)/c1 (monoclinic). These crystal structures are shown in Fig. S1 of the SI.[40] Most of these structures are very similar in symmetry and differ in energy only by a few meV (3-4 meV). After structural optimization, the lowest-energy structure for most of the above compounds turns out to be cubic (Fm-3m). It has been observed experimentally that several Cs-based iodide and bromide compounds indeed crystallize in the cubic space group.[24; 28; 29; 36; 38; 41]
To further assess the chemical stability, we have calculated the convex hull energies (E\({}_{\mathrm{hull}}\)) of these compounds with respect to possible secondary phases available in ICSD, open quantum materials database (OQMD)[44; 45] and materials project (MP) database[46]. As evident from Fig. 1(b), most of the compounds lie on the convex hull i.e. E\({}_{\mathrm{hull}}\) = 0, except Cs\({}_{2}\)GeI\({}_{6}\), Cs\({}_{2}\)SeI\({}_{6}\)
Cs\({}_{2}\)PdI\({}_{6}\), and Cs\({}_{2}\)GeBr\({}_{6}\), confirming the stability of the compounds on the hull. For the remaining four compounds, E\({}_{\rm hull}\) ranges between 5-60 meV/atom, indicating a likelihood of chemical metastability or instability.
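As an illustration of how such a convex hull analysis can be set up, the following hedged sketch uses pymatgen's phase-diagram utilities, assuming pymatgen is available and that DFT total energies for the target and competing phases have already been computed; all energies shown are placeholders, not the values used in this work.

```python
# Hedged sketch: E_hull from a set of computed phases using pymatgen.
# Each PDEntry takes (composition, total energy in eV); the numbers here
# are placeholders only.
from pymatgen.core import Composition
from pymatgen.analysis.phase_diagram import PhaseDiagram, PDEntry

entries = [
    PDEntry(Composition("Cs2SnI6"), -30.0),   # target compound (placeholder energy)
    PDEntry(Composition("CsI"), -7.0),        # competing phases (placeholder energies)
    PDEntry(Composition("SnI4"), -12.0),
    PDEntry(Composition("SnI2"), -8.0),
    PDEntry(Composition("Cs"), -0.9),         # elemental references
    PDEntry(Composition("Sn"), -3.8),
    PDEntry(Composition("I"), -1.5),
]

pd = PhaseDiagram(entries)
target = entries[0]
# Energy above the convex hull in eV/atom; 0 means the phase lies on the hull.
print(pd.get_e_above_hull(target))
```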
Next, in order to explore the most probable secondary phases during synthesis of Cs\({}_{2}\)BX\({}_{6}\) materials, we have calculated the compositional (chemical potential) phase diagrams for the materials on the convex hull. More details about the phase diagram calculations/analysis are given in Sec. S1(A) of the SI.[40] Figure 1(c-g) shows the phase diagrams for those materials which are potentially suitable for optoelectronic applications (based on their band gap, optical and transport properties, as discussed later). The phase diagrams for the remaining compounds are displayed in Figure S2 of the SI.[40] The green shaded portions show the stability regions of these materials. The extent of the stability region directly correlates with the ease/difficulty of experimental synthesis. The theoret
Figure 1: (a) Crystal structure of A\({}_{2}\)BX\({}_{6}\) compounds in the Fm-3m space group. (b) Convex energy hull (E\({}_{\rm hull}\)) for Cs\({}_{2}\)BX\({}_{6}\) (B = Pd, Pt, Ti, Hf, Zr, Ge, Te, Se, Sn; X = I, Br). The red circle denotes the top of each bar. Compositional chemical potential phase diagrams of (c) Cs\({}_{2}\)TiI\({}_{6}\), (d) Cs\({}_{2}\)PdBr\({}_{6}\), (e) Cs\({}_{2}\)PtI\({}_{6}\), (f) Cs\({}_{2}\)SnI\({}_{6}\), and (g) Cs\({}_{2}\)TeI\({}_{6}\) with respect to competitive secondary phases. The green shaded regions show the stable regions of the corresponding materials.
ically optimized lattice parameters and bond lengths (B-X) of all the stable compounds in their cubic structure are displayed in Table S2 of the SI.[40]
Further, we have checked the dynamical stability of these compounds by calculating their phonon dispersions, shown in Figure S3 of the SI.[40] The absence of any visible imaginary phonon modes indicates the dynamical stability of these compounds. For Cs\({}_{2}\)SnI\({}_{6}\) and Cs\({}_{2}\)TiI\({}_{6}\), one can observe small negative phonon frequencies, the magnitude of which decreases with increasing supercell size. This is because the latter captures the effect of higher-order inter-atomic force constants more accurately. This is also evident in previously reported phonon dispersions for Cs\({}_{2}\)SnI\({}_{6}\).[47; 48] Nevertheless, these compounds have already been experimentally synthesized, and hence are naturally stable.[28; 49]
Following stability analysis, we have further studied the electronic structures of 15 (14 stable and 1 metastable) compounds in the next section.
## III III. Electronic structure
Band structure calculations for all the compounds are initially performed using the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional[50]. As the PBE functional is well known to underestimate the band gap, we also employ the hybrid Heyd-Scuseria-Ernzerhof (HSE06)[51] functional, which gives a more accurate estimate of the band gap in comparison to experiment. The spin-orbit coupling (soc) effect is included in all the calculations. Band structures for the potential compounds calculated using the PBE+soc functional (band gaps are scissor-shifted to the HSE+soc values) are shown in Figure 2. In the A\({}_{2}\)BX\({}_{6}\) class of compounds, the topology of the band structure calculated using the HSE+soc functional is very similar to that calculated using the PBE+soc functional, except for the enlargement of the band gap in the former (see Figure S4 of the SI for a few representative cases).
Figure 2 also shows the optical transition probability (square of the dipole transition matrix elements, p\({}^{2}\)) and the total/orbital-projected density of states (PDOS) of Cs\({}_{2}\)TiI\({}_{6}\), Cs\({}_{2}\)PdBr\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\), and Cs\({}_{2}\)TeI\({}_{6}\), respectively. The HSE06+soc band gap values for the respective compounds are provided in Table 1. The band structures, PDOS and respective band gap values for the other compounds are provided in Figures S5-S7 and Tables S3 and S4 of the SI.[40] In the Fm-3m phase, the estimated band gap values lie between 0.72 eV and 4.31 eV for the different compounds. Optical transitions at the fundamental direct gaps are dipole forbidden for all the compounds, as confirmed by the calculated optical transition probability (p\({}^{2}\)). Here, the presence of inversion symmetry plays the key role in inducing parity-forbidden transitions for these compounds, effectively increasing the optical band gap.
In the present study, we considered 9 different elements at the B site, belonging to 4 distinct groups in the periodic table. Despite all elements having a +4 oxidation state, their valence electron orbital configurations differ, resulting in distinct electronic structures, including variations in band structure and band gap types among the compounds. In the following, we shall discuss the electronic structure of representative compounds from each group and compare them with the electronic structures of other compounds within the same group, including different halides.
For Cs\({}_{2}\)TiI\({}_{6}\), the band gap is indirect in nature, with the conduction band minimum (CBM) at X and the valence band maximum (VBM) at \(\Gamma\), but the direct band gap at \(\Gamma\) is very close to the indirect band gap value (within \(\sim\)50 meV) (Table 1). From the orbital-projected density of states (PDOS), we observe that the CBM is comprised of Ti-d and I-p, i.e. B-d and X-p, states (see Figure 2(a,b)). The electronic band gap value is 1.77 eV, which is overestimated by 0.75 eV with respect to the experimental value (1.02 eV)[24]. The calculated optical band gap lies within 100 meV of the fundamental direct gap. Apart from that, the large difference between the calculated electronic band gap and the optically measured experimental band gap can be attributed to the excitonic effect (not taken into account here) and the defects present in the measured sample, as discussed by B. Cucco et al.[52]. All the electronic structure information for the rest of the compounds can be found in Figures S5-S7 and Tables S3 and S4 of the SI.[40] It is clearly evident that the band gap increases from Ti \(\rightarrow\) Zr \(\rightarrow\) Hf and also from I \(\rightarrow\) Br. In this group, only Cs\({}_{2}\)TiI\({}_{6}\) shows a band gap in the ideal visible region.
Cs\({}_{2}\)PdI\({}_{6}\) shows an indirect band gap in both space groups, with the CBM at X and the VBM at \(\Gamma\) (see Fig. S5 of the SI). The optically allowed direct band gap (0.88 eV) is very close to the indirect band gap value (0.72 eV) (shown in Table 1). Experimentally, Cs\({}_{2}\)PdI\({}_{6}\) nanocrystals[41] have been synthesized and a band gap of 0.69 eV is reported. The reason behind the overestimation is likely similar to that explained for the case of Cs\({}_{2}\)TiI\({}_{6}\). In this case, the CBM is comprised of Pd-d and I-p orbitals while the VBM is composed of only the I-p orbital (see Fig. S5 of the SI). Like Cs\({}_{2}\)PdI\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)PtBr\({}_{6}\), and Cs\({}_{2}\)PdBr\({}_{6}\) show similar orbital contributions at both the CBM and VBM, giving rise to the indirect nature of the band gap. Their band gap values, along with the formation energetics and the difference between direct and indirect band gaps, are presented in the SI (see Tables S3 and S4, Fig. S5 and Fig. 2(c,e)). For Cs\({}_{2}\)PtI\({}_{6}\) and Cs\({}_{2}\)PdBr\({}_{6}\), the calculated band gaps are close to the experimentally reported values for Cs\({}_{2}\)PtI\({}_{6}\) powder [53; 54] and Cs\({}_{2}\)PdBr\({}_{6}\) nanocrystals [38; 41], respectively. Here, we observe an increase in band gap going from Pd \(\rightarrow\) Pt and also from I \(\rightarrow\) Br. In this case, the Cs\({}_{2}\)PdI\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), and Cs\({}_{2}\)PdBr\({}_{6}\) compounds show band gaps within the ideal visible range.
The band structure analysis of Cs\({}_{2}\)TeI\({}_{6}\) reveals that it has an indirect band gap with a value of 1.85 eV, consistent
with the study by Maughan et al.[28]. From the PDOS analysis, we observe that the CBM is comprised of Te-p and I-p orbitals whereas the VBM is made up of the I-p orbital (see Figure 2(i) and (j)). The calculated electronic band gap value of 1.85 eV is 0.26 eV higher than the experimentally reported value [28]. For Cs\({}_{2}\)TeBr\({}_{6}\), the band gap nature and the orbital contributions at both the CBM and VBM are similar to those of Cs\({}_{2}\)TeI\({}_{6}\). The related electronic properties can be found in the SI (see Table S4 and Fig. S6). All the electronic structure information for Cs\({}_{2}\)SeBr\({}_{6}\), which shows similar orbital characteristics, can be found in the SI (see Table S4 and Figure S6(c,d)).
For Cs\({}_{2}\)SnI\({}_{6}\), the calculated band gap value is 0.85 eV, which is direct in nature with band edges at \(\Gamma\), in agreement with the values reported by Maughan et al.[28] This is 0.38 eV higher than the experimentally reported value [28]. From the orbital analysis, we observe that the CBM is made up of Sn-s and I-p orbitals, and the VBM is comprised of the I-p orbital (see Fig. 2(g,h)). For Cs\({}_{2}\)SnBr\({}_{6}\), the band gap nature and the orbital contributions at both the CBM and VBM remain similar to those of Cs\({}_{2}\)SnI\({}_{6}\). The related electronic properties can be found in the SI (see Table S4 and Fig. S6(a,b)).
To summarize the electronic properties, one should note that the calculated electronic band gaps are always overestimated compared to the experimentally reported optical band gaps, which is a well-known fact.[52] Contrary to previous reports, the optical band gaps are close to the lowest direct band gaps, as confirmed by our calculation of the optical transition probability. We believe that the most probable reasons for the theoretical overestimation of the band gaps are the excitonic effects (not included in the present calculations) and the defects present in the experimental samples, as discussed by B. Cucco et al.[52].
From the electronic structure analysis, we notice that
Figure 2: Band structures and the square of the dipole transition matrix elements (p\({}^{2}\)) between the VBM and CBM for (a) Cs\({}_{2}\)TiI\({}_{6}\), (c) Cs\({}_{2}\)PdBr\({}_{6}\), (e) Cs\({}_{2}\)PtI\({}_{6}\), (g) Cs\({}_{2}\)SnI\({}_{6}\), and (i) Cs\({}_{2}\)TeI\({}_{6}\), respectively. (b), (d), (f), (h) and (j) show the projected density of states (PDOS) for the same set of compounds, respectively. All the calculations are done using the PBE functional including the spin-orbit coupling (soc) effect, while the band gap is scissor-shifted to the HSE+soc calculated values. In the band structure plots, the VBM and CBM are indicated via green and red circles, respectively.
the band gaps of Cs\({}_{2}\)TeI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\), Cs\({}_{2}\)PdI\({}_{6}\) (Pnma), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)TiI\({}_{6}\), and Cs\({}_{2}\)PdBr\({}_{6}\) lie in the ideal visible range for photovoltaic applications. Therefore, we shall now focus on the optical properties of these six compounds, along with the well-known descriptor 'spectroscopic limited maximum efficiency' (SLME) (as proposed by Yu et al.[55]), to better understand their potential as solar absorbers.
## IV IV. Optical Properties
Figure 3(a) shows the absorption coefficients for the six promising compounds mentioned above. All these compounds can act as potential solar absorbers, as their absorption coefficients are as high as 10\({}^{4}\)-10\({}^{5}\) cm\({}^{-1}\) in the visible range. The optical absorption is governed by two factors: (1) the optical joint density of states (JDOS) and (2) the optical transition strength. As we can see in Figure S5(a-d),[40] the square of the dipole transition matrix elements (i.e. the transition strength) for Cs\({}_{2}\)PdI\({}_{6}\) is quite high, contributing to better optical absorption. This can be attributed to the Cs-p, I-p to Pd-d transitions. Apart from that, the JDOS is also likely to be high, as the bands near the CBM and VBM are rather flat. They are comprised of 'p' and 'd' orbitals, showing a more localized nature compared to the other compounds. In addition to Cs\({}_{2}\)PdI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\) and Cs\({}_{2}\)TeI\({}_{6}\) also show similar absorption coefficient spectra. The absorption coefficient is directly related to the frequency-dependent dielectric function of a semiconductor via the following equation
\[\alpha(E)=\frac{2\omega}{c}\sqrt{\frac{\sqrt{\epsilon_{re}^{2}+\epsilon_{im}^ {2}}-\epsilon_{re}}{2}} \tag{1}\]
where \(E\) is the incident photon energy, \(\omega\) is the angular frequency related to \(E\) via \(E=\hbar\omega\), \(c\) is the velocity of light, and \(\epsilon_{re}\) and \(\epsilon_{im}\) are the real and imaginary parts of the dielectric function, respectively. Figure 3(b) shows the thin-film thickness dependence of the spectroscopic limited maximum efficiency (SLME), which turns out to be above 15% for all six compounds. Interestingly, we can see a higher SLME for the Cs\({}_{2}\)PdI\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\) and Cs\({}_{2}\)TiI\({}_{6}\) compounds as compared to Cs\({}_{2}\)SnI\({}_{6}\). Such increased SLME is essentially attributed to the high absorption arising from I-p to Pd/Pt/Ti-d orbital transitions as well as suitable band gaps. In Table 1, we present the simulated device parameter values for the six compounds: short-circuit photo-current density (J\({}_{\rm sc}\)), open-circuit voltage (V\({}_{\rm oc}\)), fill factor (\(FF\)), maximum current density (J\({}_{\rm max}\)), and maximum voltage (V\({}_{\rm max}\)) obtained from the SLME calculation. The detailed description and method of calculation of these parameters can be found in the SI.[40] As expected, materials with higher band gaps exhibit higher V\({}_{oc}\) values, while materials with lower band gaps acquire higher J\({}_{sc}\) values. The other compounds do not have SLME values as high as these six materials owing to their higher band gaps. Their absorption coefficients and SLME values are shown in Figures S8 and S9 of the SI, respectively.[40] Furthermore, it is worth noting that the band gaps of the other compounds are distributed within the visible range (larger than 1.8 eV), which makes them suitable for LED and photocatalytic water-splitting applications. Alloying at the B and X sites is another avenue to tune the optoelectronic properties of these systems and hence make them suitable for different applications.
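A minimal numerical sketch of Eq. (1) is given below; the dielectric data are synthetic placeholders used only to illustrate the evaluation of the absorption coefficient, not DFT output for any of the compounds studied here.

```python
# Minimal numpy sketch of Eq. (1): absorption coefficient from the real and
# imaginary parts of the frequency-dependent dielectric function.
import numpy as np

HBAR_EV_S = 6.582119569e-16   # reduced Planck constant in eV*s
C_CM_S = 2.99792458e10        # speed of light in cm/s

def absorption_coefficient(energy_ev, eps_re, eps_im):
    """alpha(E) = (2*omega/c) * sqrt((sqrt(eps_re^2 + eps_im^2) - eps_re)/2), in cm^-1."""
    omega = energy_ev / HBAR_EV_S                              # angular frequency, rad/s
    kappa = np.sqrt((np.sqrt(eps_re**2 + eps_im**2) - eps_re) / 2.0)
    return 2.0 * omega * kappa / C_CM_S

# Placeholder spectrum on a 0.5-4 eV grid with a synthetic ~1.3 eV absorption onset:
E = np.linspace(0.5, 4.0, 200)
eps_re = 5.0 - 0.2 * E
eps_im = np.where(E > 1.3, 2.0 * (E - 1.3), 0.0)
alpha = absorption_coefficient(E, eps_re, eps_im)
print(alpha.max())   # of order 1e5 cm^-1 in the strongly absorbing region
```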
## V V. Transport properties
A detailed analysis of the optoelectronic properties and calculation of the solar efficiency (SLME) reveals six compounds to be promising. This can be attributed to their optimal band gaps falling within the ideal visible region of the solar spectrum, coupled with their excellent absorption coefficients. However, in a practical photovoltaic device, extraction of charge carriers is one of the key components determining the power conversion efficiency. As such, the mobility of the charge carriers is an integral quantity dictating the promise of a semiconductor for solar harvesting. Most past theoretical studies on photovoltaic materials rely on transport properties calculated within the constant relaxation time approximation (RTA). Within this approximation, all the scattering mechanisms are averaged out via a single relaxation time (chosen to be approximately 10 fs). This practice, however, can be misleading, as the carrier relaxation time is a complex parameter which depends sensitively on a number of physical properties and can be significantly different for different materials belonging to the same class (as illustrated in this study). In this section, we perform a thorough analysis of the carrier mobilities of these compounds considering three relevant scattering mechanisms, namely acoustic deformation potential (ADP), ionized impurity (IMP), and polar optical phonon (POP) scattering. We have excluded piezoelectric scattering due to the inherent centro-symmetry present in these compounds. In Figures 4 and 5, we show the temperature and defect-concentration dependence of the electron and hole mobilities (\(\mu_{e}\) and \(\mu_{h}\)) for these compounds. The contributions of the individual scattering mechanisms to these mobilities for the six compounds are provided in Figures S10 to S15 of the SI.[40]
Figure S16 of the SI[40] displays the total relaxation times of the six compounds at varying defect concentrations, ranging from 10\({}^{10}\) cm\({}^{-3}\) to 10\({}^{20}\) cm\({}^{-3}\), at three representative temperatures (100 K, 300 K, and 500 K) for both hole and electron transport. For defect concentrations in the low to moderate range, the relaxation times remain almost constant. However, as the defect concentration increases, the relaxation times vary in an irregular manner. To comprehend the cause of this anomalous behavior, a more in-depth analysis was conducted. The relaxation times for all three scattering mechanisms were calculated for each compound and plotted in Figures S17 to S22.[40] A close inspection of these data confirms that in the low to moderate concentration range, the primary scattering mechanism is POP scattering. In contrast, as the concentration increases into the higher range, the dominant scattering mechanism shifts to IMP scattering, resulting in the emergence of the anomalous behavior. This unusual behavior is also reflected in the mobilities shown in Fig. 4.
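In the relaxation-time picture underlying this analysis, the per-mechanism lifetimes combine through Matthiessen's rule, so the channel with the shortest lifetime controls the total. The sketch below illustrates this with placeholder lifetimes; it is not the AMSET implementation itself.

```python
# Illustrative sketch: 1/tau_total = 1/tau_ADP + 1/tau_IMP + 1/tau_POP,
# so the shortest per-mechanism lifetime dominates. Lifetimes are placeholders in fs.
import numpy as np

def total_relaxation_time(tau_adp, tau_imp, tau_pop):
    """Combine per-mechanism relaxation times (same units) via Matthiessen's rule."""
    rates = 1.0 / np.asarray([tau_adp, tau_imp, tau_pop], dtype=float)
    return 1.0 / rates.sum()

# Low/moderate doping: POP dominates; high doping: IMP takes over.
print(total_relaxation_time(tau_adp=500.0, tau_imp=5000.0, tau_pop=10.0))  # ~9.8 fs
print(total_relaxation_time(tau_adp=500.0, tau_imp=3.0,    tau_pop=10.0))  # ~2.3 fs
```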
Turning to the behavior of the mobilities for each individual compound, one can notice that at low temperature (100 K), the hole mobility (\(\mu_{h}\)) is highest for Cs\({}_{2}\)TiI\({}_{6}\) (\(\sim\)20.9 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\)). With increasing temperature, \(\mu_{h}\) decreases slowly to reach a shallow minimum and increases again with increasing defect concentration. At
Table 1: Simulated band gap (\(E_{g}\)), difference between the electronic and optically allowed direct band gap (\(\Delta E_{g}^{da}\)), short-circuit current density (J\({}_{SC}\)), open-circuit voltage (V\({}_{OC}\)), current density (J\({}_{max}\)) and voltage (V\({}_{max}\)) at maximum power, spectroscopic limited maximum efficiency (SLME), and fill factor (FF) for the Cs\({}_{2}\)TiI\({}_{6}\), Cs\({}_{2}\)PdBr\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\), Cs\({}_{2}\)TeI\({}_{6}\), and Cs\({}_{2}\)PdI\({}_{6}\) compounds. ID and D indicate the indirect and direct nature of the band gaps, respectively. All the device-related parameters are shown for a 500 nm thickness at 298 K. Experimental band gaps (\(E_{g}^{(expt)}\)) are listed for comparison.

| Compound | \(E_{g}^{(expt)}\) (eV) | \(E_{g}\) (HSE+soc) (eV) | \(\Delta E_{g}^{da}\) (meV) | J\({}_{SC}\) (mA cm\({}^{-2}\)) | J\({}_{max}\) (mA cm\({}^{-2}\)) | V\({}_{OC}\) (V) | V\({}_{max}\) (V) | FF | SLME (\(\eta\%\)) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Cs\({}_{2}\)TiI\({}_{6}\) | 1.02 [24] | 1.77 (ID) | 72 | 16.64 | 16.34 | 1.50 | 1.39 | 0.91 | 22.78 |
| Cs\({}_{2}\)PdBr\({}_{6}\) | 1.6 [38], 1.69 [41] | 1.61 (ID) | 110 | 17.62 | 17.26 | 1.35 | 1.25 | 0.91 | 21.63 |
| Cs\({}_{2}\)PtI\({}_{6}\) | 1.25 [53], 1.37 [54], 1.4 [36] | 1.31 (ID) | 149 | 22.23 | 21.66 | 1.08 | 0.98 | 0.89 | 21.26 |
| Cs\({}_{2}\)SnI\({}_{6}\) | 1.25 [28], 1.62 [29] | 0.87 (D) | 41 | 32.58 | 31.33 | 0.72 | 0.64 | 0.85 | 20.07 |
| Cs\({}_{2}\)TeI\({}_{6}\) | 1.59 [28] | 1.85 (ID) | 190 | 12.95 | 12.72 | 1.55 | 1.45 | 0.92 | 18.44 |
| Cs\({}_{2}\)PdI\({}_{6}\) | 1.41 [41] | 0.72 (ID) | 166 | 43.20 | 40.94 | 0.54 | 0.46 | 0.81 | 18.97 |
higher temperatures, Cs\({}_{2}\)PdI\({}_{6}\) shows the highest hole mobility (\(\sim\)5.9 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\) at 300 K and \(\sim\)4.2 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\) at 500 K) among all the compounds. This compound also shows the highest electron mobility (\(\sim\)183 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\) at 100 K, \(\sim\)51 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\) at 300 K, and \(\sim\)32 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\) at 500 K) throughout the temperature range. At room temperature, the hole mobilities remain relatively low but, except for Cs\({}_{2}\)TiI\({}_{6}\), the electron mobilities show moderate to high values (\(\sim\)13-63 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\)). This is commensurate with the electronic band structures of these compounds, where the VBM shows flat bands whereas the CBM is more dispersive. Consequently, n-type doping could prove advantageous for efficient charge carrier collection in photovoltaic devices, in line with the experimental findings for this class of compounds.[27; 29; 56] A closer look at the individual contributions from the different scattering mechanisms shows that at low to moderate defect concentrations (\(<10^{18}\) cm\({}^{-3}\)), POP scattering is the dominant mechanism limiting the mobilities. With increasing temperature, the number of activated polar optical phonons increases, and as a result we see a decrease in the overall mobility going from 100 K \(\rightarrow\) 300 K \(\rightarrow\) 500 K. At higher defect concentrations (10\({}^{18}\)-10\({}^{20}\) cm\({}^{-3}\)), ionized impurity scattering begins to dominate, as can be seen from Figures S10(a,b,c)-S15(a,b,c) of the SI.[40] At these concentrations, one more mechanism starts to impact the carrier mobility: the screening of polar optical phonons by free carriers. This in effect reduces the POP scattering, effectively increasing the overall mobility in some cases. Temperature also has an effect on this screening mechanism: at higher temperatures, there are more activated polar optical phonons, which require a higher density of free carriers to effectively screen the Coulomb field created by these phonons. This is clearly evident from our SI plots (see Figures S10(a,b,c)-S15(a,b,c)).[40] In all cases, ADP scattering remains weak, which is common in hybrid perovskites owing to their small deformation potentials.[57; 58]
In Figure 5(a-f), we show the average hole and electron mobilities as a function of temperature, from 100 K to 500 K, for three different defect concentrations: low (10\({}^{10}\) cm\({}^{-3}\)), moderate (10\({}^{15}\) cm\({}^{-3}\)) and high (10\({}^{20}\) cm\({}^{-3}\)). Owing to the weak influence of IMP scattering at low to moderate defect concentrations, the carrier mobility remains similar for the first two concentrations, but at the highest concentration IMP scattering starts to dominate. As such, controlling the defect concentration can impact device efficiencies, not only because IMP becomes the dominant scattering mechanism at high defect concentrations, but also because the prevalence of free carriers starts to screen the POP scattering. As expected, the overall mobility has a strong temperature dependence for most of the compounds and remains moderate to high for electrons, whereas the hole mobility values remain consistently low.
Figure 4: (a,b,c) Hole mobility (\(\mu_{h}\)) and (d,e,f) electron mobility (\(\mu_{e}\)) for the Cs\({}_{2}\)TiI\({}_{6}\), Cs\({}_{2}\)PdBr\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\), Cs\({}_{2}\)PdI\({}_{6}\), and Cs\({}_{2}\)TeI\({}_{6}\) compounds as a function of defect concentration at three different temperatures, T = 100 K, 300 K and 500 K, respectively.
The above analysis reveals that in the A\({}_{2}\)BX\({}_{6}\) class, polar optical phonons play a dominant role at the realistic defect concentrations relevant for photovoltaic applications. We therefore next study the properties of the polaronic states by calculating the Fröhlich interaction within the temperature-dependent Feynman polaron model.[59] In polar semiconductors, for example halide perovskites and their derivatives, the interaction between charge carriers and the macroscopic electric field generated by longitudinal optical (LO) phonons is well known to be the dominant scattering mechanism near room temperature, which is expected to be the case for our studied materials as well.[56; 57; 60; 61; 32] To investigate this, we studied the influence of changing the B site in A\({}_{2}\)BX\({}_{6}\) on the electron-phonon coupling (EPC). Within the Fröhlich interaction model, the interaction strength (\(\alpha\)) is defined as
\[\alpha=\frac{1}{4\pi\epsilon_{0}}\frac{1}{2}\left(\frac{1}{\epsilon_{\infty}}- \frac{1}{\epsilon_{static}}\right)\frac{e^{2}}{\hbar\omega_{LO}}\left(\frac{2 m^{*}\omega_{LO}}{\hbar}\right)^{1/2} \tag{2}\]
where \(\epsilon_{0}\) is the dielectric constant of vacuum, \(\epsilon_{\infty}\) and \(\epsilon_{static}\) are the high-frequency and static dielectric constants of the semiconductor, \(\hbar\) is the reduced Planck constant, \(\omega_{LO}\) is the characteristic angular LO frequency in which all the infrared-active optical phonon branches are taken into account via a spectral average,[60] and \(m^{*}\) is the carrier effective mass. Table 2 displays all the associated values related to the Fröhlich interaction for electrons for the six compounds. The corresponding list of parameters for the holes of these six compounds is reported in Table S5 of the SI.[40] To validate our simulation, we compare the calculated values of \(\alpha\) for Cs\({}_{2}\)SnI\({}_{6}\) with the recent literature and observe fair agreement.[32] For the A\({}_{2}\)BX\({}_{6}\) class, the calculated \(\alpha\) values lie in the moderate range (1\(<\alpha<\) 6). The estimated values of the polaron radius (l\({}_{p}\)) indicate the formation of large polarons, similar to what is observed for hybrid halide perovskites and double perovskites.[60; 61; 62] The \(\alpha_{e}\) value is highest for Cs\({}_{2}\)TiI\({}_{6}\), mainly due to its higher electron effective mass compared to the other compounds. Additionally, drawing an inference from the electronic structure of these materials, we see that the CBM in Cs\({}_{2}\)TiI\({}_{6}\) has contributions from Ti-d and I-p orbitals whereas for Cs\({}_{2}\)SnI\({}_{6}\) it is Sn-s and I-p orbitals. The Ti-d orbitals are more localized, giving rise to the flat band and hence a higher effec
tive mass. For the other compounds, we see more dispersive bands at the CBM (see Figure 2), and the corresponding \(\alpha\) values are in a range close to that of Cs\({}_{2}\)SnI\({}_{6}\). Interestingly, the hole mobility turns out to be significantly lower than the electron mobility. To conclude, large polarons are the main carriers responsible for the moderate mobility in our studied compounds. These crucial observations clearly indicate the importance of studying charge-carrier behavior in the A\({}_{2}\)BX\({}_{6}\) class of compounds and its implications for future applications.
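A minimal sketch of how Eq. (2) is evaluated is shown below; the dielectric constants, LO frequency and effective mass are placeholder inputs of the right order of magnitude for a halide perovskite, not the entries of Table 2.

```python
# Minimal sketch of Eq. (2): Froehlich coupling constant alpha.
import numpy as np

E_CHARGE = 1.602176634e-19      # C
EPS0 = 8.8541878128e-12         # F/m
HBAR = 1.054571817e-34          # J*s
M_E = 9.1093837015e-31          # kg

def frohlich_alpha(eps_inf, eps_static, nu_lo_thz, m_star_rel):
    """Dimensionless Froehlich coupling constant.

    eps_inf, eps_static : high-frequency and static relative permittivities
    nu_lo_thz           : characteristic LO frequency in THz (converted to rad/s)
    m_star_rel          : effective mass in units of the free-electron mass
    """
    omega_lo = 2.0 * np.pi * nu_lo_thz * 1e12
    m_star = m_star_rel * M_E
    inv_eps = 1.0 / eps_inf - 1.0 / eps_static
    return (1.0 / (4.0 * np.pi * EPS0)) * 0.5 * inv_eps \
        * (E_CHARGE**2 / (HBAR * omega_lo)) \
        * np.sqrt(2.0 * m_star * omega_lo / HBAR)

# Placeholder inputs; yields alpha of about 2, i.e. in the moderate-coupling range.
print(frohlich_alpha(eps_inf=5.0, eps_static=10.0, nu_lo_thz=4.0, m_star_rel=0.5))
```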
## VI VI. Conclusion
In summary, we performed an accurate and systematic investigation of Pb-free vacancy-ordered double perovskites (A\({}_{2}\)BX\({}_{6}\)) from the optoelectronic application perspective. We carried out a thorough stability analysis considering different structural prototypes and carefully simulating the convex hull energy diagram including all possible secondary phases. We found 14 compounds to be stable and 1 to be metastable. For the stable compounds, we further simulated the compositional phase diagrams to assist experimentalists in identifying the most probable secondary phases which might emerge during synthesis. Next, a careful electronic structure analysis revealed six compounds, namely Cs\({}_{2}\)TeI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\), Cs\({}_{2}\)PdI\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)TiI\({}_{6}\), and Cs\({}_{2}\)PdBr\({}_{6}\), to possess optically allowed band gaps in the ideal visible range (0.8-1.85 eV). The detailed investigation of the optical properties confirms that a few of these compounds possess favorable optoelectronic properties facilitating better efficiency than some of the existing absorbers. A close inspection of the transport properties reveals that the Cs\({}_{2}\)PdBr\({}_{6}\), Cs\({}_{2}\)PtI\({}_{6}\), Cs\({}_{2}\)SnI\({}_{6}\), Cs\({}_{2}\)PdI\({}_{6}\), and Cs\({}_{2}\)TeI\({}_{6}\) compounds acquire moderate to high electron mobilities (\(\sim\)13 - 63 cm\({}^{2}\)V\({}^{-1}\) s\({}^{-1}\)). In all cases, polar optical phonon (POP) scattering remains the dominant scattering mechanism at low to moderate defect concentrations. At high defect concentrations, ionized impurity scattering starts to dominate, while the accumulation of free carriers screens the POP scattering. This study is expected to provide the necessary basis and guidance for future experimental synthesis of some of these compounds to achieve the desired features for promising device applications.
## VII VII. Computational details
First-principles calculations are carried out using density functional theory (DFT)[63] with the projector augmented wave (PAW)[64] basis set as implemented in the Vienna Ab-initio Simulation Package (VASP).[65; 66; 67; 68; 69] A plane-wave energy cutoff of 520 eV, a \(\Gamma\)-centered 4\(\times\)4\(\times\)4 k-mesh, and the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional[50] were employed to perform the geometry optimization. The crystal structure was relaxed with a force tolerance criterion of 0.001 eV Å\({}^{-1}\). The spin-orbit coupling (soc) effect is included while simulating the electronic and optical properties. The hybrid (HSE06) functional[51] is used to calculate the band gap and band edges, as it is known to provide a more accurate estimate of these quantities. Optical absorption spectra are simulated within the independent-particle approximation and the absorption onset is then scissor-shifted to the HSE06 band gap values. This method makes it possible to accurately assess the SLME for the materials under consideration. The chemical phase diagrams are drawn using the Chesta software package.[70] Phonon dispersions are calculated using density functional perturbation theory (DFPT) with a \(\Gamma\)-centered 4\(\times\)4\(\times\)4 k-mesh under the supercell method. The second-order force constants are calculated using 2\(\times\)2\(\times\)2 supercells of the primitive cells for the cubic structures and in similar proportion for the other structures. Next, the rotational sum rule is applied using the hiphive package[71] to renormalize the phonon frequencies. Transport calculations are performed using the AMSET code,[57] where we have considered three different scattering mechanisms, namely scattering due to acoustic deformation potentials (ADP), ionized impurities (IMP), and polar optical phonons (POP). Piezoelectric scattering is not included due to the centro-symmetric crystal structure of A\({}_{2}\)BX\({}_{6}\), whereas screening due to free carriers at high defect concentrations is included. This program uses the momentum relaxation time approximation (MRTA) to the Boltzmann transport equation (BTE) to determine scattering rates and carrier mobilities. Polaron-related parameters were simulated by implementing a temperature-dependent Feynman polaron model.[60; 72] Born effective charges and the static and high-frequency dielectric tensors were calculated using density functional perturbation theory (DFPT) as implemented in VASP. The effec
tive mass has been calculated using the following equation:
\[m^{*}=3\left[\frac{1}{m_{xx}^{*}}+\frac{1}{m_{yy}^{*}}+\frac{1}{m_{zz}^{*}}\right]^{-1} \tag{3}\]
where \(m_{ii}^{*}\) is the effective mass along the \(i\)-th direction (\(i\) = x, y, z).[73; 74; 75; 76]
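For clarity, the harmonic average of Eq. (3) can be evaluated as in the short sketch below; the directional masses are placeholders, given in units of the free-electron mass.

```python
# Sketch of Eq. (3): harmonic average of the directional effective masses.
def harmonic_effective_mass(m_xx, m_yy, m_zz):
    return 3.0 / (1.0 / m_xx + 1.0 / m_yy + 1.0 / m_zz)

print(harmonic_effective_mass(0.4, 0.4, 1.2))  # ~0.51 for this placeholder anisotropic case
```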
## VIII Acknowledgments
SG acknowledges financial support from IIT Bombay through a research fellowship. AA and MA acknowledge the National Center for Photovoltaic Research and Education (NCPRE), funded by the Ministry of New and Renewable Energy (MNRE), Government of India, and IIT Bombay for funding to support this research.
|
2309.09260 | Visualizing the Zhang-Rice singlet, molecular orbitals and pair
formation in cuprate | The parent compound of cuprates is a charge-transfer-type Mott insulator with
strong hybridization between the Cu $3d_{\mathrm x^2-y^2}$ and O $2p$ orbitals.
A key question concerning the pairing mechanism is the behavior of doped holes
in the antiferromagnetic (AF) Mott insulator background, which is a
prototypical quantum many-body problem. It was proposed that doped hole on the
O site tends to form a singlet, known as Zhang-Rice singlet (ZRS), with the
unpaired Cu spin. But experimentally little is known about the properties of a
single hole and the interplay between them that leads to superconductivity.
Here we use scanning tunneling microscopy to visualize the electronic states in
hole-doped $\mathrm{Ca_2CuO_2Cl_2}$, aiming to establish the atomic-scale local
basis for pair formation. A single doped hole is shown to have an in-gap state
and a clover-shaped spatial distribution that can be attributed to a localized
ZRS. When the dopants are close enough, they develop delocalized molecular
orbitals with characteristic stripe- and ladder-shaped patterns, accompanied by
the opening of a small gap around the Fermi level ($E_{\mathrm F}$). With
increasing doping, the molecular orbitals proliferate in space and gradually
form densely packed plaquettes, but the stripe and ladder patterns remain
nearly the same. The low-energy electronic states of the molecular orbitals are
intimately related to the local pairing properties, thus play a vitally
important role in the emergence of superconductivity. We propose that the
Cooper pair is formed by two holes occupying the stripe-like molecular orbital,
while the attractive interaction is mediated by the AF spin background. | Shusen Ye, Jianfa Zhao, Zhiheng Yao, Sixuan Chen, Zehao Dong, Xintong Li, Luchuan Shi, Qingqing Liu, Changqing Jin, Yayu Wang | 2023-09-17T12:52:02Z | http://arxiv.org/abs/2309.09260v1 | # Visualizing the Zhang-Rice singlet, molecular orbitals and pair formation in cuprate
###### Abstract
The parent compound of cuprates is a charge-transfer-type Mott insulator with strong hybridization between the Cu \(3d_{x^{2}-y^{2}}\) and O \(2p\) orbitals. A key question concerning the pairing mechanism is the behavior of doped holes in the antiferromagnetic (AF) Mott insulator background, which is a prototypical quantum many-body problem. It was proposed that a doped hole on the O site tends to form a singlet, known as the Zhang-Rice singlet (ZRS), with the unpaired Cu spin. But experimentally little is known about the properties of a single hole and the interplay between them that leads to superconductivity. Here we use scanning tunneling microscopy to visualize the electronic states in hole-doped \(\mathrm{Ca_{2}CuO_{2}Cl_{2}}\), aiming to establish the atomic-scale local basis for pair formation. A single doped hole is shown to have an in-gap state and a clover-shaped spatial distribution that can be attributed to a localized ZRS. When the dopants are close enough, they develop delocalized molecular orbitals with characteristic stripe- and ladder-shaped patterns, accompanied by the opening of a small gap around the Fermi level (\(E_{\mathrm{F}}\)). With increasing doping, the molecular orbitals proliferate in space and gradually form densely packed plaquettes, but the stripe and ladder patterns remain nearly the same. The low-energy electronic states of the molecular orbitals are intimately related to the local pairing properties, and thus play a vitally important role in the emergence of superconductivity. We propose that the Cooper pair is formed by two holes occupying the stripe-like molecular orbital, while the attractive interaction is mediated by the AF spin background.
|
2302.14625 | mmSense: Detecting Concealed Weapons with a Miniature Radar Sensor | For widespread adoption, public security and surveillance systems must be
accurate, portable, compact, and real-time, without impeding the privacy of the
individuals being observed. Current systems broadly fall into two categories --
image-based which are accurate, but lack privacy, and RF signal-based, which
preserve privacy but lack portability, compactness and accuracy. Our paper
proposes mmSense, an end-to-end portable miniaturised real-time system that can
accurately detect the presence of concealed metallic objects on persons in a
discrete, privacy-preserving modality. mmSense features millimeter wave radar
technology, provided by Google's Soli sensor for its data acquisition, and
TransDope, our real-time neural network, capable of processing a single radar
data frame in 19 ms. mmSense achieves high recognition rates on a diverse set
of challenging scenes while running on standard laptop hardware, demonstrating
a significant advancement towards creating portable, cost-effective real-time
radar based surveillance systems. | Kevin Mitchell, Khaled Kassem, Chaitanya Kaul, Valentin Kapitany, Philip Binner, Andrew Ramsay, Roderick Murray-Smith, Daniele Faccio | 2023-02-28T15:06:03Z | http://arxiv.org/abs/2302.14625v1 | # MMSENSE: DETECTING CONCEALED WEAPONS WITH A MINIATURE RADAR SENSOR
###### Abstract
For widespread adoption, public security and surveillance systems must be accurate, portable, compact, and real-time, without impeding the privacy of the individuals being observed. Current systems broadly fall into two categories - image-based which are accurate, but lack privacy, and RF signal-based, which preserve privacy but lack portability, compactness and accuracy. Our paper proposes _mmSense_, an end-to-end portable miniaturised real-time system that can accurately detect the presence of concealed metallic objects on persons in a discrete, privacy-preserving modality. mmSense features millimeter wave radar technology, provided by Google's Soli sensor for its data acquisition, and TransDope, our real-time neural network, capable of processing a single radar data frame in 19 ms. mmSense achieves high recognition rates on a diverse set of challenging scenes while running on standard laptop hardware, demonstrating a significant advancement towards creating portable, cost-effective real-time radar based surveillance systems.
Kevin Mitchell \({}^{\dagger\S}\), Khaled Kassem \({}^{\dagger\S}\), Chaitanya Kaul \({}^{\ddagger}\), Valentin Kapitany \({}^{\dagger}\), Philip Binner \({}^{\dagger}\), Andrew Ramsay \({}^{\ddagger}\), Roderick Murray-Smith \({}^{\ddagger}\), Daniele Faccio \({}^{\dagger}\)
\({}^{\dagger}\) Equal contribution
Real-time signal processing, mmWave radars, Vision Transformer
## 1 Introduction
Radar solutions built on Frequency Modulated Continuous Wave (FMCW) technology have shown promising success through their ability to serve as a capable and versatile basis for computational sensing and short-range wireless communication systems [1]. Such radars, operating at millimeter wave (mmWave) frequencies, can be used for robust gesture generation and recognition [2, 3], and can even measure distances with _mm_ accuracy [4]. Furthermore, mmWave radars have the potential to serve as a basis for concealed metallic object detection (e.g. knives, guns, etc.), which presents a novel and, most importantly, privacy-preserving manner of real-time surveillance. The principles of mmWave metal detection rely on the underlying physics of radio frequency (RF) waves that fall in the 30-300 GHz range between microwaves and terahertz waves. This frequency band corresponds to wavelengths of 1-10 mm. Within various forms of spectral imaging (e.g. IR, UV), one chooses the waveband that interacts with the object/scene to be imaged, whilst ignoring any obfuscating features. The same is true when detecting metals concealed on humans, where the mmWave waveband is appropriate because the waves from a mmWave RF source pass through thin layers of clothing and are strongly reflected by the body and by any hidden objects between the two [5]. We believe there is a niche in the security field for a portable technology that can screen for illegal metallic weapons, whilst still allowing people to maintain their freedom and privacy in public without security bottlenecks and conventional image-capturing cameras.
This work is not the first to propose mmWave sensing for metal detection. Fusion of mmWave radar and vision data has already helped create accurat
Figure 1: The RF waves from a Google Soli reflect from a person standing 1.5 m away from the radar with and without a knife. To mitigate the variations in specular reflection, the subject rotated through 90\({}^{\circ}\) and an average over 60 s was used in each scene. The reflected signal received via receiver 2 (of 3) of the Soli is converted into a range profile by computing an FFT over it, and plotted here. The difference between the two range profiles shows the potential of using mmWave radars like the Soli for detecting metallic objects.
tection systems for autonomous driving applications [6]. The comparative performance of Convolutional and Recurrent Neural Networks (CNNs and RNNs) for metal detection using magnetic impedance sensors has been extensively evaluated in [7]. Their system is compact; however, it needs to scan the complete scene, and is restricted to large metal sheets that are visible in the Line-of-Sight (LOS) of the sensor. [8] use mmWave radar sensors to alert robots to the presence of humans. Of most relevance to our study are [9, 10]: the former study [9] provides a comprehensive guide on the metal detection capabilities of a \(77-81\) GHz radar chip, but does so by comparing intensities of the reflected signal with the intensities of their own model of a body without the presence of the metallic object. Their work does not look at concealed metallic objects but at ones that are already visible. [10] created an AI-powered metal detection system capable of working in real time. That system, however, processes its data differently, is prohibitively expensive, and is considerably bulkier than ours. One fundamental advantage our system has over all existing mmWave systems proposed for similar applications is the use of a radar sensor that has the widest Field of View (FOV), the smallest form factor and the least power consumption amongst its competitors. The Soli is capable of illuminating its surroundings with a \(150^{\circ}\) FOV radar pulse. The lower power consumption of the Soli is due to the fact that it transmits 16 chirps at a pulse repetition frequency of 2000 Hz in a single burst (each burst is transmitted at 25 Hz), and then stops transmitting until the next burst to save power [3]. This saves considerable power compared to the mmWave radars used in existing works, which continuously transmit chirps. Compared to current radar-based surveillance systems, our technology does not need to sweep a scene to work, but provides inference on-the-fly by illuminating the scene with RF waves and processing the received signal.
Our work is intended to disrupt the trend of specialised surveillance and imaging systems which are becoming increasingly expensive to install and operate, by using an inexpensive, compact device capable of being mounted in various locations throughout open spaces, which can function in real time. To this end, we present the use of a commercial mmWave radar transceiver to detect the presence of concealed objects on people in real time, in a privacy-preserving manner. We focus on high-frequency (60 GHz), short-range (up to 3 m) sensing using Google's Soli sensor, primarily due to its miniature form factor, low power consumption, portability, and novel sensing characteristics. The Soli is designed for Human-Computer Interaction applications, and has shown success in 'macro' radar-based computational imaging tasks (detecting motion, gestures, etc.). Its application to detecting objects concealed within such movement is unexplored and challenging. The Soli captures a superposition of reflected energy from different parts of a scene using high-frequency RF waves: this results in poor lateral spatial resolution, while detecting even the smallest amount of motion. This makes metal detection challenging when there is plenty of movement in the scene, i.e., in all practical real-world scenarios. To mitigate this challenge, we propose a novel, real-time Vision Transformer model that can exploit semantic sequential relations in the preprocessed radar data and recognize the presence of a concealed metallic object on a person in a scene, while ignoring objects such as wallets, keys, belts and mobile phones.
The following are our main contributions: (1) We present mmSense - a novel, end-to-end framework capable of detecting concealed metallic objects on people in a scene using only raw radar data, without any human intervention. (2) mmSense runs in real time and can use off-the-shelf, commercially available hardware, making it easy to replicate and deploy. (3) We open source _mmSense_, including all our datasets and models, in the hope of facilitating further research in this novel field of Artificial Intelligence powered concealed metal detection with mmWave RF signals.
## 2 mmSense
Our mmSense pipeline comprises three components - a Google Soli radar for data acquisition, an Intel RealSense D435 Time-of-Flight (TOF) camera for visualizing results, and an API capable of acquiring and processing radar data streams in real-time. A single burst of the Soli's transmitted signal is received across 3 antennas. For each RF illumination of the scene by the radar, it receives back a signal \(I\in\mathbb{R}^{P\times C}\), where \(I\) is the imaged scene as perceived by the radar and \(P\) is the number of chirps received by the radar across its \(C(=3)\) antennas. We operate the Soli at a frequency of 60GHz with the maximum permitted bandwidth \(BW\) of 1GHz. This gives us a range resolution \(R_{r}=\frac{c}{2BW}=15\)cm, where \(c\) refers to the speed of light; this is the minimum LOS separation at which the radar can resolve two distinct points. The Soli transmits and receives a series of chirps that are grouped into bursts - we set the number of chirps per burst to 16, and collect bursts at 25Hz, giving us 25 bursts of Soli data per second. In this configuration, we can detect up to a maximum range of 9.6m.
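For concreteness, the short sketch below reproduces this parameter arithmetic in Python; the 64 range bins per chirp are an assumption inferred from the stated 9.6m maximum range (64 × 15cm) rather than a value quoted above.

```python
# Radar parameter arithmetic for the configuration described above.
# N_RANGE_BINS = 64 is an assumption inferred from the 9.6 m maximum range.
C = 3e8              # speed of light (m/s)
BW = 1e9             # chirp bandwidth (Hz)
N_RANGE_BINS = 64    # assumed number of range bins per chirp

range_resolution = C / (2 * BW)               # 0.15 m = 15 cm
max_range = N_RANGE_BINS * range_resolution   # 9.6 m

BURSTS_PER_SECOND = 25
CHIRPS_PER_BURST = 16
chirps_per_second = BURSTS_PER_SECOND * CHIRPS_PER_BURST  # 400 chirps/s

print(f"Range resolution: {range_resolution * 100:.0f} cm")
print(f"Maximum range:    {max_range:.1f} m")
print(f"Chirps per second: {chirps_per_second}")
```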
The Soli hardware has a corresponding API provided by Google and implemented in C++. We built a C++ application around this API which allows us to interface with the Soli radar in real-time (e.g. selecting a range profile) and receive the bursts generated by the radar. Our application supports streaming the Soli bursts directly to a Python script using the ZeroMQ1 messaging framework. The bursts are relayed immediately upon being received by the device with no additional buffering, and are ultimately parsed by a Python module which extracts both the parameters associated with the burst (e.g. a hardware-supplied timestamp) and the raw radar data.
Footnote 1: [https://github.com/zeromq/cppzmq](https://github.com/zeromq/cppzmq)
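As an illustration of the Python side of this pipeline, a receiver along the following lines could consume the burst stream; the socket type, endpoint, message framing, and sample dtype are assumptions made for the sketch, not the exact interface exposed by our C++ application.

```python
# Hypothetical Python-side consumer of the streamed Soli bursts. The
# publish/subscribe pattern, endpoint, and per-burst framing (one timestamp
# frame followed by raw samples) are illustrative assumptions.
import numpy as np
import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5555")          # assumed endpoint
socket.setsockopt_string(zmq.SUBSCRIBE, "")     # subscribe to every burst

def receive_burst(num_chirps=16, num_channels=3, samples_per_chirp=64):
    """Block until one burst arrives and return (timestamp, raw chirp array)."""
    timestamp_raw, payload = socket.recv_multipart()
    timestamp = float(timestamp_raw.decode())
    samples = np.frombuffer(payload, dtype=np.float32)   # dtype assumed
    return timestamp, samples.reshape(num_chirps, num_channels, samples_per_chirp)
```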
After parsing the raw chirps and their timestamps, we create a Range Doppler [11] transformation of the signal. This is done via a series of Fast Fourier Transforms (FFTs) applied to the data. First, we calculate the complex-valued range profile (RP) of the radar signal via an FFT of the radar chirps received by the 3 antennas. As the Soli's signal is a superposition of reflections from a scene, the RP data can be interpreted as how well the separate contributions of the RF scatterers in the scene are resolved, giving us an estimate of the geometry of the scene. A Complex Range Doppler (CRD) plot is then calculated as an FFT over each channel of the radar's complex-valued range profile. Here, the range represents the distance of an object in the scene from the Soli, and the Doppler corresponds to the radial velocity of the object towards the Soli. We use the magnitude of the CRD for our experiments, which is obtained as \(\texttt{ARD}(r,d)=|\texttt{CRD}(r,d)|\), where \(\texttt{ARD}\) refers to the Absolute Range Doppler, \(r\) and \(d\) are the range and Doppler bins, and \(|\cdot|\) is the absolute value.
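A minimal NumPy sketch of this processing chain is given below; the exact FFT lengths, windowing, and channel handling in our implementation may differ.

```python
# Sketch of the Range-Doppler chain described above for one burst of raw
# chirps shaped (chirps, channels, samples). Windowing is omitted and the
# channels are simply averaged, which may differ from our actual pipeline.
import numpy as np

def absolute_range_doppler(burst: np.ndarray) -> np.ndarray:
    # 1) Range FFT over fast time (the sample axis) -> complex range profile (RP).
    range_profile = np.fft.fft(burst, axis=-1)
    # 2) Doppler FFT over slow time (the chirp axis) -> Complex Range Doppler (CRD).
    crd = np.fft.fftshift(np.fft.fft(range_profile, axis=0), axes=0)
    # 3) Absolute Range Doppler: ARD(r, d) = |CRD(r, d)|, averaged across channels.
    return np.abs(crd).mean(axis=1)   # shape: (Doppler bins, range bins)
```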
The ARD plots are processed using our novel Deep Neural Network, TransDope (Doppler Transformer, figure 3), capable of accurate real-time classification of the input data stream. The input is a sequence of 8 ARD frames. TransDope contains two Time Convolution layers pretrained on a large collected dataset of ARD plots, an embedding layer to create a transformer-compatible embedding of the convolution features, and transformer layers to learn long-range, time-dependent semantic features in the data. We first collect a large dataset of ARDs from various scenes with and without concealed metallic objects on actors. We then train a model with two Time Convolution and Max Pooling layers, the output of which is flattened and fed to a classification layer.
Following training, we discard the output layer and use the two time convolution layers with the pre-trained weights as an initialization. Unlike standard convolutions that apply a weight kernel across all 8 ARDs concurrently, we apply convolutions sequentially to the 8 ARD frames to extract time-dependent features from them, and hence call them time convolutions. We then reshape the output of the last Max Pooling layer to create an embedding of the features. We also add positional encoding to each of the 8 ARD frames to preserve their sequence. Following this, we pass the embedding through 3 TransDope layers that extract semantic dependencies within the ARD's feature representation. These layers are the same as the ViT [12] encoder layers, with the exception of having a convolutional layer following the multi-head attention layer, instead of the dense layers, to reduce the parameter count. We use global average pooling to reduce the transformer layer's features to a vector, which is then passed into the output layer. Our Time Convolution layers have 32 filters and a kernel size of \(3\times 3\). Our transformer layer has an embedding size of 128 and uses 2 attention heads. TransDope contains 0.8 million parameters, and can process a single ARD frame in 19 milliseconds on an Intel i9 8-core CPU. We train our model in TensorFlow 2 for 50 epochs, with a batch size of 8, and a learning rate of \(1e-2\) which inversely decays after every 10 epochs. During inference, we feed 1 batch of 8 ARD frames through the model to get a classification.
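A compact TensorFlow/Keras sketch of this architecture is shown below. The ARD frame size, pooling configuration, and the sinusoidal form of the positional encoding are assumptions; the filter count (32), kernel size (3×3), embedding width (128), number of attention heads (2), number of TransDope layers (3), and optimiser settings follow the description above.

```python
# Illustrative TransDope sketch: time convolutions applied per ARD frame,
# a learned embedding plus positional encoding, and 3 ViT-style encoder
# layers whose feed-forward part is replaced by a convolution. Frame size
# (64 x 16) and the sinusoidal positional encoding are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, H, W = 8, 64, 16                 # 8 ARD frames per input sequence
EMBED_DIM, NUM_HEADS, NUM_LAYERS = 128, 2, 3

def transdope_block(x):
    attn = layers.MultiHeadAttention(num_heads=NUM_HEADS, key_dim=EMBED_DIM)(x, x)
    x = layers.LayerNormalization()(x + attn)
    conv = layers.Conv1D(EMBED_DIM, 3, padding="same", activation="relu")(x)
    return layers.LayerNormalization()(x + conv)

# Fixed sinusoidal positional encoding over the 8 frame positions (assumed form).
pos = np.arange(SEQ_LEN)[:, None]
i = np.arange(EMBED_DIM)[None, :]
angle = pos / np.power(10000.0, (2 * (i // 2)) / EMBED_DIM)
pos_encoding = np.where(i % 2 == 0, np.sin(angle), np.cos(angle)).astype("float32")

inputs = layers.Input(shape=(SEQ_LEN, H, W, 1))
# "Time convolutions": the same Conv2D/MaxPool stack is applied to each frame in turn.
x = layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu"))(inputs)
x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
x = layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu"))(x)
x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
x = layers.TimeDistributed(layers.Flatten())(x)
x = layers.Dense(EMBED_DIM)(x) + pos_encoding        # (batch, 8, 128)
for _ in range(NUM_LAYERS):
    x = transdope_block(x)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # metal present / absent

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-2),
              loss="binary_crossentropy", metrics=["accuracy"])
```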
Figure 3: TransDope (shown here) processes the radar data stream while preserving time-dependent information throughout.
Figure 2: The mmSense set up is shown in (a). The individual components of the set up, and exemplar visualizations of the imaged scenes, are shown for the Soli and the Intel RealSense D435 in (b) and (c). The knife, gun, and metal sheet used for the experiments are shown in (d).
## 3 Experiments
To test the accuracy and flexibility of our technique, we collected 6 different scenes with varying characteristics, as depicted in table 1. Data was acquired in 4 instances for each scene: 2 with a metallic object hidden on a person, and 2 without. Each acquisition contains approximately 1500 frames of data. Before training our machine learning model, we collected roughly 15,000 frames of Soli data, equally split between the two classes and spanning various scenes, to pre-train the TransDope time convolutions. For each scene, we then trained TransDope to predict a binary class for each ARD.
We carefully curated different scenes to portray real-world situations where our system can be deployed. Scene A was an initial proof of concept where we used the metal sheet as the hidden object to verify the capabilities of our system. We were able to predict the presence of the sheet with 95.1% accuracy. Scene B replicates a crowded, expansive scenario, such as an airport terminal. Here we crowded the scene with 5 people walking within a 2m radius of the setup. Each person in the scene was carrying everyday objects such as phones, keys, wallets, and belts; only one of these individuals had a knife on their person. Even in such a challenging setting, our system detected the presence of the knife with up to 86.9% accuracy. In Scenes C and D, we observed the effects of changing the hidden object from a knife to a gun. This is important as different objects have different characteristic specular reflections. As seen from our results, the performance of our system held when switching the metallic object from the knife to the gun. Scenes A to D were all open scenes, i.e. the data was not acquired with constricting walls. This results in no multipath RF signals being received by the Soli receiving antennas. In Scenes E and F, we tested the effects of keeping our setup in a closed setting and noticed that performance decreased. Our results are summarized in table 1 and visualized in figure 4.
**Ablations.** Table 5 shows the effect of varying the amount of sequential information provided to TransDope, as well as ablating the various blocks of TransDope. The experiments show that each individual component in our model contributes to performance boosts in terms of metal detection accuracy. We chose 8 ARD frames per sequence as the input to TransDope as it provides the best trade-off between accuracy and execution time. Having multiple sequences of ARDs does further boost performance, but it also doubles (for 2 ARD sequences of 8 ARD frames - \(2\times 8\)) or quadruples (for \(4\times 8\)) the execution time for only minor gains in performance.
radar and TOF data to add spatial awareness to the 'sensing' ability of _mmSense_ may help to alleviate these drawbacks, and would be a fitting next step to extend this technology.
|
2309.13238 | How to Differentiate between Near Field and Far Field: Revisiting the
Rayleigh Distance | Future wireless systems are likely to adopt extremely large aperture arrays
to achieve higher throughput, wider coverage, and higher spatial resolution.
Conventional wireless systems predominantly operate in the far field (FF) of
the radiation source. However, as the array size increases and the carrier
wavelength decreases, the near field (NF) becomes nonnegligible. Since the NF
and FF differ in many aspects, it is critical to identify their corresponding
regions. In this article, we first provide a comprehensive overview of the
existing NF-FF boundaries, then introduce a novel NF-FF demarcation method
based on effective degrees of freedom (EDoF) of the channel. Since EDoF is
intimately related to channel capacity, the EDoF-based border is able to
characterize key channel performance more accurately than the classic Rayleigh
distance and other representative benchmarks. Furthermore, we analyze the main
features of the EDoF-based NF-FF boundary, provide insights into system design,
and outline the associated challenges and research opportunities. | Shu Sun, Renwang Li, Chong Han, Xingchen Liu, Liuxun Xue, Meixia Tao | 2023-09-23T02:43:28Z | http://arxiv.org/abs/2309.13238v2 | # How to Differentiate between Near Field and Far Field: Revisiting the Rayleigh Distance
###### Abstract
Future wireless communication systems are likely to adopt extremely large aperture arrays and millimeter-wave/sub-THz frequency bands to achieve higher throughput, lower latency, and higher energy efficiency. Conventional wireless systems predominantly operate in the far field (FF) of the radiation source of signals. As the array size increases and the carrier wavelength shrinks, however, the near field (NF) becomes non-negligible. Since the NF and FF differ in many aspects, it is essential to distinguish their corresponding regions. In this article, we first provide a comprehensive overview of the existing NF-FF boundaries, then introduce a novel NF-FF demarcation method based on effective degrees of freedom (EDoF) of the channel. Since EDoF is intimately related to spectral efficiency, the EDoF-based border is able to characterize key channel performance more accurately, as compared with the classic Rayleigh distance. Furthermore, we analyze the main features of the EDoF-based NF-FF boundary and provide insights into wireless system design.
## I Introduction
The deployment of the fifth-generation (5G) networks on a global scale has prompted both academia and industry to turn their attention to the development of the sixth-generation (6G) technologies. The major application scenarios of 5G include enhanced mobile broadband, massive machine type communications, and ultra-reliable and low latency communications, which are expected to support 20 Gbps peak data rate, \(10^{6}\) devices/km\({}^{2}\), and 1 ms end-to-end latency, among others. However, 6G is poised to meet the demands of new and emerging applications, such as immersive cloud extended reality, holographic communications, sensory interconnection, and intelligent interaction. The burgeoning proliferation of wireless devices and their increasingly diverse applications have placed substantially higher demands on 6G in comparison to 5G. These demands encompass key performance metrics such as rate, reliability, latency, mobility, and energy consumption, with expectations typically being 10 to 100 times higher than those of 5G [1]. To achieve this ambitious vision and these requirements for 6G, numerous key technologies have been proposed and are actively under development.
Notably, 6G is anticipated to leverage new spectrum resources, including the millimeter-wave (mmWave) and Terahertz (THz) bands. The shorter wavelengths in these bands, compared to the conventional microwave spectrum, allow for the integration of more antennas within a confined space. Consequently, the number of antennas continues to increase, evolving from the standard 64 antennas [2] to the scale of hundreds or even thousands [3]. This technological advancement has given rise to the concept of extremely large aperture arrays (ELAAs), which offer the potential for high beamforming gain, exceptional spatial resolution, and remarkable spectral efficiency [4].
The deployment of ELAAs, in conjunction with the shorter wavelengths of the mmWave/THz bands, accentuates the near-field (NF) effect. Generally, the electromagnetic (EM) field can be categorized into three regions: the reactive NF, the radiative NF, and the far field (FF). Since EM waves in the reactive NF are usually localized around the source and do not propagate, the radiative NF is more relevant to wireless communications; hence this article focuses on the radiative NF and refers to it as the NF for simplicity. In the NF zone, which occurs when the communication distance is relatively small, the EM wavefronts are spherical. Here, factors such as the non-linear signal phase variations and differences in the traveling distance among different antennas in an ELAA need to be taken into account. The spherical wavefront model (SWM) considers the factors above, and is thus highly accurate, but involves high computational complexity. Conversely, when the communication distance extends to the FF regime, the SWM can be approximated by the planar wavefront model (PWM), since the angles at which the EM wave is radiated from or received by each antenna in an array can be regarded as nearly identical. The PWM is a more simplified model, relying primarily on angle information and the number of antennas, making it highly convenient for channel modeling, performance analysis, and system design. As a result, the
Fig. 1: Illustration of wireless communications in the near field and far field.
PWM has been widely studied and implemented in 5G and earlier networks.
Due to the expansive array aperture of an ELAA, the NF region extends significantly, expanding from just a few meters for a conventional antenna array to hundreds of meters based on the classic Rayleigh distance [5] (to be introduced in more detail later on). Consequently, communication devices are more likely to be situated in the NF region, as depicted in Fig. 1, where a user and a vehicle are located in the NF and FF regions, respectively. Importantly, channel modeling in the NF and FF regions differs significantly. Specifically, the direct and reflection/scattering links in the NF should be characterized by the SWM, while the links in the FF can be reasonably approximated by the PWM. The reflectors/scatterers may be located in the NF and/or FF, which makes the propagation environment intricate. The Rayleigh distance, however, may not be the most appropriate NF-FF boundary in all wireless communication scenarios [6], and other criteria for demarcating the NF and FF are worth exploring.
## II Impact of NF-FF boundary on important aspects of wireless communications
In this section, we discuss the impact of the NF-FF boundary on various aspects of wireless communications systems, including antenna array characterization, propagation channel, and sensing as examples, which underscores the importance of investigating the NF-FF boundary.
### _Impact on Antenna Array Characterization_
Accurate evaluation of the NF and FF regimes of an antenna array is pivotal when conducting characterization and measurements of the array. Most antenna arrays are used in the FF (e.g., communication, radar, etc.). Hence, conducting array calibration in the FF ensures that the array radiates or receives as intended over long distances. NF array calibration is performed to ensure accurate array behavior when interacting with close-by objects or when the target application is within the NF zone. In the NF, interactions between the antenna elements can be much more intricate. These interactions, especially in large and closely spaced arrays, can lead to unpredictable effects like grating lobes or unwanted side lobes. Knowledge of the NF area helps in accurately characterizing these interactions. If characterization measurements are taken in the NF when they are assumed to be in the FF, significant errors can arise.
### _Impact on Propagation Channel_
The impact of the NF-FF boundary can also be observed in channel properties and characteristics such as path loss and spatial non-stationarity. Existing literature has unveiled that channel gain scales differently with distance in the NF and FF, thus it is critical to have accurate knowledge of the NF and FF regions so as to apply the corresponding power scaling laws. Furthermore, spatial non-stationarity, referring to the variability of the wireless channel characteristics over space, is another main difference in the channel. Spatial non-stationarity can occur in both the FF and NF. In the FF, spatial non-stationarity is usually caused by large aperture of the antenna array at the BS, such that users at distinct locations may see different parts of the array and/or that the signals encounter different scatterers. While in the NF, besides the large array aperture, the non-linear phase shifts across array elements caused by the spherical wavefront also contribute to spatial non-stationarity, further complicating the situation. Therefore, it is imperative to have an accurate evaluation of the NF-FF boundary to ease the analysis.
### _Impact on Sensing_
Integrated sensing and communication (ISAC) is a paradigm where sensing and communication functionalities coexist. The NF-FF boundary remains central to shaping ISAC systems. In the NF, the spherical nature of the wavefronts facilitates the capture of fine details in sensing applications, given the wavefront's inherent ability to conform closely to intricate surfaces and interfaces. In the FF, the wavefronts tend to become planar over extended propagation distances. This transformation implies broader-scale sensing and a propensity for long-range, but potentially lower bandwidth, communication. Meanwhile, FF designs typically aim to achieve robust long-range propagation and broader sensing coverage.
## III Existing Research Results on the Boundary between NF and Far Field
As mentioned in the previous sections, reasonable demarcation of the NF and FF is crucial for wireless communication systems with ELAAs. In this section, we review existing research results on NF-FF boundaries, and classify them into two broad categories: multiple-input-single-output (MISO) or equivalently single-input-multiple-output (SIMO) systems, where only one communication link end is equipped with multiple antennas, and multiple-input-multiple-output (MIMO) systems, where both communication link ends possess multiple antennas.
### _MISO/SIMO System_
#### Iii-A1 Rayleigh Distance
The classic border between the NF and FF of an antenna is called the _Rayleigh distance_ or _Fraunhofer distance_, \(2D^{2}/\lambda\), where \(D\) denotes the maximum aperture of the antenna and \(\lambda\) the carrier wavelength. The Rayleigh distance is defined from the perspective of the phase error: If the distance between a user and a base station (BS) is larger than \(2D^{2}/\lambda\), then the maximum phase error across the antenna aperture between using the PWM and the SWM is no more than \(\pi/8\). This definition is easily extendable to an antenna array, where \(D\) represents the array aperture. The Rayleigh distance reveals that the boundary between the NF and far field is proportional to the antenna/array aperture length squared, while inversely proportional to the carrier wavelength. It is commonly used to distinguish the near- and far-field regions since it can well capture the phase variations caused by the SWM and has a succinct mathematical expression. However, the Rayleigh distance has weak association with channel performance metrics, such as channel gain and channel capacity, which are important in practical wireless communications systems.
#### Ii-A2 Critical Distance
The Rayleigh distance primarily pertains to the maximum acceptable phase difference among array elements. However, when employing optimal maximum ratio combining (MRC), the signal phases can be perfectly aligned, eliminating their impact on the received power. In this context, the received power relies solely on the amplitude response of the channel. Consequently, the authors in [7] propound a critical distance \(r_{\text{Critical}}\). This critical distance ensures that the power ratio between the weakest and strongest array elements remains above a specified threshold, effectively leading to an approximation of \(r_{\text{Critical}}\approx 9D\), with \(D\) representing the antenna aperture. Denoting the Rayleigh distance as \(r_{\text{Ray}}\), the communication distance \(r\) can be divided into three intervals: 1) For \(r\geq r_{\text{Ray}}\), both the amplitude and phase differences across the antenna array are small, allowing for a safe approximation using the PWM; 2) For \(r_{\text{Critical}}\leq r<r_{\text{Ray}}\), while amplitude differences are relatively small and thus negligible, significant phase differences persist across the antenna array and cannot be disregarded; 3) For \(r<r_{\text{Critical}}\), both amplitude and phase variations are substantial. Consequently, the channel should be modeled utilizing the SWM.
#### Ii-A3 Uniform-Power Distance
The critical distance in [7] is extended to the uniform-power distance (UPD) in [8] by additionally considering a uniform planar array (UPA) structure and the variation of the projected aperture across the antenna array. The UPD maintains the same definition as in [7, 8], defining a distance beyond which the power ratio between the weakest and strongest array elements is no smaller than a certain threshold \(\Gamma_{\text{th}}\). When the Rx is positioned at the center of and aligned perpendicularly to the UPA, the UPD can be explicitly expressed as \(\sqrt{\frac{\Gamma_{\text{th}}^{2/3}}{1-\Gamma_{\text{th}}^{2/3}}}\frac{L_{d}}{2}\), where \(L_{d}\) denotes the diagonal dimension of the UPA. For other incident angles and positions, the UPD can be obtained numerically.
#### Ii-A4 Effective Rayleigh Distance
The authors in [7] and [8] adopt the optimal MRC to eliminate the influence of the signal phases. However, due to the inherent challenges in achieving perfect channel estimation, the MRC may not completely cancel out the phases. Therefore, from a beamforming perspective, the authors in [9] propose an effective Rayleigh distance \(R_{\text{eff}}\), beyond which the normalized beamforming gain under the far-field assumption is no less than 95%. The effective Rayleigh distance is given by \(R_{\text{eff}}=0.367\cos^{2}(\theta)\frac{2D^{2}}{\lambda}\), where \(\theta\) is the incident angle, \(D\) is the antenna aperture, and \(\lambda\) is the carrier wavelength. The effective Rayleigh distance can be seen as a correction of the Rayleigh distance to ensure the beamforming gain when adopting the PWM.
#### Ii-A5 Bjornson Distance
The authors in [10] consider a UPA structure consisting of \(N\) identical antennas, each with an area denoted as \(A\). From a strict electromagnetic perspective, they introduce a normalized antenna array gain, representing the received power relative to the power obtained in the far field under the PWM. Under this modeling, the Rayleigh distance can be re-expressed as \(d_{\text{Ray}}=2NL^{2}/\lambda\), where \(L=\sqrt{2A}\) denotes the diagonal length of each antenna element. Then, they propose the Bjornson distance as \(d_{b}=2L\sqrt{N}\). Notably, the Bjornson distance exhibits growth proportional to the square root of \(N\), in contrast to the linear growth with \(N\) seen in the Rayleigh distance. At least 95% of the maximum gain can be achieved when the communication distance is no less than the Bjornson distance.
#### Ii-A6 Equi-Power Line
Considering a uniform linear array (ULA) structure with only a line-of-sight (LoS) path, the received power under MRC from a point source is solely determined by the amplitude response and is independent of the signal phase. The authors in [11] define the ratio of the received power obtained with the SWM to that obtained with the PWM. When the communication distance goes to infinity, the ratio tends towards one. They propose an equi-power line, the locus at which this ratio reaches a pre-defined threshold. The closed-form analytical expression for this ratio leads to an equi-power line located at approximately \(2.86D\), with \(D\) representing the array aperture, when the source aligns perpendicularly with the middle of the ULA. For other source angles, numerical methods are employed to determine the equi-power line. Interestingly, the study reveals that within the incident angle range of \([-\pi/6,\pi/6]\), the received power under PWM consistently serves as an upper bound for that under SWM. Beyond this range, as the distance increases, the power corresponding to PWM initially functions as an upper bound but subsequently transforms into a lower bound for the power under SWM.
#### Ii-A7 Equi-Power Surface
Considering a uniform circular planar array (UCPA) structure with only an LoS path, the received power under MRC from a point source relies only on the amplitude response. The authors in [11] define a normalized received power, which signifies the received power obtained by the SWM relative to that obtained by the PWM. Subsequently, they construct an equi-power surface at which the normalized received power attains a predefined threshold. Utilizing a derived closed-form expression, the equi-power surface is situated at approximately \(3.96D\), with \(D\) representing the length of the side of the UCPA, when the source is perpendicular to the center of the UCPA. When the source is located at other angles, the equi-power surface can be obtained numerically.
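To make the closed-form MISO/SIMO boundaries above easy to compare, the short script below evaluates them side by side; the array geometry (a 0.5 m aperture at 100 GHz with half-wavelength element spacing) and the incident angle are illustrative values, not ones taken from the cited works.

```python
# Side-by-side evaluation of the MISO/SIMO boundaries quoted above for an
# illustrative 0.5 m aperture at 100 GHz with half-wavelength spacing.
import numpy as np

c, f = 3e8, 100e9
lam = c / f                          # carrier wavelength
D = 0.5                              # array aperture (m), example value
N = int(D / (lam / 2)) + 1           # number of elements at lambda/2 spacing
L = np.sqrt(2) * (lam / 2)           # diagonal of a (lambda/2 x lambda/2) element
theta = np.deg2rad(30)               # incident angle, example value

boundaries = {
    "Rayleigh distance":          2 * D**2 / lam,
    "Critical distance [7]":      9 * D,
    "Effective Rayleigh [9]":     0.367 * np.cos(theta)**2 * 2 * D**2 / lam,
    "Bjornson distance [10]":     2 * L * np.sqrt(N),
    "Equi-power line (ULA) [11]": 2.86 * D,
}
for name, dist in boundaries.items():
    print(f"{name:>28s}: {dist:8.2f} m")
```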
### _MIMO System_
For the MIMO system, a widely recognized NF-FF boundary is the extended version of the Rayleigh distance defined as \(2\left(D_{\text{T}}+D_{\text{R}}\right)^{2}/\lambda\), where \(D_{\text{T}}\) and \(D_{\text{R}}\) denote the maximum array aperture at the transmitter (Tx) and receiver (Rx), respectively.
#### Ii-B1 Threshold Distance in [12]
The authors in [12] propose a threshold distance below which the capacity of the SWM surpasses that of the PWM by at least a factor of 1.5, given a specific array size. In practical terms, this threshold distance marks a point where the capacity underestimation error when using the PWM is 50%. By using the empirical fitting techniques, they derive the threshold distance as \(4L_{\text{T}}L_{\text{R}}\cos(\theta_{\text{T}})\cos(\theta_{\text{R}})\), where \(L_{\text{T}}\) (\(L_{\text{R}}\)) is the array size at the Tx (Rx) in units of wavelength, and \(\theta_{\text{T}}\) (\(\theta_{\text{R}}\)) is the rotated angle at the Tx (Rx). Note that this threshold distance is in units of wavelength. This threshold distance can also be regarded as the 0.225 dB-down beamwidth distance of the array. When considering the half-power beamwidth (3 dB) of the array, the threshold distance can be calculated as \(1.13L_{\text{T}}L_{\text{R}}\cos(\theta_{\text{T}})\cos(\theta_{\text{R}})\).
#### Ii-B2 Threshold Distance in [13]
Building upon an approximation of the largest eigenvalue of LoS MIMO channels employing uniform linear arrays (ULA) structure, the authors in [13] obtain a threshold distance at which the ratio of the largest eigenvalues, as given by SWM and PWM, reaches a predefined threshold. This threshold distance is given by \(\tau_{g}d_{\text{T}}d_{\text{R}}\cos(\theta_{\text{T}})\cos(\theta_{\text{R}})/\lambda\), where \(d_{\text{T}}\) (\(d_{\text{R}}\)) is the antenna spacing at the Tx (Rx), \(\theta_{\text{T}}\) (\(\theta_{\text{R}}\)) is the rotated angle at the Tx (Rx), and \(\tau_{g}\) is an auxiliary variable which is dependent on the antenna number of the Tx and Rx. The exact value of \(\tau_{g}\) can be found in [13]. This approach is relatively accurate when the antenna number is small. However, when the antenna number is large, the exact largest eigenvalue cannot be obtained. Moreover, this approach solely considers the largest eigenvalue, which imposes limitations on the accuracy of the threshold distance.
#### Ii-B3 Effective Multiplexing Distance
Considering spatial multiplexing, the authors in [14] propose an effective multiplexing distance \(D_{\text{max}}^{(m)}\). This distance represents the farthest range at which the channel can efficiently accommodate
\(m\) independent spatial streams simultaneously at a specific signal-to-noise ratio (SNR). The effective multiplexing distance can be approximated by \(D_{\max}^{(m)}\simeq D_{\mathrm{T}}D_{\mathrm{R}}/(\lambda\widetilde{S}_{\mathrm{asy},m}^{*}(\gamma))\), where \(D_{\mathrm{T}}\) (\(D_{\mathrm{R}}\)) is the array aperture at the Tx (Rx), \(\lambda\) is the carrier wavelength, \(\gamma\) is the SNR requirement, and \(\widetilde{S}_{\mathrm{asy},m}^{*}(\gamma)\) is an auxiliary variable that accounts for the threshold at which the MIMO system can support \(m\) independent spatial streams under the given SNR \(\gamma\). The specific values of \(\widetilde{S}_{\mathrm{asy},m}^{*}(\gamma)\) can be determined through numerical simulations. Denoting the Rayleigh distance as \(r_{\mathrm{Ray}}\), the communication distance \(D\) can be categorized into three intervals: 1) For \(D\in(0,r_{\mathrm{Ray}}]\), the MIMO system can consistently achieve the full multiplexing gain by adjusting the orientations of the antennas; 2) For \(D\in(r_{\mathrm{Ray}},D_{\max}^{(m)})\), the MIMO system can effectively support \(m\) streams at the specified SNR \(\gamma\); 3) For \(D\in(D_{\max}^{(m)},\infty)\), the channel exhibits only one dominant eigenvalue, thus supporting a single stream.
## IV Proposed Demarcation Based on Effective Degrees of Freedom
In this section, we first introduce a new NF-FF demarcation criterion focusing on the effective degrees of freedom (EDoF) of the channel, then investigate the characteristics of the EDoF-based boundary in typical application scenarios and its implications for system performance.
### _Demarcation Criterion_
For a MIMO system, channel capacity is a commonly utilized criterion to evaluate the system performance, which represents the information-transmission ability of a communication system. Channel capacity is closely related to the EDoF of the channel, where the EDoF represents the equivalent number of single-input-single-output sub-channels1. An information-theory-originated definition of the EDoF is \(\left(\frac{\mathrm{tr}(\mathbf{R})}{\|\mathbf{R}\|_{\mathrm{F}}}\right)^{2}\)[15], where \(\mathbf{R}\) denotes the spatial correlation matrix of the MIMO channel matrix, while \(\mathrm{tr}(\cdot)\) and \(\|\cdot\|_{\mathrm{F}}\) denote the trace and Frobenius norm of a matrix, respectively. In order to characterize the channel performance (e.g., channel capacity) more accurately, we introduce a novel NF-FF demarcation approach based upon the EDoF. Specifically, the boundary \(r_{\mathrm{Th}}\) between the NF and FF is defined such that the EDoF
Fig. 2: Antenna array configurations considered in this article. (a) Both the BS and user are equipped with a ULA, where \(\alpha\) denotes the angle between the center of the BS ULA and the center of the user ULA, and \(\beta\) is the angle of the user ULA with respect to the positive Z-direction within the YZ-plane. (b) The BS and user are equipped with a URA and a ULA, respectively, where \(\theta\) represents the angle of the user ULA with respect to the positive Z-direction and parallel with the XZ-plane. (c) The BS and user are equipped with an arched “URA” and a ULA, respectively, where the edge of the URA along the X-direction is arched according to a certain curvature radius \(R\), while \(\theta\) represents the angle of the user ULA with respect to the positive Z-direction and parallel to the XZ-plane. It is further postulated that the horizontal edge of the URA forms part of a semicircular arc with a curvature radius of \(R\), hence the semicircular cylindrical surface can be regarded as a special case of the arched URA architecture with the horizontal edge being a complete semicircular arc.
of the MIMO system equals a pre-defined threshold \(\eta\) at \(r_{\text{Th}}\) under the SWM. Since the EDoF is 1 under the PWM when only an LoS path exists between the Tx and Rx, the threshold value of EDoF \(\eta\) can be set to a value slightly larger than 1. This EDoF demarcation criterion can capture the differential spatial characteristics of the MIMO system between the SWM and PWM, and is more explicitly related to key performance indicators, such as the capacity and multiplexing capability, of the channel.
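The criterion can be evaluated numerically in a few lines. The sketch below computes the EDoF of an LoS ULA-to-ULA channel under the SWM at a few distances; the BS and user geometries mirror those quoted for Fig. 3(c), while the free-space LoS channel model itself is a simplifying assumption of the sketch.

```python
# Numerical sketch of the EDoF criterion: EDoF = (tr(R)/||R||_F)^2, with R the
# spatial correlation matrix of a spherical-wavefront LoS MIMO channel.
import numpy as np

def ula_positions(num_el, length, distance=0.0):
    """Element coordinates of a ULA along the z-axis, centred at (distance, 0, 0)."""
    z = np.linspace(-length / 2, length / 2, num_el)
    return np.stack([np.full(num_el, distance), np.zeros(num_el), z], axis=-1)

def edof(tx, rx, lam):
    d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=-1)
    H = np.exp(-1j * 2 * np.pi * d / lam) / d          # SWM LoS channel
    R = H @ H.conj().T                                  # spatial correlation matrix
    return (np.trace(R).real / np.linalg.norm(R, "fro")) ** 2

lam = 3e8 / 100e9                                       # 100 GHz carrier
tx = ula_positions(128, 1.0)                            # 1 m, 128-element BS ULA
for r in (1, 5, 20, 100):
    rx = ula_positions(2, 0.1, distance=r)              # 0.1 m, 2-element user ULA
    print(f"distance {r:>3d} m -> EDoF = {edof(tx, rx, lam):.4f}")
```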
### _Case Studies_
In the sequel, we will unveil the main behaviors of the proposed EDoF-based NF-FF boundary and how it differs from the classic Rayleigh distance through some examples. We consider point-to-point MIMO systems, as illustrated in Fig. 2, where the user is equipped with a ULA, and the antenna array at the BS can be a ULA or a URA. It is noteworthy that besides the conventional ULA and URA, we also take into account an arched "URA" architecture (shown in Fig. 2(c)), in which one of the edges of the URA is bent according to a certain curvature radius. This is motivated by advanced conformal antenna arrays, whose surface can be flexibly bent based on the shape of the object it is mounted on. Detailed settings are described in the figure caption.
Based upon the aforementioned EDoF formula \(\left(\frac{\text{tr}(\mathbf{R})}{\|\mathbf{R}\|_{\text{F}}}\right)^{2}\), we conduct mathematical derivations (not shown in this article) for the three types of MIMO systems in Fig. 2 to obtain accurate or approximate expressions of the NF-FF boundary \(r_{\text{Th}}\), and provide the results in Table II2. Note that these results are obtained under the paraxial approximation in optics, i.e., the distance between the Tx and Rx arrays is significantly larger than their apertures, which is practical in many real-world communication systems. For the ULA-to-ULA scenario, it is evident from the expression of \(r_{\text{Th}}\) that unlike the
Rayleigh distance which relies on the square of the sum of the Tx array aperture \(L_{\mathrm{T}}\) and the Rx array aperture \(L_{\mathrm{R}}\), the EDoF-based boundary \(r_{\mathrm{Th}}\) is related to the product of \(L_{\mathrm{T}}\) and \(L_{\mathrm{R}}\), as well as the numbers of array elements at the Tx and Rx \(N_{\mathrm{T}}\) and \(N_{\mathrm{R}}\), respectively. More specifically, \(r_{\mathrm{Th}}\) is proportional to \(\pi L_{\mathrm{T}}L_{\mathrm{R}}\) and inversely proportional to \(\left(N_{\mathrm{T}}-1\right)(N_{\mathrm{R}}-1)\lambda\), indicating that \(r_{\mathrm{Th}}\) will alter if the number of array elements at either the Tx or Rx changes, even if both array lengths remain the same. Furthermore, fewer array elements correspond to a larger boundary distance, thus the boundary for an array with two elements serves as an upper bound while that for an infinite number of array elements acts as a lower bound. For the URA-to-ULA scenario, assuming the ULA is equipped at the Rx, \(r_{\mathrm{Th}}\) is still proportional to \(\pi L_{\mathrm{R}}\) and inversely proportional to \((N_{\mathrm{R}}-1)\lambda\), whereas its relation with the size and number of elements of the URA is more complicated. When fixing the horizontal (vertical) side length \(L_{\mathrm{T}_{x}}\) (\(L_{\mathrm{T}_{z}}\)) of the URA, \(r_{\mathrm{Th}}\) is proportional to the vertical (horizontal) side length \(L_{\mathrm{T}_{z}}\) (\(L_{\mathrm{T}_{x}}\)), and similar rules apply to the numbers of elements \(N_{\mathrm{T}_{x}}\) and \(N_{\mathrm{T}_{z}}\) along the horizontal and vertical directions; when both side lengths of the URA vary, \(r_{\mathrm{Th}}\) depends on the product of the sinusoidal functions containing \(\frac{L_{\mathrm{T}_{x}}}{N_{\mathrm{T}_{x}}-1}\) and \(\frac{L_{\mathrm{T}_{z}}}{N_{\mathrm{T}_{z}}-1}\). In addition, it is worth noting that when \(\theta=0^{\circ}\) (\(90^{\circ}\)), \(r_{\mathrm{Th}}\) becomes independent of the side length and number of elements along the horizontal (vertical) direction of the URA. In other words, \(r_{\mathrm{Th}}\) is proportional to the effective URA aperture \(\sqrt{L_{\mathrm{T}_{x}}^{2}\sin^{2}\left(\theta\right)+L_{\mathrm{T}_{z}}^{2}\cos^{2}\left(\theta\right)}\) projected onto the direction that the ULA lies in. For the arched-URA-to-ULA case, the mathematical derivation shows that \(r_{\mathrm{Th}}\) is approximately identical to that for the URA-to-ULA setting if the curvature radius \(R\) is obviously larger than the arc length.
Next, we provide quantitative analysis on the EDoF-based
Fig. 3: Characterization of the EDoF-based NF-FF boundary under various circumstances. (a) Comparison of different NF-FF boundaries, where the Rx is equipped with a two-element ULA with a length of 0.05 m, the horizontal side length of the Tx URA is fixed to 0.049 m, the orientation angle \(\theta\) in Fig. 2(b) is set to \(10^{\circ}\), and the X-axis values denote the maximum aperture of the Tx ULA or URA. (b) EDoF-based boundary for the arched URA in Fig. 2(c), in which the approximated values are calculated according to the URA results in Table II, and the simulated values are obtained via numerical simulation using the actual arched array architecture. The vertical length of the arched URA is \(0.2\) m, the length of the ULA is \(0.05\) m, and the orientation angle \(\theta\) of the ULA is \(45^{\circ}\). (c) Variation of the EDoF-based boundary with the position and orientation angles \(\alpha\) and \(\beta\) depicted in Fig. 2(a). The lengths of the Tx and Rx ULAs are 1 m and 0.1 m, respectively, and the numbers of elements of the Tx and Rx ULAs are 128 and 2, respectively. (d) Variation of the EDoF-based boundary with the orientation angle \(\theta\) and the vertical length of the URA depicted in Fig. 2(b). The aperture of the URA is 0.5 m, and the numbers of elements along the horizontal and vertical directions of the URA are both eight. The ULA has two elements and its length is 0.1 m.
NF-FF boundary values to gain more intuitive insights. In all the simulations, the EDoF threshold \(\eta\) is set to 1.01, and the carrier frequency is 100 GHz. Fig. 3 demonstrates EDoF-based NF-FF boundary values under a variety of circumstances, where the simulation settings are detailed in the figure caption. Several relevant observations can be drawn from Fig. 3: First, the Rayleigh distance can be smaller or larger than the EDoF-based boundary, as delineated in Fig. 3(a), implying that it may underestimate or overestimate the NF region from the viewpoint of supportable spatial streams. Second, since the length of the Rx ULA is fixed, the EDoF-based boundary grows linearly with the Tx array aperture for the ULA-to-ULA scenario, whereas the increasing trend is non-linear with respect to the URA aperture for the URA-to-ULA case, which is consistent with the results in Table II. Third, Fig. 3(b) reveals that the analytical expression of the EDoF-based boundary for the arched URA is sufficiently accurate when the curvature radius \(R\) is no smaller than \(3L_{\mathrm{T}_{x}}/\pi\), or roughly \(L_{\mathrm{T}_{x}}\), i.e., the curvature radius is no smaller than the arc length. Moreover, it is seen from Fig. 3(c) that the boundary distance reaches its maximum when the two ULAs are normally facing each other with the line of centers perpendicular to them, and arrives at its minimum when the line of centers of the two ULAs is aligned with one of the ULAs. Additionally, the boundary distance increases with the orientation angle \(\theta\) in the URA-to-ULA scenario when the vertical length of the URA is small, and behaves oppositely for a large vertical length, due to the fact that the boundary is proportional to the effective URA aperture \(\sqrt{L_{\mathrm{T}_{x}}^{2}\sin^{2}\left(\theta\right)+L_{\mathrm{T}_{z}}^{2}\cos^{2}\left(\theta\right)}\) as analyzed above.
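As a complementary illustration of how such boundary values arise, the sketch below bisects over the Tx-Rx distance until the EDoF of a broadside ULA-to-ULA LoS channel drops to the threshold \(\eta=1.01\), and compares the result with the extended Rayleigh distance; the specific array dimensions are illustrative and the monotonic decay of EDoF with distance is assumed.

```python
# Numerical search for the EDoF-based boundary of a broadside ULA-to-ULA LoS
# channel at 100 GHz with eta = 1.01, compared against the extended Rayleigh
# distance 2*(L_T + L_R)^2 / lambda. Array sizes are illustrative.
import numpy as np

def ula_positions(num_el, length, distance=0.0):
    z = np.linspace(-length / 2, length / 2, num_el)
    return np.stack([np.full(num_el, distance), np.zeros(num_el), z], axis=-1)

def edof(tx, rx, lam):
    d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=-1)
    H = np.exp(-1j * 2 * np.pi * d / lam) / d
    R = H @ H.conj().T
    return (np.trace(R).real / np.linalg.norm(R, "fro")) ** 2

lam, eta = 3e8 / 100e9, 1.01
L_T, N_T, L_R, N_R = 0.5, 64, 0.05, 2
tx = ula_positions(N_T, L_T)

lo, hi = 0.1, 1000.0          # assume EDoF decays monotonically with distance
for _ in range(60):           # bisection on the threshold crossing
    mid = 0.5 * (lo + hi)
    if edof(tx, ula_positions(N_R, L_R, mid), lam) > eta:
        lo = mid
    else:
        hi = mid

r_th = 0.5 * (lo + hi)
rayleigh = 2 * (L_T + L_R) ** 2 / lam
print(f"EDoF-based boundary r_Th ~ {r_th:.2f} m vs extended Rayleigh {rayleigh:.2f} m")
```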
### _Implications for System Design_
#### Iv-C1 Channel Capacity
The NF-FF boundary has a crucial impact on channel capacity, since it determines the applicable regions of different electromagnetic wavefronts, which in turn dictate the channel model. It has been shown in Fig. 3(a) that the classic Rayleigh distance may underestimate or overestimate the NF zone depending on the antenna array configurations. Overestimation of the NF range can give rise to unnecessary computational complexity using the SWM, while underestimation may cause prediction errors in the channel capacity. For instance, Fig. 4 depicts the estimation error of the channel capacity for a ULA-to-ULA wireless communication scenario, where the estimation error is computed by comparing the channel capacity using the PWM with that using the SWM at the corresponding boundary distance. As evident from Fig. 4, the Rayleigh distance leads to large capacity errors for a wide range of SNR values, and the capacity error can reach over 35%, which is non-negligible and will seriously affect system deployment strategies.
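The magnitude of this error can be illustrated numerically. The sketch below evaluates the LoS capacity of two facing two-element ULAs (the configuration of Fig. 4) at the Rayleigh distance under the SWM and the PWM; the equal-power allocation and unit element gain used here are simplifying assumptions, so the exact percentages will differ from those in Fig. 4.

```python
# Rough illustration of the PWM capacity estimation error at the Rayleigh
# distance for two facing 0.05 m two-element ULAs at 100 GHz (cf. Fig. 4).
# Equal power allocation and unit element gain are simplifying assumptions.
import numpy as np

lam = 3e8 / 100e9
L, N = 0.05, 2
rayleigh = 2 * (2 * L) ** 2 / lam                    # extended Rayleigh distance (6.67 m)

def positions(dist):
    z = np.linspace(-L / 2, L / 2, N)
    return np.stack([np.full(N, dist), np.zeros(N), z], axis=-1)

def capacity(H, snr):
    eig = np.linalg.eigvalsh(H @ H.conj().T).real
    return np.sum(np.log2(1 + snr / N * np.clip(eig, 0.0, None)))

tx, rx = positions(0.0), positions(rayleigh)
d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=-1)
H_swm = np.exp(-1j * 2 * np.pi * d / lam)            # spherical-wavefront phases
H_pwm = np.ones((N, N), dtype=complex)               # planar wavefront at broadside

for snr_db in (0, 10, 20, 30):
    snr = 10 ** (snr_db / 10)
    c_swm, c_pwm = capacity(H_swm, snr), capacity(H_pwm, snr)
    err = abs(c_pwm - c_swm) / c_swm * 100
    print(f"SNR {snr_db:>2d} dB: C_SWM = {c_swm:5.2f}, C_PWM = {c_pwm:5.2f}, error = {err:4.1f}%")
```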
#### Iv-C2 Multiple Access
The FF channel depends only on the angle, making it unable to distinguish users located in the same direction. In contrast, the NF channel can discern both the angle and distance, allowing it to focus on a specific location even when multiple users share the same direction. To gain more insight into how the NF-FF boundary influences multiple access, we can regard the two-element ULA in the aforementioned simulations as two users, each with a single antenna. In this sense, the boundary distance in Fig. 3(c) and Fig. 3(d) implies the distance at which the spatial channels between the two users are approximately fully correlated. As illustrated by Fig. 3(c) and interpreted in the previous subsection, the boundary distance reaches its minimum when the two users are situated in the same direction from the viewpoint of the center of the BS antenna array, which is as expected since they share the same direction. Nevertheless, the boundary distance is non-zero, indicating that their spatial channels are still distinguishable up to a certain distance from the BS. On the other hand, the two users are most easily discernible when their relative directions are the farthest apart (corresponding to the largest boundary distance in Fig. 3(c)) given a fixed distance between them. Furthermore, it can be inferred from Fig. 3(d) that the spatial channels between two users are less correlated when the line connecting them is parallel to the longer side of a URA. Consequently, if solely aiming to serve more users in the NF, it is beneficial to place the URA at the BS such that its longer side is parallel to the direction with most users. In practice, of course, other factors also need to be taken into account in system design. It is apparent from the foregoing examples that knowledge of how the NF-FF boundary varies with the azimuth and elevation angles is helpful in designing adaptive algorithms for channel estimation, beamforming/beam-focusing, and beam management, so as to sense and serve different users more efficiently based on their locations and relative orientations.
## V Conclusion
In this article, we discussed the importance of identifying the NF regime for ELAAs, summarized existing NF-FF boundaries, and propounded a novel NF-FF demarcation scheme based on the EDoF of the MIMO channel. We investigated the key influencing factors and behaviors of the EDoF-based boundary for various antenna array configurations, including conformal antenna arrays, and analyzed the implications for
Fig. 4: Estimation error of the channel capacity at the Rayleigh distance and the EDoF-based boundary. Both the Tx and Rx are equipped with a two-element ULA, whose lengths are both 0.05 m, and the Rayleigh distance and the EDoF-based boundary herein are 6.67 m and 18.54 m, respectively.
system design. The proposed NF-FF boundary is able to more accurately characterize system performance indicators such as channel capacity, as compared to the classic Rayleigh distance, and can provide more insights into wireless system deployment.
|
2309.14557 | Disruption Detection for a Cognitive Digital Supply Chain Twin Using
Hybrid Deep Learning | Purpose: Recent disruptive events, such as COVID-19 and Russia-Ukraine
conflict, had a significant impact of global supply chains. Digital supply
chain twins have been proposed in order to provide decision makers with an
effective and efficient tool to mitigate disruption impact. Methods: This paper
introduces a hybrid deep learning approach for disruption detection within a
cognitive digital supply chain twin framework to enhance supply chain
resilience. The proposed disruption detection module utilises a deep
autoencoder neural network combined with a one-class support vector machine
algorithm. In addition, long-short term memory neural network models are
developed to identify the disrupted echelon and predict time-to-recovery from
the disruption effect. Results: The obtained information from the proposed
approach will help decision-makers and supply chain practitioners make
appropriate decisions aiming at minimizing negative impact of disruptive events
based on real-time disruption detection data. The results demonstrate the
trade-off between disruption detection model sensitivity, encountered delay in
disruption detection, and false alarms. This approach has seldom been used in
recent literature addressing this issue. | Mahmoud Ashraf, Amr Eltawil, Islam Ali | 2023-09-25T22:03:09Z | http://arxiv.org/abs/2309.14557v1 | # Disruption Detection for a Cognitive Digital Supply Chain Twin Using Hybrid Deep Learning
###### Abstract
**Purpose:** Recent disruptive events, such as COVID-19 and the Russia-Ukraine conflict, have had a significant impact on global supply chains. Digital supply chain twins have been proposed in order to provide decision makers with an effective and efficient tool to mitigate disruption impact.
**Methods:** This paper introduces a hybrid deep learning approach for disruption detection within a cognitive digital supply chain twin framework to enhance supply chain resilience. The proposed disruption detection module utilises a deep autoencoder neural network combined with a one-class support vector machine algorithm. In addition, long-short term memory neural network models are developed to identify the disrupted echelon and predict time-to-recovery from the disruption effect.
**Results:** The information obtained from the proposed approach will help decision-makers and supply chain practitioners make appropriate decisions aimed at minimizing the negative impact of disruptive events based on real-time disruption detection data. The results demonstrate the trade-off between disruption detection model sensitivity, encountered delay in disruption detection, and false alarms. This approach has seldom been used in recent literature addressing this issue.
**Keywords: Digital Twin, Deep Learning, Machine Learning, Supply Chain Management, Supply Chain Resilience, Disruption Detection**
## 1 Introduction
Local and global crises severely impact global supply chains. Hurricane Katrina in 2005, the Japanese tsunami in 2011, COVID-19 in late 2019, and the Suez Canal blockage in 2021 disrupted the flow of goods and materials in global supply chains. Recent power outages and industrial shutdowns in China have affected many supply chains with limited supply and long delays (Feng, 2021). Furthermore, climate change risks may evolve and disrupt global supply chains through natural disasters, resulting in plant shutdowns and disruptions to mining operations and logistics (Ghadge, Wurttmann, & Seuring, 2019). Finally, the Russia-Ukraine conflict is expected to adversely impact many supply chains worldwide, as well as global logistics (Eshkenazi, 2022).
In 2021, 68% of supply chain executives reported constantly facing disruptive events since 2019 (Gartner, 2022). Therefore, proper disruption management is vital to minimise negative disruption impacts and avoid supply chain collapse. Supply chain disruption management refers to the approaches and policies adopted to recover from unexpected disruptive events which cause a high adverse impact on supply chain performance and are characterised by low occurrence frequency (Ivanov, 2021). Some disruptive events, such as supplier unavailability, can have a prolonged impact during the post-disruption period due to delayed orders and backlogs. Supply Chain Resilience (SCR) refers to the supply chain's ability to withstand, adapt, and recover from disruptions to fulfil customer demand and maintain target performance (Hosseini, Ivanov, & Dolgui, 2019). For dynamic systems, SCR is a performance-controlled systemic property and goal-directed. In other words, disruption absorption allows for maintaining the intended performance in the event of a disruption. At the same time, the feedback control embodied in recovery control policies makes SCR self-adaptable (Ivanov, 2021).
SCR considers disturbances in the supply chain, such as supplier unavailability, and the disruption impact on supply chain performance. Moreover, SCR seeks to restore normal operations by adopting recovery policies. As a result, SCR helps to ensure the firm's survival after severe adverse events. Resilience may be realised by (1) redundancies, such as subcontracting capabilities and risk mitigation stocks, (2) recovery flexibility to restore regular performance, and (3) end-to-end supply chain visibility (Ivanov, 2021).
With the evolution of Industry 4.0, many businesses were encouraged to carry out the transition towards digitalisation. Gartner (2018) predicted that by 2023, at least half of the world's largest corporations would be employing Artificial Intelligence (AI), advanced analytics, and the Internet of Things (IoT) in supply chain operations. Big Data Analytics (BDA) advancements
and real-time data availability offered by IoT technologies resulted in the emergence of Digital Twins (DTs). A DT is a digital representation of a real-world physical system (Qamsane et al., 2019).
A Digital Supply Chain Twin (DSCT), as defined by Ivanov, Dolgui, Das, and Sokolov (2019), is "a computerised model of the physical system representing the network state for any given moment in real-time". The DSCT imitates the supply chain, including any vulnerability, in real-time. This real-time representation helps improve SCR through an extensive end-to-end supply chain visibility based upon logistics, inventory, capacity, and demand data (Ivanov and Dolgui, 2020).
DSCTs can improve SCR, minimise risks, optimise operations, and boost performance (Pernici et al., 2020). DTs provide up-to-date real-time data which reflects the most recent supply chain state. Real-time data allows for the early detection of supply chain disruptions and rapid response through recovery plans. Moreover, the integration of optimisation engines with DTs enables making the most cost-effective operational decisions (Frazzon, Freitag, and Ivanov, 2020).
The concept of Cognitive Digital Twins (CDTs) has emerged during the past few years, referring to DTs that possess additional capabilities, such as communication, analytics, and cognition (Zheng, Lu, and Kiritsis, 2021). CDTs were first introduced in the industry sector in 2016, followed by several attempts to provide a formal definition of CDTs (Zheng et al., 2021). For instance, Lu (2020) defined CDTs as "DTs with augmented semantic capabilities for identifying the dynamics of virtual model evolution, promoting the understanding of inter-relationships between virtual models and enhancing the decision-making". CDTs which utilise machine learning can sense and detect complex and unpredictable behaviours. Therefore, a Cognitive Digital Supply Chain Twin (CDSCT) permits disruption detection in the supply chain and quick deployment of recovery plans in real-time upon disruption detection.
Motivated by recent global supply chain disruptions, digital transformation efforts, and the absence in the literature of operational frameworks that utilize a CDSCT for disruption detection and time-to-recovery prediction, this paper introduces a framework to help enhance Supply Chain Resilience (SCR) through decision support by adopting Digital Supply Chain Twins (DSCTs), building upon the conceptual framework introduced by Ivanov and Dolgui (2020). Additionally, the adoption of data-driven AI models in DSCTs enables monitoring the supply chain state, which helps detect supply chain disruptions in real-time, and optimising recovery policies to recover from these disruptions. Real-time disruption detection enables the decision-makers to respond quickly to disruptions through early and efficient deployment of recovery policies. AI models play an important role in discovering abnormal patterns in data. As a result, this paper introduces a hybrid deep learning approach for disruption detection in a make-to-order three-echelon supply chain. The proposed approach is presented within a CDSCT framework to improve SCR through
real-time disruption detection. The introduced approach allows the decision-makers to identify the disrupted echelon and obtain an estimate of the Time-To-Recovery (TTR) from a disruptive event upon disruption detection.
The remainder of this paper is organised as follows. Section 2 reviews the relevant literature. Then, section 3 introduces and describes the problem at hand. Afterwards, section 4 demonstrates pertinent machine learning concepts, followed by section 5, demonstrating the development steps. The results are shown in section 6 followed by section 7, demonstrating the managerial implications. Finally, section 8 provides concluding remarks, current research limitations, and directions for future work.
## 2 Review of literature
### Supply chain resilience
Many scholars proposed several signal-based approaches to evaluate SCR (Chen & Miller-Hooks, 2012; Falasca, Zobel, & Cook, 2008; Melnyk, Zobel, Macdonald, & Griffis, 2013; V.L.M. Spiegler, Naim, & Wikner, 2012; Torabi, Baghersad, & Mansouri, 2015). The proposed approaches involved simple models, such as simple aggregation models, and sophisticated models, such as deep learning. An aggregation-based approach was introduced to evaluate operational SCR (Munoz & Dunbar, 2015). A single evaluation metric across multiple tiers in a multi-echelon supply chain was developed by aggregating several transient response measures. The transient response represents the change in supply chain performance due to a disruptive event. The transient response measures evaluated supply chain performance across multiple dimensions. These dimensions were (1) TTR, (2) disruption impact on performance, (3) performance loss due to disruption, and (4) a weighted-sum metric to capture the speed and shape of the transient response. This approach could explain the performance response to supply chain disruptions better than individual dimensions of resilience at the single-firm level.
A system dynamics-based approach was proposed to quantify SCR at a grocery retailer (V. Spiegler, Potter, Naim, & Towill, 2015). SCR was evaluated based on the supply chain response to the dynamic behaviour of stock and shipment in a distribution centre replenishment system. Considering the inherent non-linear system behaviour eliminates preliminary analysis of non-linearity effects which helps simulate complex supply chains (Ivanov, Sethi, Dolgui, & Sokolov, 2018).
A hierarchical Markov model was introduced to integrate advance supply signals with procurement and selling decisions (Gao, Yang, Zhang, & Luo, 2017). The proposed model captured essential features of advance supply signals for dynamic risk management. In addition, the model could be used to make a signal-based dynamic forecast. The strategic relationship between signal-based forecast, multi-sourcing, and discretionary selling was revealed. However, future supply volatility and variability are expected to affect the future supply forecast. The findings revealed a counter-intuitive insight. A
model that disregards both volatility and variability of the uncertain future supply might outperform the one that considers the variability of the uncertain future supply. Finally, a signal-based dynamic supply forecast was recommended under considerable supply uncertainty and a moderate supply-demand ratio.
Deep learning models for enhancing SCR could outperform the classical models. A deep learning approach was introduced based on Artificial Neural Networks (ANNs) (Radosavljevic, Lucanin, Ruger, & Golubovic, 2021). This approach aims at identifying disruptions related to temperature anomalies in the cold supply chain during transport. The ANN-based model was compared to another approach based on BDA and mathematical modelling. Based on a simulation model and a real-world case, the ANN-based model outperformed the other model based on BDA and mathematical modelling.
Moreover, hybrid deep learning models could outperform plain deep learning models for anomaly detection. A hybrid deep learning approach was presented to detect anomalies in a fashion retail supply chain (Nguyen, Tran, Thomassey, & Hamad, 2021). The hybrid deep learning model involved a deep Long-Short term memory (LSTM) autoencoder and classic machine learning to extract meaningful information from the data. Then, semi-supervised machine learning was applied in the form of a One-Class Support Vector Machine (OCSVM) algorithm to detect sales anomalies. Based on a real case for a company in France, the results showed that hybrid approaches could perform better than purely deep learning-based approaches.
### Digital supply chain twins for enhancing supply chain resilience
Several studies extended the application of DSCTs in many aspects to support decision-making and enhance SCR. A machine learning approach was introduced to improve SCR through resilient supplier selection in a DT-enabled supply chain (Cavalcante, Frazzon, Forcellini, & Ivanov, 2019). The introduced approach could analyse the supplier performance risk profiles under uncertainty through data-driven simulation for a virtual two-echelon supply chain. The results revealed that combining machine learning-based methods with DSCT could enhance SCR, especially when redesigning the supply network.
A notion of DSCT to support decision-making and improve SCR was explained in (Ivanov et al., 2019). The interrelationships between supply chain digital technology and disruption risk effects in the supply chain were investigated. Then, a framework for risk management in supply chain management was introduced. The results indicated that future decision support systems would utilise DSCTs and digital technologies, such as IoT and BDA. As a result, the available real-time data could provide information regarding the scope and impact of disruptions. The feedback from DSCTs could be used to restore the pre-disruption performance by testing different policies. The integration between BDA and a DT for an automotive supply chain was introduced
to support decision-making and adapt to new scenarios in real-time (Vieira, Dias, Santos, Pereira, & Oliveira, 2019).
Another framework based on real-time disruption detection was presented to support decision-making for a DSCT for disruption risk management (Ivanov & Dolgui, 2020). This framework would enable efficient deployment of recovery policies, reliable creation of disruption scenarios for supply chain risk analysis, and revelation of the connections between risk data, disruption modelling, and performance evaluation.
The weaknesses in SCR modelling were highlighted in the face of foreseeable disruptions (Golan, Trump, Cegan, & Linkov, 2021). The findings showed that DSCTs could better allow decision-makers to evaluate efficiency/resilience trade-offs. Furthermore, during the post-disruption phase, DTs can help optimise system performance.
Corresponding to the COVID-19 impact on global supply chains, DSCTs were used to examine the effect of a real-life pandemic disruption scenario on SCR for a food retail supply chain (Burgos & Ivanov, 2021). The results uncovered the underlying factors that affect supply chain performance, such as pandemic intensity and customer behaviour. The findings affirmed the importance of DSCTs for building resilient supply chains.
### Cognitive digital twins
Many scholars introduced different architectures and implementations for CDTs in various fields, such as condition monitoring of assets, real-time monitoring of finished products for operational efficiency, and supporting demand forecasting and production planning (Zheng, Lu, & Kiritsis, 2021). In the field of manufacturing and supply chains, introduced architectures focused on detecting anomalous behaviour in manufacturing systems, improving operations, and minimizing cost across the supply chain (Qamsane et al., 2019; Raleanu, Borangiu, Ivanescu, Morariu, & Anton, 2019). A CDT architecture was proposed for real-time monitoring and evaluation for a manufacturing flow-shop system (Qamsane et al., 2019). The CDT platform could forecast and identify abnormalities using the available data from interconnected cyber and physical spaces. In addition, another architecture was introduced for a shop floor transportation system to predict and identify anomalous pallet transportation times between workstations (Raleanu et al., 2019). Based on two different showcases, both architectures showed that CDTs could improve operations through optimal scheduling in real-time and enhanced resource allocation.
A CDT framework was introduced for logistics in a modular construction supply chain (Lee & Lee, 2021). The proposed CDT could predict logistics-related risks and arrival times to reduce costs using IoT and Building Information Modeling (BIM). Furthermore, an approach for a CDT was proposed in agile and resilient supply chains (Kalaboukas, Rozanec, Kosmerlj, Kiritsis, & Arampatzis, 2021). The CDT could predict trends in dynamic environments to guarantee optimal operational performance. This approach was
elaborated through a connected and agile supply chain. The deployed model considers collaboration among different actors as enablers for information exchange, processing, and actuation.
In addition, a deep learning-based approach has been introduced to predict TTR in a three-echelon supply chain (Ashraf, Eltawil, & Ali, 2022). The introduced approach was presented within a theoretically proposed CDSCT framework to enhance SCR. Obtained results showed that predicted TTR values tend to be relatively lower than the actual values at early disruption stages, then improve throughout the progression of the disruption effect on the supply chain network.
It has been observed from the literature that many recent contributions were directed towards SCR in response to the COVID-19 pandemic impact on global supply chains. Many scholars were concerned with quantifying SCR and deploying DSCT frameworks. On the one hand, deep learning-based models outperformed the classic ones for enhancing SCR. On the other hand, few contributions concerned with enhancing SCR through deep learning-based techniques in a CDSCT environment have been observed. In addition, the literature emphasized the role of CDSCTs in the field of supply chain disruption management. However, few contributions on the implementation of different CDSCT modules for disruption detection were observed. Therefore, this paper contributes to the literature by developing the CDSCT enabling modules for disruption detection.
This paper extends the framework proposed by Ashraf et al. (2022) by incorporating an additional layer for disrupted echelon identification. Furthermore, this paper extends their work by introducing (1) a hybrid deep learning approach for disruption detection and (2) a deep learning-based model for disrupted echelon identification. The introduced approaches are presented as sub-modules of a CDSCT for a make-to-order virtual supply chain. In addition, this paper reconsiders the inputs for the TTR prediction modules with the aim of obtaining better TTR estimates.
This study tries to answer two research questions. The main research question is "Is there a way to exploit the benefit of cognitive digital twins in the field of supply chain disruption management?" The second research question is "How to validate the introduced framework for incorporating cognitive digital twins into supply chain disruption management?" The first research question is addressed by introducing a CDSCT framework that allows early disruption detection in a CDT-enabled make-to-order virtual supply chain. Early disruption detection is enabled through a hybrid deep learning-based approach using a deep autoencoder neural network and the OCSVM algorithm. In addition to early disruption detection, the CDSCT permits disrupted echelon identification and TTR prediction. The first research question is addressed through the introduced framework, while the second research question is addressed through the system implementation.
## 3 Problem statement
This paper introduces a hybrid deep learning approach for disruption detection within a CDSCT framework to enhance SCR. This approach involves (1) a training phase and (2) an operational phase. The _training phase_ involves training the disruption detection module and models for disrupted echelon identification and TTR prediction. After the training phase, the CDSCT can detect supply chain disruptions, identify disrupted echelons, and predict TTR from disruptions. Figure 1(a) demonstrates the CDSCT during the _operational phase_. Supply chain disruptions are detected based on a real-time data stream from an existing supply chain. The literature indicated that real-time data, enabled by IoT, is collected through multiple means, such as sensors and RFID tags (Ivanov and Dolgui, 2020). Then, the disrupted echelon is identified upon disruption detection, and TTR estimates are obtained. In addition, future supply chain states under the disruption impact can be forecasted.
Needed supply chain data for training the anomaly (disruption) detection module and TTR prediction model can be obtained from multiple sources. These sources include historical records, real-time data from an IoT-enabled supply chain, or a simulation model depicting a real or a virtual system.
Figure 1: The cognitive digital supply chain twin framework.
Figure 1(b) demonstrates the framework during the training phase. This phase involves training based on a historical data feed representing the supply chain performance in normal and disrupted states. The disrupted echelon is identified upon disruption detection. Then, a TTR estimate is obtained after feeding the labelled training data to the CDT. In practice, sufficient historical records of disruptions for training purposes may be unavailable due to the unpredictability and low occurrence frequency of disruptive events. In such cases, simulation modelling becomes the most convenient tool for augmenting the training data required for the development of machine learning models. This paper uses simulation modelling to simulate different disruption scenarios. In addition, the developed simulation model is used to generate the required data for training the disruption detection module for a make-to-order virtual three-echelon supply chain.
## 4 Methodology
### Deep autoencoders
An autoencoder is a special type of feedforward neural network trained to copy its input to its output by representing its input as a coding (Goodfellow, Bengio, & Courville, 2016). An autoencoder consists of three main components besides the input and output: (1) the encoder, (2) the coding, and (3) the decoder, Figure 2. Input data compression and decompression through the encoder and decoder, respectively, make autoencoders ideal for applications involving dimensionality reduction and feature extraction. The coding, \(z\), represents the compressed representation of the input vector \(x\), which contains the most representative information of the input.
Figure 2: Autoencoder architecture.
An autoencoder with three or more hidden layers in the encoder or the decoder network is considered a deep one (Subasi, 2020). The autoencoder is trained to minimise the reconstruction error between the input \(x\) and the output \(\hat{x}\). It is expected that an autoencoder trained on normal (non-disrupted) data will result in a high reconstruction error when given anomalous (disrupted) data (Malhotra et al., 2016). Therefore, an autoencoder neural network is used for the disruption detection problem at hand.
### The one-class support vector machine algorithm
OCSVM is a machine learning algorithm used for binary classification and anomaly detection. Anomaly detection refers to discovering outliers or abnormalities embedded in a large amount of normal data (Ma and Perkins, 2003). OCSVM works in a semi-supervised manner when anomaly detection is the application area. During training, an OCSVM learns to construct the boundary that separates observations under normal conditions from abnormal observations. The works by Ma and Perkins (2003) and Scholkopf, Williamson, Smola, Shawe-Taylor, and Platt (1999) introduce the inherent mechanism of OCSVM for anomaly detection in more detail. OCSVM is usually trained using only normal points, representing the positive class, because full consideration of all disruption scenarios is practically impossible. Then, during operation, the OCSVM checks whether new data points belong to the normal class or not. If an input data point is considered anomalous, it lies outside the boundary and belongs to the other class, usually referred to as the negative class (anomalous).
The OCSVM algorithm is applied for automatic disruption (anomaly) detection in a three-echelon supply chain. As a binary classification and anomaly detection algorithm, OCSVM was chosen as it enables disruption detection without a prohibitively extensive study of all potential disruption scenarios. The first principal component of the reconstruction error obtained from the autoencoder is used as the input to the OCSVM algorithm. The OCSVM algorithm eliminates the need for statistical analyses to set a threshold above which a data point is considered anomalous. In addition, the OCSVM algorithm does not necessitate any specific assumptions about the data, e.g., that the reconstruction error is normally distributed (Nguyen et al., 2021).
### Long-short term memory neural networks
The LSTM neural network is an important class of Recurrent Neural Networks (RNNs). LSTM neural networks were proposed by Hochreiter and Schmidhuber (1997). They provide memory to retain long-term dependencies among input data without suffering from the vanishing gradient problem (Li, Li, Wang, & Wang, 2019). Therefore, LSTM networks are suitable for representing sequential data, i.e., time series. A simple LSTM neural network of one neuron, Figure 3(a), receives an input \(x_{i}\), produces an output \(y_{i}\), and resends that output to itself. When the LSTM neural network is unfolded through time, it has the form
of a chain of repeated modules, Figure 3(b). At each time step (frame) \(i,i\in\{1,2,...,t\}\), the recurrent neuron receives the input \(x_{i}\) and its output from the previous time step, \(h_{i-1}\), to produce an output \(y_{i}\).
## 5 System implementation
This section lays out the implementation steps for developing the proposed approach on a desktop computer with a 2.9 GHz Intel Core i7 processor and 8 GB RAM. A virtual supply chain is modelled as a discrete event simulation model using AnyLogic 8.7 simulation software. Machine learning models are developed using Python 3.8, Scikit-learn 0.24, and Keras 2.6. The training time for different models ranged between two and five hours.
### The virtual supply chain structure
The three-stage flow line model with limited buffer capacity introduced by Buzacott, Shanthikumar, and George (1993) is used to develop a make-to-order virtual three-echelon supply chain. It is assumed that there is a single product under consideration, and alternatives to any echelon are not available. Hence, the service protocol permits backlogging. Figure 4 shows the main components of the virtual supply chain with potential sources of disruption.
A single supplier, manufacturer, and distributor constitute the three-echelon virtual supply chain. An additional component, _demand_, corresponds to the initiated customer order quantity. After a customer order is generated, it enters a First-Come-First-Served (FCFS) queue waiting to be fulfilled. The customer order generation rate follows a Poisson distribution with a mean value of \(\lambda\).
The _supplier_ provides the required raw material with a mean rate \(\mu_{1}\). The supplier is assumed to have unlimited buffer capacity. In contrast, the remaining two echelons are assumed to have a limited buffer capacity of ten
Figure 3: A long-short term memory neural network architecture.
units. After the raw material is prepared and delivered to the _manufacturer_, the products are manufactured with a processing rate \(\mu_{2}\). Then, the customer order is ready to be fulfilled through the _distributor_ after being processed with a processing rate \(\mu_{3}\). The processing rates at the supplier, manufacturer, and distributor are assumed to follow an exponential distribution.
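To make the queueing structure concrete, a minimal discrete-event sketch of this three-echelon make-to-order line is given below. It is only an illustration: the paper builds the model in AnyLogic, and the SimPy implementation, the single-server FCFS stations, and the omission of the finite buffer limits and disruption logic are simplifying assumptions of this sketch, not the authors' model.

```python
import random
import simpy

LAMBDA = 15          # mean order arrival rate (units/day), Poisson arrivals
MU = [18, 19, 20]    # service rates: supplier, manufacturer, distributor (units/day)

def order(env, stations, lead_times):
    """One make-to-order job flowing through the three echelons (FCFS)."""
    created = env.now
    for station, mu in zip(stations, MU):
        with station.request() as req:                 # wait until the echelon is free
            yield req
            yield env.timeout(random.expovariate(mu))  # exponential processing time
    lead_times.append(env.now - created)               # lead time: generation -> fulfilment

def generate_orders(env, stations, lead_times):
    while True:
        yield env.timeout(random.expovariate(LAMBDA))  # Poisson order generation
        env.process(order(env, stations, lead_times))

env = simpy.Environment()
stations = [simpy.Resource(env, capacity=1) for _ in range(3)]  # one server per echelon
lead_times = []
env.process(generate_orders(env, stations, lead_times))
env.run(until=1095)  # replication length in days (warm-up handling omitted here)
print(f"orders fulfilled: {len(lead_times)}, "
      f"mean lead time: {sum(lead_times) / len(lead_times):.2f} days")
```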
Different scenarios are considered to account for the supply chain performance under normal and disrupted circumstances. The normal scenario is denoted by \(S_{0}\), while potential disruption scenarios include unexpected failures at any single echelon \(i\) and are denoted by \(S_{i},i\in\{1,2,3\}\), where echelons 1, 2, and 3 correspond to the supplier, manufacturer, and distributor, respectively. In addition, the surge in demand scenario is considered and denoted by \(S_{4}\). The assumed values of the simulation model parameters are shown in Table 1.
Several parameters and metrics reflecting the supply chain state and performance are monitored. The parameters include (1) the interarrival time, \(T_{a}\), and (2) the processing time at echelon \(i\), \(T_{pi},i\in\{1,2,3\}\). Monitored metrics include
| Parameter | Value |
| --- | --- |
| Number of replications, \(N\) | 300 replications |
| Replication length, \(RL\) | 1095 days |
| Warm-up period length, \(WL\) | 180 days |
| Arrival rate, Poisson(\(\lambda\)) | 15 units/day |
| Number of orders per arrival, \(Q_{a}\) | 1 unit |
| Supplier service rate, Poisson(\(\mu_{1}\)) | 18 units/day |
| Supplier buffer capacity, \(q_{1}\) | \(\infty\) |
| Supplier server capacity, \(c_{1}\) | 1 unit |
| Manufacturer service rate, Poisson(\(\mu_{2}\)) | 19 units/day |
| Manufacturer buffer capacity, \(q_{2}\) | 15 units |
| Manufacturer server capacity, \(c_{2}\) | 1 unit |
| Distributor service rate, Poisson(\(\mu_{3}\)) | 20 units/day |
| Distributor buffer capacity, \(q_{3}\) | 10 units |
| Distributor server capacity, \(c_{3}\) | 1 unit |
| Disruption duration, \(D_{d}\) | \(D_{d}\in[30,60]\) days |
| Disruption occurrence, \(D_{t}\) | \(D_{t}\in[300,600]\) days |
| Disrupted arrival rate, Poisson(\(\lambda_{d}\)) | 30 units/day |
| Disrupted processing rate, Poisson(\(\mu_{di}\)) | \(\mu_{di}=0\) units/day \(\forall i\in\{1,2,3\}\) |

Table 1: Simulation model parameters.
Figure 4: Virtual supply chain components with potential sources of disruptions.
(1) units in the system \(WIP\), (2) queue length at echelon \(i\), \(L_{qi},i\in\{1,2,3\}\), (3) lead time \(LT\), (4) flow time \(FT\), and (5) the daily output \(K\) in units. _Lead time_ refers to the total time between customer order generation and fulfilment. The _flow time_ refers to the elapsed time from the start of order processing at the supplier until fulfilment. Daily records are obtained by averaging over the day. The \(WIP\) and \(L_{qi}\) are recorded on an hourly basis, while the remaining parameters and metrics are recorded upon order fulfilment.
#### Simulation model validation
The simulation model is validated using the closed-form model given by Buzacott et al. (1993) for a particular system configuration. That configuration assumes an infinite number of orders in front of the supplier. In addition, buffer capacity is not allowed at either the manufacturer or the distributor. The calculated rate at which orders leave the system (output rate) for that configuration, using the closed-form model, is compared to the estimated rate from the simulation model.
The simulation model is validated before generating the required data sets to verify the introduced approach. Therefore, a total of 916 single day replications are used for validation. The calculated output rate from the closed-form model is 10.69 units per day. The estimated output rate was \(10.48\pm 0.221\) units per day with a 99% confidence level. Moreover, a comparison between the calculated and estimated rates using a Z-test shows no significant difference with a 0.01 significance level.
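For illustration, the reported comparison can be reproduced with a short calculation using the figures quoted above (closed-form rate 10.69 units/day, simulated estimate 10.48 units/day with a 99% half-width of 0.221); converting the half-width back to a standard error via the normal critical value is an assumption about how the interval was constructed.

```python
from scipy.stats import norm

mu_closed_form = 10.69   # output rate from the closed-form model (units/day)
mu_sim = 10.48           # estimated output rate from the simulation (units/day)
half_width_99 = 0.221    # reported 99% confidence half-width of the estimate

z_crit = norm.ppf(1 - 0.01 / 2)      # ~2.576 for a two-sided test at alpha = 0.01
std_err = half_width_99 / z_crit     # back out the standard error from the half-width
z_stat = (mu_sim - mu_closed_form) / std_err

print(f"z = {z_stat:.2f}, critical value = ±{z_crit:.3f}")
print("significant difference" if abs(z_stat) > z_crit else "no significant difference")
```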
#### Data sets generation
Five data sets are generated corresponding to the five scenarios \(S_{i},i\in\{0,1,2,3,4\}\). These scenarios consider both normal and disrupted circumstances. The generated data sets for each scenario represent a multivariate time series that consists of 916 time records per replication. Each time step includes thirteen parameters (features). These features are (1) interarrival time, (2) supplier processing time, (3) manufacturer processing time, (4) distributor processing time, (5) supplier queue length, (6) manufacturer queue length, (7) distributor queue length, (8) work in process, (9) lead time, (10) flow time, (11) waiting time, (12) processing time, and (13) daily output.
Each disruptive event has a direct impact on some input features in the generated datasets. The surge in demand is represented by a decrease in feature (1), which consequently results in an increase in features (5), (9), and (11). The second type of disruptive event, capacity loss at any echelon, disrupts the whole system and affects features (2)-(13). For example, considering the capacity loss at the supplier, some of the affected features are impacted directly, such as feature (2), and others are impacted indirectly, such as feature (6), because of the discontinuity of the incoming material flow from the supplier due to the disruptive event.
### The disruption detection module
A semi-supervised hybrid deep learning approach is adopted to detect disruptions in the above-mentioned virtual supply chain, as depicted in Figure 5. The monitored supply chain parameters and performance metrics produce a multivariate time series with multiple time-dependent variables. Consequently, each variable may depend on other variables besides time dependency, making building an accurate model for disruption detection and TTR prediction a complex task. Therefore, a hybrid deep learning-based approach is adopted to tackle this challenge by using automatic feature extraction and learning of the underlying patterns in the input data.
#### 5.2.1 Data preprocessing
The input time series data are split into train, validation, and test sets using a split ratio of 60%, 20%, and 20%, respectively, for all scenarios. Due to the different scales on which the input variables are measured, data preprocessing is carried out by normalising the inputs using a min-max scaler, Equation 1.
\[x_{norm}^{i}=\frac{x^{i}-x_{min}^{i}}{x_{max}^{i}-x_{min}^{i}},i\in\{1,2,...,k\} \tag{1}\]
where \(x_{norm}^{i}\) denotes the normalised vector for a time-variate variable, \(x_{min}^{i}\) and \(x_{max}^{i}\) are the minimum and maximum values of vector \(x^{i}\), and \(k\) is the number of variables in the multivariate time series. Due to the relatively long time series, a sliding window of size 14 is applied as a preprocessing step. Afterwards, a deep autoencoder and the OCSVM algorithm detect disruptions based on the first principal component of the reconstruction error. Moreover, two LSTM neural networks are used to identify the disrupted echelon and predict TTR.
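A minimal sketch of this preprocessing step (the min-max scaling of Equation 1 plus the size-14 sliding window) might look as follows; the use of scikit-learn's `MinMaxScaler`, the chronological split, fitting the scaler on the training portion only, and the placeholder array are illustrative assumptions rather than the authors' exact code.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

WINDOW = 14  # sliding-window size (days)

def make_windows(series_2d, window=WINDOW):
    """Turn a (timesteps, features) array into (n_windows, window, features)."""
    return np.stack([series_2d[t:t + window]
                     for t in range(len(series_2d) - window + 1)])

# One replication as a (916, 13) multivariate time series; random placeholder data.
X_raw = np.random.rand(916, 13)

# Chronological 60/20/20 split; the scaler is fitted on the training part only.
n = len(X_raw)
train, val, test = np.split(X_raw, [int(0.6 * n), int(0.8 * n)])

scaler = MinMaxScaler()                         # per-feature scaling, as in Equation 1
train_w = make_windows(scaler.fit_transform(train))
val_w = make_windows(scaler.transform(val))
test_w = make_windows(scaler.transform(test))
print(train_w.shape)                            # (536, 14, 13) for this split
```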
Figure 5: The proposed approach for the disruption detection module in a cognitive digital supply chain twin environment.
#### 5.2.2 Disruption detection
A deep autoencoder with three encoder-decoder pairs is developed to reconstruct the inputs. The hidden and coding layers have sizes of 256, 128, 64, and 32, respectively. The learning rate and batch size are set to \(10^{-4}\) and 128, respectively. The autoencoder is trained for 1000 epochs using input data under normal circumstances generated from scenario \(S_{0}\). An epoch refers to a complete pass made by the model over the input data set during training.
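A possible Keras realisation of this autoencoder is sketched below. The layer sizes, flattened input length of 182, MAE loss, learning rate, batch size, and epoch count follow the text; the fully-connected (Dense) layer type, activation functions, and optimizer choice are assumptions, since the text does not specify them.

```python
from tensorflow import keras
from tensorflow.keras import layers

INPUT_DIM = 14 * 13  # flattened window: 14 timesteps x 13 features = 182 elements

inputs = keras.Input(shape=(INPUT_DIM,))
# Encoder: 256 -> 128 -> 64, with a coding layer of size 32
x = layers.Dense(256, activation="relu")(inputs)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dense(64, activation="relu")(x)
coding = layers.Dense(32, activation="relu")(x)
# Decoder mirrors the encoder and reconstructs the flattened window
x = layers.Dense(64, activation="relu")(coding)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dense(256, activation="relu")(x)
outputs = layers.Dense(INPUT_DIM, activation="sigmoid")(x)  # inputs are min-max scaled

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
                    loss="mae")  # mean absolute error, as used for the learning curve

# Training uses flattened normal-scenario (S0) windows of shape (n, 182), e.g.:
# autoencoder.fit(train_flat, train_flat, validation_data=(val_flat, val_flat),
#                 epochs=1000, batch_size=128)
```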
In the beginning, the OCSVM algorithm is trained using the first principal component of the obtained absolute error vectors for the test set only under normal circumstances, considering the scenario \(S_{0}\). Then, the OCSVM algorithm is tested using the test sets under disrupted circumstances under scenarios \(S_{i}\)\(\forall i\in\{1,2,3,4\}\). Model hyperparameters \(\nu\) and \(\gamma\) are set to 0.025 and 100, respectively. The first hyperparameter, \(\nu\), controls the sensitivity of the support vectors, while the latter, \(\gamma\), controls the boundary shape. High values of \(\nu\) lead to a more sensitive model, while high values of \(\gamma\) result in an overfit to the training data. At the end of this section, balancing model sensitivity with other performance metrics is discussed.
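The detection step built on top of the autoencoder could then be sketched as follows; the scikit-learn `PCA` and `OneClassSVM` calls are standard, while the synthetic reconstruction-error arrays stand in for the autoencoder outputs and are placeholders only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

# Absolute reconstruction errors |x - x_hat| per feature for each flattened window;
# random placeholders stand in for the autoencoder outputs in this sketch.
rng = np.random.default_rng(0)
err_normal = rng.random((500, 182)) * 0.05           # small errors on normal (S0) windows
err_disrupted = rng.random((300, 182)) * 0.05 + 0.3  # larger errors under disruption

pca = PCA(n_components=1)
pc1_normal = pca.fit_transform(err_normal)           # first principal component

ocsvm = OneClassSVM(kernel="rbf", nu=0.025, gamma=100)
ocsvm.fit(pc1_normal)                                # trained on normal data only

labels = ocsvm.predict(pca.transform(err_disrupted))  # +1 = normal, -1 = disrupted
print("flagged as disrupted:", np.mean(labels == -1))
```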
The OCSVM model results are mapped to a labelled data set for further performance evaluation. The selected performance metrics for the disruption detection model include (1) accuracy, (2) precision, (3) recall, and (4) F1-score. The accuracy, Equation 2, describes the overall model performance by calculating the ratio of correctly identified observations to the total observations. The precision, Equation 3, determines the ratio of correctly identified normal observations to the total number of observations identified as normal. On the contrary, the recall, Equation 4, defines the model sensitivity as the ratio of correctly identified normal observations to the total number of actual normal observations. Finally, the F1-score, Equation 5, is the harmonic mean of precision and recall.
\[\text{Accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{FP}+\text{FN}+ \text{TN}} \tag{2}\]
\[\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}} \tag{3}\]
\[\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}} \tag{4}\]
\[\text{F1-score}=2\times\frac{\text{Precision}\times\text{Recall}}{\text{ Precision}+\text{Recall}} \tag{5}\]
where TP, FP, FN, and TN are the true positive, false positive, false negative, and true negative counts. The true positive refers to the number of observations correctly identified as normal. In contrast, the false positive represents the number of abnormal observations incorrectly identified as normal. The false negative is the number of normal observations incorrectly identified as abnormal. The true negative represents the number of abnormal observations that are correctly identified.
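Once predictions are mapped to binary labels with normal as the positive class, these measures reduce to standard scikit-learn calls; the label vectors below are placeholders for illustration.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# 1 = normal (positive class), 0 = disrupted; placeholder labels for illustration
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, pos_label=1))
print("recall   :", recall_score(y_true, y_pred, pos_label=1))
print("F1-score :", f1_score(y_true, y_pred, pos_label=1))
```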
In order to provide the decision-maker with more relevant measures, two additional performance measures, (1) lag and (2) false-positive
percentage, are introduced. The _lag_ describes the encountered delay in disruption detection. On the other hand, the ratio of incorrectly classified observations prior to disruption occurrence defines the _false-positive percentage_. These additional performance measures provide a better understanding of the impact of changing model hyperparameters on model performance.
The OCSVM-based disruption detection model hyperparameters are selected by adopting a grid search approach. The main objective is to find the best performing combination of hyperparameter values based on different performance measures. Figure 6 summarises the results from the grid search concerning the effect of changing \(\nu\) and \(\gamma\) values on different performance measures. The x-axis represents \(\nu\) on a linear scale, while the y-axis represents \(\gamma\) using a log scale. A good model performance can be represented by a combination of high values of accuracy and F1-score in addition to low false alarm percentage. Evidently, better performance is realised at \(\nu\) in the range below 0.1 and relatively moderate values of \(\gamma\) between 0.1 and 100.
Figure 6: Grid search results for one-class support vector machine hyperparameter selection.
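A plain grid-search loop of the kind described above might look like the following sketch; the candidate value grids, the synthetic principal-component data, and the scoring details are illustrative assumptions rather than the exact search performed in the paper.

```python
import numpy as np
from itertools import product
from sklearn.svm import OneClassSVM
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
pc1_train = rng.normal(0.0, 0.1, (500, 1))              # normal-only training component
pc1_test = np.vstack([rng.normal(0.0, 0.1, (300, 1)),   # normal test points
                      rng.normal(1.5, 0.5, (200, 1))])  # disrupted test points
y_test = np.array([1] * 300 + [-1] * 200)               # +1 = normal, -1 = disrupted

results = []
for nu, gamma in product([0.01, 0.025, 0.05, 0.1, 0.25],
                         [0.01, 0.1, 1, 10, 100, 1000]):
    pred = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(pc1_train).predict(pc1_test)
    f1 = f1_score(y_test, pred, pos_label=1)
    false_alarms = np.mean(pred[y_test == 1] == -1)      # alarms raised on normal points
    results.append((nu, gamma, f1, false_alarms))

nu_best, gamma_best, f1_best, fa_best = max(results, key=lambda r: r[2])
print(f"best nu={nu_best}, gamma={gamma_best}, F1={f1_best:.3f}, false alarms={fa_best:.3f}")
```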
Further analysis is conducted to examine the individual effect of each hyperparameter, while the other is fixed, on the mean lag and false alarms within the range where good model performance has been observed. Figures 7(a) and 7(b) show the effect of changing \(\nu\) while \(\gamma\) is fixed at different values. The x-axis represents \(\nu\) while the y-axis represents the performance measure value. As indicated by the shown graphs, \(\gamma\) barely affects the performance measures. On the contrary, \(\nu\) significantly affects the model's performance at \(\nu\leq 0.1\).
Figures 7(c) and 7(d) examine the effect of changing \(\gamma\) while \(\nu\) is fixed at \(\nu\leq 0.1\). The x-axis represents \(\gamma\) on a log scale while the y-axis represents the performance measure value. As per the shown graphs, \(\gamma\) does not have a significant effect on the performance measures when compared to \(\nu\) for \(\gamma\in[0.01,1000]\). On the contrary, \(\nu\) significantly affects the model's performance: an increase in \(\nu\) results in a significant improvement in the mean lag, but more false alarms arise. Therefore, the values for \(\nu\) and \(\gamma\) are chosen to achieve lags as short as possible with the fewest false alarms.
Figure 7: Effect of changing \(\nu\) and \(\gamma\) values.
#### 5.2.3 Disrupted echelon identification
An LSTM neural network classifier is developed to identify the disrupted echelon upon disruption detection. The LSTM classifier is trained in a fully supervised manner. Therefore, the input class sequence is converted from a class vector to a binary class matrix using one-hot encoding. Then, the train and validation sequences are used to train the classifier with a learning rate of \(10^{-4}\) and a batch size of 32 for 20 epochs. Finally, the classifier is tested using the test set. The LSTM neural network classification model consists of two LSTM layers. Each layer has 16 units and a dropout rate of 0.1.
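A Keras sketch consistent with this description is shown below; the six output classes, softmax head, and window-shaped input follow the text, while the placement of dropout inside the LSTM layers and the optimizer choice are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

N_CLASSES = 6  # normal, surge in demand, capacity loss at each of 3 echelons, recovery

classifier = keras.Sequential([
    layers.LSTM(16, dropout=0.1, return_sequences=True, input_shape=(14, 13)),
    layers.LSTM(16, dropout=0.1),
    layers.Dense(N_CLASSES, activation="softmax"),
])
classifier.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
                   loss="categorical_crossentropy", metrics=["accuracy"])

# Class labels are one-hot encoded (keras.utils.to_categorical) before training, e.g.:
# classifier.fit(train_w, y_train_onehot, validation_data=(val_w, y_val_onehot),
#                epochs=20, batch_size=32)
```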
#### 5.2.4 Time-to-recovery prediction
An LSTM neural network-based model is developed to predict TTR using the incoming signal from the simulation model for different parameters and metrics. Different hyperparameter values are tested, and the best-performing set is chosen to predict the TTR based on the minimum validation loss. These hyperparameters are used to develop four TTR prediction models by considering a single disruption scenario at a time. The four TTR prediction models correspond to the four potential disruption scenarios \(S_{i},i\in\{1,2,3,4\}\).
Each model has two LSTM layers with 64 LSTM units each. The learning rate is set to \(10^{-4}\) and the dropout rate is 0.1 for each layer. An \(l1\) regularisation is applied to the first layer with a regularisation factor of \(10^{-3}\). Each model is trained with a batch size of 16 for twenty epochs. Each model is evaluated based on (1) Mean Absolute Error (MAE), (2) Mean Squared Error (MSE), (3) Root Mean Squared Error (RMSE), and (4) Mean Absolute Percentage Error (MAPE). These performance measures are given by Equations 6, 7, 8, and 9, respectively.
\[\text{MAE}=\frac{\sum\lvert y-\hat{y}\rvert}{N} \tag{6}\]
\[\text{MSE}=\frac{\sum\left(y-\hat{y}\right)^{2}}{N} \tag{7}\]
\[\text{RMSE}=\sqrt{\frac{\sum\left(y-\hat{y}\right)^{2}}{N}} \tag{8}\]
\[\text{MAPE}=\frac{\sum\frac{\lvert y-\hat{y}\rvert}{y}}{\text{N}} \tag{9}\]
where \(N\) is the number of TTR observations, while \(y\) and \(\hat{y}\) represent the actual and predicted TTR vectors.
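A corresponding sketch of one TTR prediction model (Section 5.2.4) could look as follows; the position of the \(l1\) penalty (here on the first layer's kernel), the optimizer, and the single-output regression head are assumptions consistent with, but not dictated by, the description above.

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

ttr_model = keras.Sequential([
    layers.LSTM(64, dropout=0.1, return_sequences=True, input_shape=(14, 13),
                kernel_regularizer=regularizers.l1(1e-3)),  # l1 penalty on the first layer
    layers.LSTM(64, dropout=0.1),
    layers.Dense(1),                                         # predicted TTR in days
])
ttr_model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
                  loss="mae", metrics=["mse", "mape"])

# One such model is trained per disruption scenario S1..S4, e.g.:
# ttr_model.fit(train_w, ttr_train, validation_data=(val_w, ttr_val),
#               epochs=20, batch_size=16)
```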
## 6 Results
The generated data for the virtual supply chain are used to verify the proposed approach. This section evaluates the performance of different modules, mainly the disruption detection module, disrupted echelon identification, and time-to-recovery prediction.
### Simulation-generated data sets
After the simulation model is validated, a single data set for each scenario is generated. Then, each data set was labelled and normalised. Finally, each data set was split into train, validation, and test sets. The train and validation sets for scenario \(S_{0}\) were used to train the deep autoencoder model. Then, the test set for the \(S_{0}\) scenario was used for testing the deep autoencoder model and the OCSVM algorithm. In addition, the test sets for scenarios \(S_{i}\)\(\forall i\in\{1,2,3,4\}\) were used for testing the deep autoencoder model, evaluating OCSVM algorithm performance, testing the disrupted echelon classification model, and TTR prediction models. The disrupted echelon classification model and TTR prediction models were trained and validated using the train and validation sets for scenarios \(S_{i}\)\(\forall i\in\{1,2,3,4\}\).
### Disruption detection using deep autoencoders and one-class support vector machine algorithm
The deep autoencoder is trained using sequences of \(14\) timesteps \(\times\) \(13\) features. These sequences are generated by applying a sliding window of size \(14\). Input sequences are converted to a one-dimensional vector because the autoencoder cannot process two-dimensional data as input. The flattened vector has a length of \(182\) elements. The MAE function is used to evaluate the autoencoder model loss. The model loss represents the differences between the actual values and the estimates from the model. A learning curve compares the model loss on the training and validation data sets. The obtained learning curve demonstrates a slight difference between both data sets, which indicates a good fit of the autoencoder model, Figure 8. A significant model loss decrease is noted during the first \(100\) epochs, followed by a gradual decrease until stability at epoch \(900\).
Figure 8: The learning curve during training the autoencoder.
After training the autoencoder model, it is used to obtain the absolute reconstruction error using the test sets under normal and disrupted circumstances. The absolute reconstruction error \(e_{t}^{a}\) at time \(t\), summed over all features \(i\), is given by Equation 10.

\[e_{t}^{a}=\sum_{i=1}^{k}\lvert x_{t}^{i}-\hat{x}_{t}^{i}\rvert \tag{10}\]
where \(x_{t}^{i}\) and \(\hat{x}_{t}^{i}\) are the actual and estimated values of the test set for feature \(i\) at time \(t\), respectively. A significant difference between normal and abnormal circumstances was realised due to the low values of the first principal component under normal circumstances. The vast majority of the first principal component values under normal circumstances fall below \(-0.2\), which is much lower than those under disruption and recovery, which fall between \(-0.5\) and \(3.5\).
Then, the OCSVM algorithm is trained using the first principal component vector of the obtained reconstruction error under normal circumstances, which defines the positive class. The first principal component explains 92.39% of the overall variability in the absolute reconstruction error across input features. Finally, the first principal component vector under disrupted circumstances is used for disruption detection using the trained OCSVM disruption detection model. Table 2 shows the performance evaluation results for the disruption detection model.
There is a considerable difference between the model performance on the two data sets. However, the disruption detection model achieved good performance under disrupted circumstances. The high recall value implies that 97.6% of all normal observations are correctly identified.
Incorrectly classified observations under normal circumstances exist due to the model's sensitivity to outliers in the train data. The reconstruction error is affected by noise, representing instantaneous disruptions (operational variability). That variability produces extremely low or high values for the principal component of the reconstruction errors under normal circumstances, affecting the OCSVM algorithm performance. Consequently, model sensitivity to such variability is a matter which requires further investigation.
The false-positive percentage reflecting the percentage of false alarms prior to disruption is 2.5%. The false alarm count is 1530, roughly corresponding to approximately seven incorrect observations per replication. The average delay in disruption detection (lag) is 7.1 days. The lag distribution is shown in Figure 9. The maximum and median lag values are 23 and 4 days, respectively.
| Data set | Accuracy | Precision | Recall | F1-score |
| --- | --- | --- | --- | --- |
| Test-\(S_{0}\) | 97.5% | 100.0% | 97.5% | 98.73% |
| Test-\(S_{i}\) \(\forall i\in\{1,2,3,4\}\) | 87.28% | 84.25% | 97.6% | 90.43% |

Table 2: Performance measures after applying one-class support vector machine algorithm.
Despite the apparent good model performance, the realised lag is a matter of concern depending on the anticipated speed in detecting disruptions. The trade-off between achieving shorter delays and reducing false alarms depends on the model sensitivity, controlled by the hyperparameter \(\nu\). Although small hyperparameter values are recommended to achieve few false alarms, the disruption detection model becomes less sensitive to disruptions (anomalies). Thus, a significant increase in the maximum lag (delay) is encountered. Large \(\nu\) values can achieve an efficient disruption detection model through delay minimization. However, the model becomes too sensitive, leading to many false alarms and poor performance in terms of accuracy, precision, recall, and F1-score. Therefore, the decision-maker should find an acceptable compromise between the limits for the performance measures. A suggested solution is to maintain shorter delays. The false alarms can then be handled using the proposed LSTM neural network classification model.
The first principal component of the obtained absolute error for a single replication and different scenarios is plotted against time, Figure 10. The left y-axis represents the first principal component, while the right y-axis represents the corresponding metric/performance measure for each scenario in days. The first principal component for all disrupted scenarios is notably higher than the scenario under normal circumstances. The red dots refer to the anomalous points. Some points before the estimated recovery are normal points, affecting the model performance measures since the data are labelled based on a predefined threshold.
Figure 9: Disruption detection delay distribution.
Figure 10: The one-class support vector machine algorithm results.
### Disrupted echelon identification using long-short term memory neural network model
The LSTM model for disrupted echelon identification is trained to learn the multivariate time series pattern. The model is trained using the train and validation data sets for scenarios \(S_{i}\)\(\forall i\in\{1,2,3,4\}\). The model should predict the most likely class to which a given sequence belongs. Input data are labelled to consider the disrupted echelon, recovery phase, and normal circumstances during pre-disruption and post-recovery phases. The categorical cross-entropy function \(J\), Equation 11, is used for model evaluation during training (Geron, 2019).
\[J=-\sum_{k=1}^{N}y_{i,k}\log\left(p_{i,k}\right) \tag{11}\]
where \(N\) is the number of classes, \(y_{i,k}\in\{0,1\}\) is a binary indicator if class label k is the correct classification for observation \(i\), and \(p_{i,k}\in[0,1]\) is the predicted probability observation \(i\) is of class \(k\). Lower cross-entropy values indicate better convergence of predicted sample probability towards the actual value. The learning curve shows a significant loss decrease after a few epochs, Figure 11.
Once the LSTM neural network model for disrupted echelon identification is trained, it is tested using the test data. The model performance is evaluated using precision, recall, and F1-score. Overall, the model performs well except for identifying the recovery phase, Table 3. The precision during recovery is highly affected by the incorrectly classified observations that belong to the normal class, as depicted by the confusion matrix, Figure 12. The confusion matrix summarises the LSTM model classification results by showing the count values for each class.
Figure 11: The learning curve for long-short term memory classification model.
### Time-to-recovery prediction using long-short term memory neural network models
The TTR is predicted based on an LSTM neural network prediction model. The model is trained to predict TTR based on multivariate inputs considering a single disruption scenario at a time. Therefore, four prediction models are developed to correspond to each disruption scenario \(S_{i},i\in\{1,2,3,4\}\). Training and validation data sets are used to train the proposed models. The MAE
| Disruption class | Precision | Recall | F1-score |
| --- | --- | --- | --- |
| Normal | 98% | 97% | 98% |
| Surge in demand | 96% | 98% | 97% |
| Capacity loss at the supplier | 100% | 98% | 99% |
| Capacity loss at the manufacturer | 100% | 100% | 100% |
| Capacity loss at the distributor | 100% | 99% | 99% |
| Recovery | 95% | 96% | 96% |

Table 3: Performance measures for long-short term memory classification model.
Figure 12: Confusion matrix.
function monitors the loss for each model. The four models exhibit a rapid loss decrease after a few epochs, and stability is realised after the eighth epoch, Figure 13.
The TTR prediction models are tested using the test sets considering different disruption scenarios \(S_{i},i\in\{1,2,3,4\}\). It is evident from the performance evaluation results, Table 4, that the proposed models perform much better than those obtained by Ashraf et al. (2022) for all disruption scenarios. Reducing the number of input features has significantly improved the TTR prediction models' performance.
After the TTR prediction models are tested, the actual and predicted TTR are compared at different replications. The TTR values at a randomly selected time step, \(t\), are sketched in Figure 14. The predicted TTR values tend to be slightly lower than the actual ones. However, minor variations exist in many cases. The TTR prediction error is obtained by calculating the difference
Figure 13: Obtained learning curves for time-to-recovery prediction models.
between actual and predicted TTR values. Figure 15 shows the corresponding prediction error to the data used in Figure 14. Significant positive deviations pertain to the early disruption stages.
The progression of predicted TTR values is further examined for a single replication considering different disruption scenarios, Figure 16. A short delay in TTR prediction is observed at early disruption stages. That delay is followed by a higher TTR prediction than the actual. By the end of the disruption, the predicted TTR values tend to be close to the actual ones.
## 7 Managerial implications
Data-driven digital supply chain twins offer better end-to-end supply chain visibility and, consequently, enhanced supply chain resilience. DSCTs can monitor and determine the supply chain state as well as provide useful information and insights for decision-making support. Integrating the models proposed in this paper into a cognitive digital supply chain twin helps decision-makers make appropriate decisions based on real-time disruption detection
| Scenario | Obtained MAE | Obtained MSE | Obtained RMSE | Obtained MAPE | MAE (Ashraf et al., 2022) | MSE (Ashraf et al., 2022) | RMSE (Ashraf et al., 2022) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(S_{1}\) | 15.32 | 1658.48 | 40.72 | 0.21 | 33.08 | 4142.2 | 64.36 |
| \(S_{2}\) | 17.25 | 1796.42 | 42.38 | 0.235 | 30.52 | 2259.71 | 47.54 |
| \(S_{3}\) | 13.36 | 1193.82 | 34.55 | 0.212 | 43.68 | 5975.85 | 77.3 |
| \(S_{4}\) | 12.8 | 1291.31 | 35.93 | 0.259 | 30.58 | 1867.69 | 43.22 |

Table 4: Selected error metrics for time-to-recovery prediction models on the test sets.
Figure 14: Time-to-recovery predictions versus actual values.
data. Early disruption detection allows for early deployment of recovery policies, minimising negative impact due to disruption, leading to quicker recovery and improved supply chain resilience. In addition, the disrupted echelon identification at early disruption stages allows the decision-makers to find other alternatives that mitigate disruption impact. Furthermore, obtaining the predicted Time-To-Recovery (TTR) at early stages provides an estimate for the
Figure 16: Time-to-recovery prediction evolution along with disruption progression.
Figure 15: Time-to-recovery prediction errors.
duration of contractual agreements, if they exist, when considering different options.
## 8 Conclusion
This paper introduced a new hybrid deep learning-based approach for disruption detection within a data-driven cognitive digital supply chain twin framework. Referring to the first research question, "Is there a way to exploit the benefit of cognitive digital twins in the field of supply chain disruption management?", the presented approach mainly contributes to the field of supply chain disruption management by offering better end-to-end supply chain visibility, which enhances supply chain resilience by enabling real-time disruption detection, disrupted echelon identification, and time-to-recovery prediction. The developed modules permit the CDSCT to detect disruption occurrence through combining a deep autoencoder neural network with a one-class support vector machine classification algorithm. Then, if a disruption is detected, long-short term memory neural network models identify the disrupted echelon and predict time-to-recovery from the disruption. Referring to the second research question, "How to validate the introduced framework for incorporating cognitive digital twins into supply chain disruption management?", the presented framework is validated under several potential disruption scenarios in a virtual three-echelon supply chain. The disruption scenarios accounted for the surge in demand and unexpected failures at any echelon.
The obtained results indicated a trade-off between disruption detection model sensitivity, encountered delay until disruption detection, and false alarm count. Based on the excellent performance of the proposed model for disrupted echelon identification, that model may be suggested to replace the former approach for disruption detection based on deep autoencoder and one-class support vector machine algorithm. However, the OCSVM algorithm-based anomaly detection model is indispensable because it does not require an extensive definition of all possible disruption scenarios. Developed models for time-to-recovery prediction revealed that predicted time-to-recovery values tend to be lower than the actual ones at early disruption stages. Then, these predictions improve throughout disruption progression with slight variation.
Current research limitations include (1) the difficulty in accurately identifying the transition of the system from a disrupted state to a fully recovered one, (2) considering a single type of disruption at a time, and (3) as a first initiative, the introduced approach has only been tested on simulation-generated data set. Future work directions may include (1) investigating the concurrent occurrence of more than one disruption type, (2) developing a dynamic forecast model to forecast possible supply chain states upon disruption detection, (3) integrating the cognitive digital supply chain twin with an optimization engine to optimize operational decisions to enhance supply chain resilience,
(4) examining the performance of other machine learning algorithms, and (5) applying the introduced framework to a real-world case.
## Acknowledgement
Thanks to Prof. Amin Shoukry (Department of Computer Science Engineering, Egypt-Japan University of Science and Technology, New Borg El-Arab City, Alexandria, Egypt) for his valuable guidance.
## Declarations
### Funding
This work was supported by the Egyptian Ministry of Higher Education (Grant number 10.13039/501100004532) and the Japanese International Cooperation Agency (Grant number 10.13039/501100002385).
### Conflict of interest/Competing interests
The authors have no competing interests to declare that are relevant to the content of this article.
### Availability of data and materials
The datasets generated during and analysed during the current study are available from the corresponding author on reasonable request.
### Authors' contributions
All authors contributed to the study conception and design. The first draft of the manuscript was written by Mahmoud Ashraf and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
|
2309.15649 | Generative Speech Recognition Error Correction with Large Language
Models and Task-Activating Prompting | We explore the ability of large language models (LLMs) to act as speech
recognition post-processors that perform rescoring and error correction. Our
first focus is on instruction prompting to let LLMs perform these task without
fine-tuning, for which we evaluate different prompting schemes, both zero- and
few-shot in-context learning, and a novel task activation prompting method that
combines causal instructions and demonstration to increase its context windows.
Next, we show that rescoring only by in-context learning with frozen LLMs
achieves results that are competitive with rescoring by domain-tuned LMs, using
a pretrained first-pass recognition system and rescoring output on two
out-of-domain tasks (ATIS and WSJ). By combining prompting techniques with
fine-tuning we achieve error rates below the N-best oracle level, showcasing
the generalization power of the LLMs. | Chao-Han Huck Yang, Yile Gu, Yi-Chieh Liu, Shalini Ghosh, Ivan Bulyko, Andreas Stolcke | 2023-09-27T13:36:03Z | http://arxiv.org/abs/2309.15649v2 | Generative Speech Recognition Error Correction with Large Language Models and Task-Activating Prompting
###### Abstract
We explore the ability of large language models (LLMs) to act as speech recognition post-processors that perform rescoring and error correction. Our first focus is on instruction prompting to let LLMs perform these tasks without fine-tuning, for which we evaluate different prompting schemes, both zero- and few-shot in-context learning, and a novel "task activation" prompting method that combines causal instructions and demonstrations to increase its context window. Next, we show that rescoring only by in-context learning with frozen LLMs achieves results that are competitive with rescoring by domain-tuned LMs, using a pretrained first-pass recognition system and rescoring output on two out-of-domain tasks (ATIS and WSJ). By combining prompting techniques with fine-tuning we achieve error rates below the N-best oracle level, showcasing the generalization power of the LLMs.
Chao-Han Huck Yang, Yile Gu, Yi-Chieh Liu, Shalini Ghosh, Ivan Bulyko, Andreas Stolcke Amazon, USA
**Index Terms**: large language model, N-best rescoring, instruction prompting, few-shot learning, in-context learning.
## 1 Introduction
Large-scale language models (LLMs) have exhibited outstanding performance on downstream tasks by conditioning on input information, including task descriptions (e.g., performing mathematical calculations) or a limited number of input-output pairs obtained from training text (e.g., goal-oriented demonstrations). This new capability of task-specific inference from contextual information has been referred to as "_in-context learning_" in Brown _et al._[1]. More specifically, the ability to learn in-context has been reported in previous studies [2] of pretrained LLMs with over \(100\)B parameters trained with an unsupervised auto-regressive objective. Although recent advances in in-context learning have consistently demonstrated excellent performance on a wide range of tasks [3], there have been limited studies on the interaction or benefits of in-context learning on automatic speech recognition (ASR) tasks. As an example, contextual information [4] has been shown to play a vital role on ASR applications in complex domains, such as recognizing utterances referring to trending news.
One open question in the development of robust ASR applications is _how_ recent in-context learning frameworks can utilize their zero-shot learning capability to enhance ASR systems. Meanwhile, scaling ASR model sizes up to \(10\)B parameters [5] by itself has not proven adequate for achieving high performance on challenging (e.g., conversational) speech tasks from domain-specific data. The challenge to obtain better generalization of neural ASR models has motivated proposals to incorporate external knowledge from textual data [6]. For instance, one way to improve the RNN-transducer is to incorporate an external LM [7] for domain-aware adaptation in streaming-based applications. However, the external LM size is often limited to a range of \(10\)M to \(100\)M for on-device deployment. Given these limitations, cloud-based second-pass rescoring with LLMs may be a promising approach that leverages frozen pretrained models and leverages in-context learning.
Toward this end, in this work we explore novel ASR post-processing pipelines that utilize frozen LLMs by exploiting in-context learning. We consider two ASR second-pass pipelines, as shown in Figure 1:
**Pipeline 1:** a standard rescoring system takes in N-best output from a first ASR pass, and is trained to minimize the word error rate (MWER) by reranking the hypotheses. As illustrated in Figure 1(a), an LLM in-context learning process is inserted into the pipeline to post-process first-pass hypotheses to apply error correction.
**Pipeline 2:** a new task-activating prompting method is used to initialize the frozen LLM with task-oriented instructions.
Figure 1: Two ASR post-processing frameworks using LLMs: (a) correct errors (e.g., grammar [8]) before applying a standard rescoring model, or (b) perform zero/few-shot rescoring; with optional task-activating prompting (Section 3.2).
A list of N-best ASR hypotheses is formatted as input to the LLM, thus allowing "in-context learning initialization" and/or "in-domain fine-tuning" (e.g., using adapters for parameter-efficient model update) that results in an improved speech transcription.
In the remaining sections we present a first exploration of this novel way to utilize LLMs for the ASR task, demonstrate its surprising effectiveness, and compare results with different in-context learning schemes, as well as those of standard rescoring methods.
## 2 Related Work
**LLM-based post-processing to improve hypotheses.** Error correction post-processing [9, 10] aims to fix grammar or deletion errors in output sentences and has been shown to improve the first-pass hypotheses generated from end-to-end ASR. A key characteristic of correction techniques is their reliance on pretrained LLMs, which benefit from rich contextual information. Liao _et al._[9] propose ASR post-processing for readability, by extracting semantic expressions and generating readable text from ASR transcriptions. N-best T5 [11] used the T5 encoder-decoder architecture for rescoring with discriminative training.
**Zero-shot learning for acoustic and language modeling.** Prior work has demonstrated that language modeling can generalize to zero-shot multi-task settings without exemplars [3, 12, 13]. However, zero-shot and few-shot language modeling techniques often rely on fine-tuning, which requires redeployment of pretrained models.
**In-context learning based on information prompting.** In-context learning (ICL) [1, 14] induces a single model to perform domain-agnostic inference without fine-tuning by providing a single prompt or a few prompts, thus addressing the aforementioned limitations. A prior study [2] has shown that ground-truth demonstrations have a smaller effect than expected, and that significant zero-shot performance improvements are possible under the ICL framework. This implies that additional information can be extracted from the frozen pretrained LLM itself, if an appropriate prompting strategy is selected. However, ICL has its own shortcomings for reasoning tasks. Chain-of-thought (CoT) prompting [15] decomposes reasoning tasks by providing models with a sequence of questions or prompts that gradually guide the model to make predictions for a target task. While CoT prompting is usually employed in a few-shot setup, LLMs have been shown to be zero-shot reasoners given a single, specific prompt [16]. In this work, we apply the above ICL techniques to ASR rescoring for the first time and empirically evaluate their performance individually against the baseline.
## 3 Method
We now review some recent advances in in-context learning techniques [15, 1, 16] and describe how they can be incorporated into second-pass rescoring applications.
### In-Context Learning Background and Techniques
In-context learning [1] can emerge from modeling long-range coherence in the pretraining data. Based on a recent theoretical justification via Bayesian inference [17], an LLM implicitly learns to infer a latent concept during its pretraining stage. Empirically, in-context learning occurs if the LM can still infer the shared concept across examples (e.g., task instructions or prompts) to perform a target task. To model the in-context learning process, we can formulate its distribution over tokens \(o\) within the vocabulary \(O\) by sampling a latent _confounding variable_[18]\(\theta\) from its population \(\Theta\).
The prediction over the pretraining distribution could be inferred by marginalizing over the confounding variable \(\theta\):
\[p_{\text{prompt}}=p(o_{1},...o_{T})=\int_{\theta\in\Theta}p(o_{1},...o_{T}| \theta)p(\theta)\,d\theta. \tag{1}\]
Under the in-context learning framework, the prediction \(y_{\text{test}}\) is inferred from the pretraining distribution conditioned on a prompt variable \(\theta^{*}\), the test-time sample (the question we would like answered) \(x_{\text{test}}\), and its in-context predictor \(p_{\text{prompt}}(y|x)\):
\[y_{\text{test}}\sim p_{\text{prompt}}(y|x_{\text{test}},\theta^{*}). \tag{2}\]
For instance, a simple prompt to empower in-context learning is to directly provide a _"task-oriented question"_ to the pretrained LLM, as shown in Figure 2(a). We further illustrate more in-context learning setups in the following subsections.
#### 3.1.1 Zero-shot domain-hint prompting
In the zero-shot setting, given a prompt template function \(r(\cdot)\) and \(\theta^{*}\) as the domain-specific confounding variable (e.g., airline travel), a pretrained LLM models the conditional probability of the original input \(x\) and target \(y\), even if it was never trained on them, through their template functions \(r_{x}(x)\) and \(r_{y}(y)\).
Figure 2: Four LLM in-context learning uses for ASR 2nd pass
\[r_{y}(y_{\text{test}})\sim p_{\text{prompt}}(r_{y}(y)|r_{x}(x_{\text{test}}), \theta^{*}). \tag{3}\]
In this work, we consider two general acoustic domains, using template functions with the hard-coded domain hints "_airline information_" or "_financial market_", as shown in Figure 2(b).
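For illustration only, the Python sketch below shows one way such a template function \(r_{x}(\cdot)\) could be realized for N-best rescoring; the prompt wording and the domain-hint strings are our own assumptions rather than the exact templates used in the experiments.

```python
def build_domain_hint_prompt(nbest, domain_hint):
    """Format an N-best list into a zero-shot, domain-hinted prompt r_x(x).

    `nbest` is a list of hypothesis strings from the first ASR pass, and
    `domain_hint` plays the role of the confounding variable theta*
    (e.g., "airline information" or "financial market").
    """
    numbered = "\n".join(f"{i + 1}. {hyp}" for i, hyp in enumerate(nbest))
    return (
        f"The following are {len(nbest)}-best speech recognition hypotheses "
        f"about {domain_hint}:\n{numbered}\n"
        "Report the most likely true transcription."
    )


if __name__ == "__main__":
    hyps = ["recognize speech with artificial intelligence",
            "recognize peach with artificial intelligence"]
    print(build_domain_hint_prompt(hyps, "airline information"))
```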
#### 3.1.2 Zero-shot reasoning
Zero-shot reasoning [16] employs chain-of-thought prompting [15] in a zero-shot setting with only two prompts: (i) reasoning extraction and (ii) answer extraction. Based on [16], the reasoning extraction step uses a fixed and canonical prompt: _Let's think step by step_, as shown in Figure 2(c).
In our experiments, we noticed that the reasoning extraction prompt is essential to boost the performance of zero-shot LLM rescoring. Under this zero-shot self-reasoning setup, the LLM output will first explain the task it is working on, then produce the actual task output. In the case of zero-shot rescoring, the LLM will first define an ASR-LM rescoring task and then provide LM scores for each N-best hypothesis.
#### 3.1.3 Few-shot and one-shot in-context learning
A standard few-shot in-context learning process uses pairs of demonstrations [1] "questions and targeted tasks," retrieved from training data, to inform frozen LLMs for performing the target output, as illustrated in Figure 2(d). One-shot in-context learning takes place by using a single demonstration as an input prompt for the frozen LLMs. Note that demonstrations (from an unseen training set) are distinct from test examples, and that the unsupervised-trained LLM has been reported to have a memory bottleneck based on term frequencies [19], which avoids potential data leakage issues for its few-shot learning evaluation reported in previous work [2, 1, 19].
#### 3.1.4 N-best hypotheses to transcription fine-tuning
We introduce a hypotheses-to-transcription (H2T) mapping loss function: \(\mathcal{L}_{\text{H2T}}=\sum_{i=1}^{N}\big\{-\log P(y^{*}|x_{i},\mathbf{\Theta})+\lambda\cdot\text{MSE}\big(s_{i},P(y^{*}|x_{i},\mathbf{\Theta})\big)\big\}\), where \(P(y^{*}|x_{i},\mathbf{\Theta})\) represents the probability of the true transcription (\(y^{*}\)) given the _i_-th hypothesis (\(x_{i}\)) and the model parameters (\(\mathbf{\Theta}\)). To integrate acoustic information, the mean squared error (MSE) regularization term penalizes the model when there is a significant discrepancy between the predicted probabilities and the first-pass posterior scores \(s_{i}\), with a coefficient \(\lambda=0.01\).
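A minimal PyTorch sketch of this loss for one utterance is given below; treating the MSE term as an additive regularizer on the negative log-likelihood, and comparing the exponentiated model probabilities against the first-pass scores \(s_{i}\), reflect our reading of the formula rather than a reference implementation.

```python
import torch
import torch.nn.functional as F

def h2t_loss(log_probs, first_pass_scores, lam=0.01):
    """Hypotheses-to-transcription (H2T) loss for one utterance.

    log_probs:         shape (N,), log P(y* | x_i, Theta) for each of the
                       N first-pass hypotheses x_i.
    first_pass_scores: shape (N,), posterior scores s_i from the first pass.
    lam:               weight of the MSE regularizer (0.01 in Section 3.1.4).
    """
    nll = -log_probs.sum()                                # sum_i -log P(y* | x_i)
    mse = F.mse_loss(log_probs.exp(), first_pass_scores,  # penalize disagreement with
                     reduction="sum")                     # the first-pass scores
    return nll + lam * mse

# Toy usage with three hypotheses:
# h2t_loss(torch.log(torch.tensor([0.6, 0.3, 0.1])), torch.tensor([0.5, 0.35, 0.15]))
```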
Furthermore, we also consider parameter-efficient fine-tuning methods as in [6], which only update a small subset \(\theta\subset\mathbf{\Theta}\) of total trainable parameters to avoid potential overfitting that would hurt the generalization of the LLM [20, 21].
### Task-activating Prompting (TAP) Framework
We now introduce a new in-context learning strategy that triggers the necessary sequential concepts for the ASR rescoring task, by utilizing multiple-round contextual sequences [22]. This technique is referred to as "task-activating prompting" (TAP). In this configuration, the LLM is given leading questions to clarify the task it needs to perform. Following this, the model is instructed to provide an example and, ultimately, it is presented with the top-N hypotheses from which to generate the actual output for the task. In our experiments, we noted that LLMs are capable of producing lists of the top N predictions, made up of utterances with analogous pronunciations. This demonstrates that LLMs assimilate acoustic (e.g., lattice-level) information during their pretraining phase. We illustrate the queries and responses used for task-activating prompting in Figure 3.
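The sketch below shows one way the multi-round TAP dialogue of Figure 3 could be assembled programmatically; the role/content message format and the exact wording are illustrative assumptions, and in a real exchange the model's replies to the leading questions would be interleaved as assistant turns before the final query.

```python
def build_tap_messages(example_nbest, example_transcript, test_nbest, domain):
    """Assemble the leading questions and final query for task-activating prompting."""
    def numbered(hyps):
        return "\n".join(f"{i + 1}. {h}" for i, h in enumerate(hyps))

    return [
        {"role": "user", "content": "Do you know speech recognition?"},
        {"role": "user", "content": "Do you know language model for speech recognition?"},
        {"role": "user", "content": "Could you give a possible example of language "
                                    "model rescoring with some hypotheses?"},
        {"role": "user", "content":
            f"Nice job, I will provide some examples as a demonstration from {domain}. "
            f"The {len(example_nbest)}-best hypothesis list is:\n{numbered(example_nbest)}\n"
            f"I would expect your output to be: {example_transcript}\n"
            f"Following this example, could you report the true transcription from the "
            f"following {len(test_nbest)}-best hypotheses?\n{numbered(test_nbest)}"},
    ]
```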
We observed that the responses from InstructGPT vary slightly as a function of the random seed. In our experiments, we utilize the API key from OpenAI, and the prompts are fixed, except for the final query that contains the sample and test N-best lists. Recent work on ICL [23] focuses on the selection strategy for in-domain demonstration samples, which clearly could affect inference results. However, we leave demonstration selection for the N-best task to future work, and in our _few-shot_ learning experiments manually select longer demonstration utterances, following [2].
## 4 Experiments and Results
We conduct our experiments by evaluating the final word error rate (WER) of a second-pass rescoring ASR system. First, we employ pretrained ASR models to decode audio from a target domain and collect the top-10 hypotheses. Then, we compare different in-context learning-based rescoring pipelines, including \(\mathcal{P}_{1}\), the LLM-based error correction to enhance a standard ASR rescoring process, and \(\mathcal{P}_{2}\), a prompting scheme that enables the LLM to directly perform ASR rescoring, as shown in Figure 1.
### Pretrained ASR and Rescoring Model Training
We select a conformer-based RNN-transducer [24, 25] (760M parameters) pretrained with \(960\) hours of Librispeech [26], 10k hours of the GigaSpeech [27], 24k hours of English data in VoxPopuli [28], and wav2vec2-based Libri-Light [29, 30] pretraining as the first-pass model to generate the hypothesis list. We follow the conformer architecture presented in [25]. The pretrained model achieves a WER of \(2.45\)% (test-clean) and \(5.45\)% (test-other) on LibriSpeech without external LM.
For \(\mathcal{P}_{1}\), to obtain a standard neural LM for rescoring, we employ an ASR-LM training procedure similar to RescoreBERT [31], which involves a two-stage process. The first stage is adaptation using a masked language model (MLM) [32] loss, and the second stage is minimum word error rate (MWER) training.
**Q**: Do you know speech recognition?_
_R_**: Yes, I am familiar with speech recognition. Speech recognition, also known as automatic speech recognition (ASR) or speech-to-text, is the process of converting spoken language into text. This technology involves using algorithms and machine learning models to analyze and transcribe the acoustic features of spoken words and phrases. Speech recognition has many applications, including voice-controlled assistants, automated phone systems, and transcription services._
_Q_**: Do you know language model for speech recognition?_
_R_**: Yes, I can provide you with some information about language models used in speech recognition. Language models are an essential component of Automatic Speech Recognition (ASR) systems. They help to predict the likelihood of a sequence of words, given the context. This helps the ASR system to better understand and transcribe spoken language into text. This separate model is typically more complex and powerful than the initial language model used for the transcription, and it is used to re-score the transcription based on the probability of the words occurring in the given context. The rescoring process involves taking the output of the initial language model, which is usually based on statistical methods such as Hidden Markov Models, and then applying a more advanced language model, such as a neural network-based language model, to generate a more accurate transcription. This is accomplished by re-ranking the possible transcriptions based on the probabilities assigned by the more advanced language model. Language model rescoring has been shown to significantly improve the accuracy of speech recognition systems, particularly in noisy or challenging environments where the initial language model may not perform well._
_Q_**: Could you give a possible example of language model rescoring with some hypotheses?_
_R_**: _Sure, here is an example of language model rescoring for ASR with 5-best hypotheses:_
_1. recognize speech with artificial intelligence._
_2. recognized speech with artificial intelligence._
_3. recognize speech with artificial intelligent._
_4. recon eyes speech with artificial intelligence._
_5. recognize peach with artificial intelligence._
_After rescoring, the model assigns a higher probability to the first hypothesis, which is the correct transcription of the input. The other hypotheses receive lower probabilities, reflecting their decreased likelihood given the improved language model. In this case, the top-ranked hypothesis after rescoring would be chosen as the final output: recognize speech with artificial intelligence_
_Q_**: Nice job, I will provide some examples as a demonstration from [target domain]. The 10-best hypothesis list is: [hypothesis list from training set], and I would expect your output to be: [corresponding transcription]. Following this example, could you report the true transcription from the following 10-best hypotheses? [hypotheses list for inference]_
### Pretrained LLM Configurations
**GPT-2** (\(1.5\)B): From the Generative Pretrained Transformer (GPT) family of causal models, we selected this one [1, 37] as our basic LLM for the in-context prompting setup. This version of GPT-2 is 100x smaller than very large teacher models, such as BLOOM [38], making it much more suitable for real-world deployment. GPT-2 is trained primarily using Wikipedia [39] and Common Crawl [40].
Footnote 1: The pretrained model is publicly accessible under MIT License [https://github.com/openai/gpt-2](https://github.com/openai/gpt-2)
**OpenLLaMA** (13B): an open collection of transformer-based, decoder-only, causal language models ranging from 1B to 13B parameters [41]. It is trained exclusively on the RedPajama [42] dataset; we have confirmed that our Linguistic Data Consortium eval sets are not included.
**BLOOM** (\(176\)B): the first open-source LLM trained on the public supercomputer provided by the French government [38], BLOOM is available for reproducible studies of LLMs over \(100\)B. The model is pretrained using a large public collection of \(498\) HuggingFace datasets [43], comprising \(1.61\) TB of text spanning \(46\) natural languages and \(13\) programming languages (our evaluation datasets are not included.)
**InstructGPT** (\(175\)B): an LLM created by training a GPT-3 [1] model through reinforcement learning from human feedback (RLHF) [44]. InstructGPT demonstrates improved zero-shot learning performance, benefiting from human knowledge transferred through RLHF, a process similar to student-teacher learning. InstructGPT is trained using human feedback without using open-domain data for evaluation.2
Footnote 2: InstructGPT is an earlier version of ChatGPT, We did **not** use ChatGPT in our experiments due to its frequent revisions and unclear technical documentation.
### Target-Domain Datasets
We use the pretrained ASR model introduced in Section 4.1 to decode two public datasets, both in \(\mathcal{P}_{1}\) (post-processing error correction) and \(\mathcal{P}_{2}\) (pretrained LLM-based rescoring).
**Airline Travel Information System (ATIS)**[45] contains \(4978\) training and \(893\) test utterances. ATIS comprises spoken queries for air travel information, such as flight times and availability.
**Wall Street Journal (WSJ)**[46] consists of transcribed audio recordings of read news articles from the Wall Street Journal, covering a wide range of topics and featuring a diverse set of speakers. We adapt the pretrained conformer model on the development set of _train-si284_ and test on _93dev_.
Figure 3: Queries (Q) and responses (R) for N-best evaluation and correction by task-activating prompting (TAP) of LLMs
### \(\mathcal{P}\)ipeline 1 Results
As shown in Table 1, we first use a pretrained LLM for error correction, using the setup in [9] to improve hypothesis quality as measured by oracle (minimum achievable) N-best error rates.
Figure 4 shows how error correction by LLMs complements existing rescoring with adapted LMs, such as RescoreBERT [31]. We observe that the existing rescoring pipeline [31] reduces WER from \(11.3\%\) to \(8.7\%\) compared to its fine-tuning-only baseline. Furthermore, \(\mathcal{P}\)ipeline 1 employed the frozen pretrained language models to achieve an additional performance boost.
### \(\mathcal{P}\)ipeline 2 Results
**Case 1: Zero-shot learning.** Table 3 shows results for rescoring with in-context learning using different LLMs, as well as various baselines. Note that for in-context rescoring, we extract LM scores from the model output responses, which differs from the standard approach of using the softmax outputs from the rescoring LM, for which we also report results. The best \(\mathcal{P}_{2}\) rescoring setup is the one using InstructGPT (Table 3, last row), achieving \(19.7\)% relative WER reduction compared to rescoring with a fine-tuned GPT-2. Note that the frozen GPT-2 failed to give improvements over a 4-gram baseline, showing that a certain model size is required for generative error correction. For LLMs with over \(100\) billion parameters, the use of prompting information showed better results compared to using the softmax scores directly.
Next, we tested some popular prompting variants for in-context learning to possibly improve the performance of \(\mathcal{P}_{2}\). As shown in Table 2, \(\mathcal{P}_{2}\) with _"step-by-step"_ prompting (known as zero-shot reasoning in [16]) achieved the best results for both LLMs (fourth row). It outperforms the one-shot learning variant (prompting with one input-output example; fifth row) by a \(1.8\%\) relative WER difference. It is worth noting that the standard rescoring training \(\mathcal{P}_{1}\) (first row) still outperforms zero-shot \(\mathcal{P}_{2}\) by \(14.6\)% relative.
**Case 2: Few-shot learning.** We gain some insight into the effects of in-context learning by considering few-shot learning in the form of conversational prompting. We feed InstructGPT examples drawn from the training portions of the two datasets, and report the results on the unseen test sets. As shown in Figure 5, we see that frozen InstructGPT improved its rescoring performance as the number of training samples is increased from \(1\) to \(12\). It is better to let the model history accumulate (green plot) than to reset it after each utterance (red plot), thereby compounding the effect of demonstrations.
**Case 3: In-domain fine-tuning.** We also obtained in-domain fine-tuning results, where we use the training portions of all the speech datasets to fine-tune the LLMs and then evaluate performance on the test sets; note that InstructGPT, being an API-only model, could not be fine-tuned. For prompting we use TAP (Section 3.2); however, we observed that after fine-tuning the exact method of prompting makes very little difference. As shown in Table 4, fine-tuning with low-rank adapters (LoRA) outperforms full fine-tuning in the generative error correction case, as do residual adapters. One likely reason is that adapters avoid modifying the parameters of the pretrained model (by inserting a neural module with a small number of additional trainable parameters that approximate the full parameter updates), allowing for efficient learning of the task without affecting the pretrained parameters of the LLM.
\begin{table}
\begin{tabular}{l c c|c|c} \hline \hline \(\mathcal{P}_{1}\): correction setup & WSJ & ATIS & WER\({}_{avg}\) & M-Size \\ \hline (a) \(N\)-best & 9.78 & 6.43 & 8.11 & - \\ \hline (a) + corrected by GPT-2 & 9.91 & 6.11 & 8.01 & 1.5B \\ (a) + corrected by OpenLLaMA & 9.95 & 5.73 & 7.43 & 13B \\ (a) + corrected by BLOOM & 9.21 & 5.64 & 7.42 & 176B \\ (a) + corrected by InstructGPT & **8.41** & **5.43** & **6.92** & 175B \\ \hline \hline \end{tabular}
\end{table}
Table 1: Oracle WERs for original and error-corrected N-best output, using \(\mathcal{P}_{1}\) processing as shown in Figure 1(a). The oracle error rates show the improvement in hypothesis quality as a result of post-processing using different sizes of LLMs.
Figure 4: \(\mathcal{P}_{1}\) ASR rescoring (RS) training using hypotheses corrected by LLM. The dashed red line marks the \(N\)-best WER. The WER gradually decreases in the three stages of rescoring using our \(\mathcal{P}_{1}\) processing: Stage 0, \(N\)-best hypothesis with LLM correction (\(N\)C); Stage 1, fine-tuned RescoreBERT [31] using the masked language modeling (MLM) loss; and Stage 2, MWER training.
Figure 5: WER results on ATIS and WSJ with few-shot learning based on InstructGPT, for increasing numbers of demonstration samples. “One-by-one prompting” resets the model history after each utterance, “in-context prompting” lets the history (and thus the examples provided) accumulate.
_LoRA_-based generative error correction introduces trainable low-rank decomposition matrices into the pretrained LLM layers, enabling the model to adapt to new data while keeping the original LLM weights fixed to retain the pretrained knowledge. Specifically, LoRA reparameterizes each model layer expressed as a matrix multiplication by inserting low-rank decomposition matrices. As a result, the representations generated by the LLM are not distorted due to task-specific tuning, while the adapter module acquires the ability to perform the error correction task.
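For concreteness, a minimal PyTorch sketch of this reparameterization of a single linear layer is shown below; the rank and scaling values are illustrative defaults, not the configuration used in our experiments.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer augmented with a trainable low-rank update.

    The effective weight is W + (alpha / r) * B @ A, so only the rank-r factors
    A and B are updated during error-correction fine-tuning while the pretrained
    weight W stays fixed.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # retain the pretrained knowledge
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_a.T) @ self.lora_b.T
```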
Compared to previous state-of-the-art results with the Universal Speech Model (USM) [47] with text-injection training and over 2B parameters, our best fine-tuning results improve upon their WSJ result (WER of 3.2%). This is noteworthy considering that our generative error correction method is based on a smaller underlying conformer-RNN-T ASR system.
Another parameter-efficient form of fine-tuning is prefix tuning [50], where a continuous prompt prefix is inserted into the input and tuned to optimize task performance of the LLM. However, this method gave worse results than the full or adapter-based fine-tuning methods for the larger LLMs.
## 5 Conclusions
We have explored how in-context learning can be applied to pretrained large language models for improving first-pass ASR N-best output _without fine-tuning_. For this task setting, we introduce two post-processing pipelines utilizing in-context learning. The first one uses a pretrained LLM for error correction prior to standard rescoring with a fine-tuned LM. The second pipeline uses in-context learning by prompting, to instruct the frozen pretrained LLM to perform the rescoring task by itself. The latter method shows substantial gains over the first-pass ASR output and can be further enhanced with chain-of-thought and example prompting, as well as a new prompting scheme we call task-activating prompting. The best methods show 31% (on ATIS) to 38% (on WSJ) WER reduction over first-pass ASR, using a frozen InstructGPT, and better than with a fine-tuned GPT-2 LM. Substantial additional gains are achieved by fine-tuning the LLM for the ASR output-correction task. Post-processing with OpenLLaMA and LoRA fine-tuning achieves 86% and 80% WER reduction on ATIS and WSJ, respectively. These results are **below** the N-best oracle error rate, showing the LLM's ability to utilize prelearned knowledge to correct ASR output errors. Possible future work can look at how to integrate extra acoustic representations into pretrained LLMs for further enhancing generative ASR error correction.
\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline & \multicolumn{3}{c|}{WSJ} & \multicolumn{3}{c}{ATIS} \\ Method & GPT-2 & OpenLLaMA & BLOOM & GPT-2 & OpenLLaMA & BLOOM \\ \hline FT + ranking [11] & 9.93 & 8.09 & 8.42 & 6.34 & 3.71 & 3.75 \\ \hline Full fine-tune & 9.94 & 7.71 & 6.91 & 5.98 & 2.23 & 2.49 \\ Res. adapter [48] & **7.24** & 5.94 & 4.57 & **4.45** & 2.48 & 2.12 \\ LoRA [49] & 7.52 & **2.11** & **2.81** & 4.57 & **1.69** & **1.97** \\ Prefix tuning [50] & 9.32 & 6.99 & 7.43 & 5.32 & 2.63 & 2.74 \\ \hline \hline \end{tabular}
\end{table}
Table 4: WERs on ATIS and WSJ, using fine-tuning (FT) and parameter-efficient adaptation to enhance the \(\mathcal{P}_{2\text{-}\text{TAP}}\) pipeline
\begin{table}
\begin{tabular}{l c c|c c} \hline \hline & \multicolumn{2}{c|}{WSJ} & \multicolumn{2}{c}{ATIS} \\ \hline In-context learning variant & InstructGPT & BLOOM & InstructGPT & BLOOM \\ \hline \(\mathcal{P}_{1}\): LLM-corrected \(N\)-best w/ RescoreBERT [31] & 10.13 & 10.46 & 7.13 & 8.46 \\ \hline \(\mathcal{P}_{2}\): (c) Zero-shot scoring & 10.43 & 11.23 & 7.95 & 8.45 \\ \(\mathcal{P}_{2}\): (c) + zero-shot reasoning [16] & 10.20 & 11.88 & 7.77 & 8.53 \\ \(\mathcal{P}_{2}\): (c) + domain-hint prompting [2] & 10.98 & 11.45 & 7.59 & 8.49 \\ \hline \(\mathcal{P}_{2}\): (d) Scoring with one example-pair & 9.42 & 9.45 & 6.34 & 7.30 \\ \(\mathcal{P}_{2}\): (d) + zero-shot reasoning [16] & 9.87 & 11.46 & 7.25 & 8.64 \\ \(\mathcal{P}_{2}\): (d) + domain-hint prompting [2] & 9.70 & 10.99 & 6.19 & 7.12 \\ \(\mathcal{P}_{2}\): (d) + task-activating prompting (TAP) & **8.84** & **8.99** & **5.79** & **6.72** \\ \hline \hline \end{tabular}
\end{table}
Table 2: WERs on ATIS and WSJ using prompting variants to enhance the \(\mathcal{P}_{2}\) in-context learning pipeline. We report the results of InstructGPT and BLOOM as LLMs over 100B; GPT-2 and OpenLLaMA do not perform consistently in this setting.
\begin{table}
\begin{tabular}{l c c} \hline \hline \(\mathcal{P}_{2}\): zero-shot rescoring setup & WSJ & ATIS \\ \hline (a) Oracle & 9.78 & 6.43 \\ (b) First pass & 11.87 & 8.82 \\ \hline (b) + \(4\)-gram LM & 11.21 & 8.57 \\ \hline (b) + frozen GPT-2 & 29.56 & 27.51 \\ (b) + frozen GPT-2 _w/ TAP_ & 27.37 & 27.59 \\ \hline (b) + frozen OpenLLaMA & 13.32 & 9.27 \\ (b) + frozen OpenLLaMA _w/ TAP_ & 11.53 & 8.61 \\ \hline (b) + frozen BLOOM & 12.59 & 9.21 \\ (b) + frozen BLOOM _w/ TAP_ & **10.82** & **8.42** \\ \hline (b) + frozen InstructGPT & 9.97 & 7.15 \\ (b) + frozen InstructGPT _w/ TAP_ & **8.72** & **6.39** \\ \hline \hline \end{tabular}
\end{table}
Table 3: WERs with \(\mathcal{P}_{2}\) pipeline, using LLM in-context learning (ICL) for rescoring by zero-shot prompts illustrated in Figure 2(a). Where indicated we use task-activating prompting (TAP) to condition the LLM. |
2310.00441 | Exploiting Human Color Discrimination for Memory- and Energy-Efficient
Image Encoding in Virtual Reality | Virtual Reality (VR) has the potential of becoming the next ubiquitous
computing platform. Continued progress in the burgeoning field of VR depends
critically on an efficient computing substrate. In particular, DRAM access
energy is known to contribute to a significant portion of system energy.
Today's framebuffer compression system alleviates the DRAM traffic by using a
numerically lossless compression algorithm. Being numerically lossless,
however, is unnecessary to preserve perceptual quality for humans. This paper
proposes a perceptually lossless, but numerically lossy, system to compress
DRAM traffic. Our idea builds on top of long-established psychophysical studies
that show that humans cannot discriminate colors that are close to each other.
The discrimination ability becomes even weaker (i.e., more colors are
perceptually indistinguishable) in our peripheral vision. Leveraging the color
discrimination (in)ability, we propose an algorithm that adjusts pixel colors
to minimize the bit encoding cost without introducing visible artifacts. The
algorithm is coupled with lightweight architectural support that, in real-time,
reduces the DRAM traffic by 66.9\% and outperforms existing framebuffer
compression mechanisms by up to 20.4\%. Psychophysical studies on human
participants show that our system introduce little to no perceptual fidelity
degradation. | Nisarg Ujjainkar, Ethan Shahan, Kenneth Chen, Budmonde Duinkharjav, Qi Sun, Yuhao Zhu | 2023-09-30T17:28:59Z | http://arxiv.org/abs/2310.00441v1 | Exploiting Human Color Discrimination for Memory- and Energy-Efficient Image Encoding in Virtual Reality
###### Abstract.
Virtual Reality (VR) has the potential of becoming the next ubiquitous computing platform. Continued progress in the burgeoning field of VR depends critically on an efficient computing substrate. In particular, DRAM access energy is known to contribute to a significant portion of system energy. Today's framebuffer compression system alleviates the DRAM traffic by using a numerically lossless compression algorithm. Being numerically lossless, however, is unnecessary to preserve perceptual quality for humans. This paper proposes a perceptually lossless, but numerically lossy, system to compress DRAM traffic. Our idea builds on top of long-established psychophysical studies that show that humans cannot discriminate colors that are close to each other. The discrimination ability becomes even weaker (i.e., more colors are perceptually indistinguishable) in our peripheral vision. Leveraging the color discrimination (in)ability, we propose an algorithm that adjusts pixel colors to minimize the bit encoding cost without introducing visible artifacts. The algorithm is coupled with lightweight architectural support that, in real-time, reduces the DRAM traffic by 66.9% and outperforms existing framebuffer compression mechanisms by up to 20.4%. Psychophysical studies on human participants show that our system introduces little to no perceptual fidelity degradation.
## 1. Introduction
Virtual Reality (VR) has the potential of becoming the next ubiquitous computing platform, after PCs and smartphones, revolutionizing a wide variety of domains such as healthcare (Cheng et al., 2017), education (Cheng et al., 2017), remote communication (Zhu et al., 2018; Zhu et al., 2018), professional training (Zhu et al., 2018), and industrial design (Zhu et al., 2018).
Continued progress in the burgeoning field of VR depends critically on an efficient computing substrate, driven by the ever-growing requirement of immersive user experience and the miniaturization of device form factors. DRAM communication energy is known to contribute significantly to the system energy consumption. Recent studies show that DRAM energy alone can consume upward of 30% of the total system energy consumption during VR video rendering (Zhu et al., 2018; Zhu et al., 2018). The DRAM bottleneck will only become worse in the future with users' demand for higher resolutions and frame rates.
An effective approach to reduce DRAM traffic is framebuffer compression, which is universally implemented in modern mobile SoCs for compressing any traffic in and out of the DRAM. A classic example is the Arm Frame Buffer Compression (AFBC) technology, which is now in almost all of Arm's GPU, Video Codec, and Display Controller IPs (Cheng et al., 2017).
**Idea.** Today's framebuffer compression algorithm is numerically lossless. Being numerically lossless is, however, unnecessary to preserve perceptual fidelity: more compression opportunities arise when we turn our attention to _perceptual lossless_. Long-established psychophysical studies show that humans cannot discriminate colors that are close to each other (Zhu et al., 2018; Zhu et al., 2018). Informally, this means that many colors, while differing in RGB values, are perceptually indistinguishable and thus can be encoded together -- a previously under-exploited opportunity for real-time image encoding.
Critically, the discrimination ability becomes even weaker (i.e., more colors are indistinguishable) in our _peripheral_ vision as objects move away from fixation (Zhu et al., 2018; Zhu et al., 2018; Zhu et al., 2018). The
eccentricity-dependent weakening of color discrimination provides further opportunities for DRAM traffic compression: VR displays, to provide immersive experiences, have a wide Field-of-View (FoV) of about \(100^{\circ}\); above 90% of a frame's pixels are in the peripheral vision (outside \(20^{\circ}\)) (Kang et al., 2019; Wang et al., 2020).
**Design.** Leveraging the unique color discrimination (in)ability of human visual system, we propose a new image compression algorithm for immersive VR systems. We precisely formulate the color perception-aware encoding as a constraint optimization problem. The formulation is non-convex and requires iterative solvers that are not amenable to real-time execution. Leveraging empirical observations of human color discrimination abilities, we introduce a set of principled relaxations, which transform the compression problem into a convex optimization with an analytical solution.
The analytical solution, while avoiding iterative solvers, is still compute intensive and slow to execute in real-time. Implemented as a GPU shader executing on the Adreno 650 GPU in Oculus Quest 2, a widely used mobile VR headset, the compression algorithm runs in a mere 2 FPS. We propose lightweight hardware extensions for our encoding and decoding algorithms. The new hardware exploits the inherent task-level and pipeline-level parallelisms in the algorithms and can be readily combined with existing Base-Delta (BD) encoding without changing the decoding hardware at all.
**Results.** We implement our architectural extensions in RTL and synthesize the design using a TSMC 7 nm process node. The compression algorithm reduces the memory traffic by 66.9% compared to uncompressed images and by up to 20.4% compared to the state-of-the-art real-time framebuffer compression (Wang et al., 2020). We conduct an IRB-approved human subject study on 11 participants. The results suggest that our compression algorithm introduces few visible artifacts to users. In summary, this paper makes the following contributions:
* We propose an image encoding scheme to reduce DRAM traffic in mobile VR systems. The scheme leverages the eccentricity-dependent color discrimination (in)ability of human visual systems.
* We show that the new encoding scheme can be formulated as a convex optimization problem with an analytical solution.
* We propose lightweight and modular hardware support to enable real-time encoding.
* ASIC synthesis and human subject studies show that the new encoding scheme reduces the DRAM traffic by 66.9% with little to no subjective perceptual quality degradation.
The rest of the paper is organized as follows. Sec. 2 introduces the background. Sec. 3 describes our key compression algorithm. Sec. 4 introduces the co-designed hardware architecture. Sec. 5 discusses the experimental methodology, followed by the evaluation results in Sec. 6. We relate our work to prior art in Sec. 7 and conclude in Sec. 8.
## 2. Background and Motivation
We first introduce the background of human color perception and its eccentricity dependence, which form the psychophysical basis for our compress algorithm (Sec. 2.1). We then describe today's real-time frame compression algorithm, which forms an important baseline for our algorithm (Sec. 2.2).
### Eccentricity-Dependent Color Perception
**Colors and Color Spaces**. In a typical rendering pipeline, a color is usually described in the linear RGB space with three channels; each channel is a floating point number between 0 and 1. For output encoding, each channel in the linear RGB color space is transformed to the common sRGB color space, where each channel is an 8-bit integer between 0 and 255. This transformation is non-linear, called gamma encoding, and is described by the following function \(f_{s2r}\), where \(x\in[0,1]\) represents a linear RGB channel value (Kang et al., 2019; Wang et al., 2020):
\[f_{s2r}(x):=\begin{cases}\lfloor 12.92x\rfloor&x\leq 0.0031308\\ \lfloor 1.055x^{1/2.4}-0.055\rfloor&x>0.0031308\end{cases} \tag{1}\]
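For reference, a small Python sketch of this gamma encoding is shown below; the multiplication by 255 that maps the encoded value onto the 8-bit range described in the text is written out explicitly, and the function name is ours.

```python
def linear_rgb_to_srgb(x: float) -> int:
    """Gamma-encode one linear RGB channel value x in [0, 1] to an 8-bit sRGB code."""
    if x <= 0.0031308:
        y = 12.92 * x
    else:
        y = 1.055 * x ** (1 / 2.4) - 0.055
    return int(y * 255)  # truncate to an integer in [0, 255]

# e.g. linear_rgb_to_srgb(0.5) == 187
```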
Psychophysical studies on color discrimination commonly operate in the DKL color space (Kang et al., 2019; Wang et al., 2020), mainly because the DKL space models the opponent process in the human visual system. The DKL space is a linear transformation away from the linear RGB color space:
\[[K_{1},K_{2},K_{3}]^{T}=\text{M}_{\text{RGB2DKL}}[R,G,B]^{T} \tag{2}\]
where \([R,G,B]\) is the color in the linear RGB space, \([K_{1},K_{2},K_{3}]\) is the color in the DKL space, and \(\text{M}_{\text{RGB2DKL}}\) is a \(3\times 3\) constant matrix (with the same coefficients, \([[0.14,0.17,0.00],[-0.21,-0.71,-0.07],[0.21,0.72,0.07]]\), as in Duinkharjav et al. (2020)).
**Color Discrimination.** It is well-established that humans cannot discriminate between colors that are close to each other (Wang et al., 2020; Wang et al., 2020). For instance, Fig. 1 shows four colors that have different sRGB values but appear to be the same.
More formally, given a reference color \(\kappa\), there exists a _set_ of colors \(\mathcal{E}_{\kappa}\), in which all the colors are perceptually indistinguishable from \(\kappa\). In a linear color space such as DKL and RGB, the set of equi-appearance colors in \(\mathcal{E}_{\kappa}\) form an _ellipsoid_, whose center is \(\kappa\)(Kang et al., 2019). In the literature, such an ellipsoid is called a _discrimination ellipsoid_(Kang et al., 2019).
**Eccentricity Dependence.** Critically, human color discrimination ability is weaker in the peripheral vision (Kang et al., 2019; Wang et al., 2020).
Figure 1. The human visual system cannot discriminate colors that are close to each other. These four colors differ in tristimulus values, but appear to be the same color.
That is, for a color \(\kappa\), its discrimination ellipsoid \(\mathcal{E}_{\kappa}\) is larger, i.e., includes more indistinguishable colors, as \(\kappa\) moves away from one's fixation. Fig. 2 shows two figures that plot the discrimination ellipsoids under a \(5^{\circ}\) and a \(25^{\circ}\) eccentricity, respectively, in the linear RGB color space. Eccentricity is the angle from the center of the retina, a.k.a., current fixation or "fovea". The ellipsoids in the \(25^{\circ}\) plot are larger than those in the \(5^{\circ}\) plot, suggesting that the color discrimination ability is weaker in peripheral vision.
Color discrimination becomes weaker in the visual periphery for three reasons. First, the receptive field (RF) sizes of Retinal Ganglion Cells (RGCs) increase with eccentricity, a result of larger dendritic fields (Gall et al., 2017; Ganglion et al., 2018) and sparser RGC density in the periphery (Gall et al., 2017). A large RF means that an RGC integrates signals from a larger spatial area, leading to more blurring in the (spatial) frequency domain. Second, cone cells (the photoreceptors responsible for vision under normal daylight) become larger in size as eccentricity increases (Gall et al., 2017), also contributing to blurring in spatial frequency. Finally, the distribution of cone cells on our retina is extremely non-uniform: over \(95\%\) of the cone cells are located in the central region of the retina (i.e., the fovea) with an eccentricity below \(5^{\circ}\) (Gall et al., 2017; Gall et al., 2017). The density of the cone cells decreases drastically in the visual periphery, which is, thus, significantly under-sampled spatially.
The full color discrimination function \(\Phi\), expressed below, is thus parameterized by both the reference color \(\kappa\) and the eccentricity \(\mathbf{e}\):
\[\Phi:(\kappa,\mathbf{e})\mapsto(a,b,c) \tag{3}\]
where \((a,b,c)\) represents the semi-axes lengths of the discrimination ellipsoid belonging to color \(\kappa\) at an eccentricity \(\mathbf{e}\) in the DKL color space (Gall et al., 2017), a common color space for color perception experiments. Given \((a,b,c)\), \(\mathcal{E}_{\kappa}\), the discrimination ellipsoid of color \(\kappa\) in the DKL space, is given by:
\[\frac{(x-\kappa_{1})^{2}}{a^{2}}+\frac{(y-\kappa_{2})^{2}}{b^{2}}+\frac{(z- \kappa_{3})^{2}}{c^{2}}=1 \tag{4}\]
where \((\kappa_{1},\kappa_{2},\kappa_{3})\) represent the three channels of the color \(\kappa\).
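As a concrete illustration, the sketch below tests whether a candidate color falls inside the discrimination ellipsoid of a reference color \(\kappa\) (Equ. 4); it assumes, consistent with Sec. 3.4, that \(\mathrm{M}_{\mathrm{RGB2DKL}}\) maps linear RGB to DKL, and the function name is ours.

```python
import numpy as np

# RGB -> DKL matrix with the coefficients quoted in Sec. 2.1.
M_RGB2DKL = np.array([[ 0.14,  0.17,  0.00],
                      [-0.21, -0.71, -0.07],
                      [ 0.21,  0.72,  0.07]])

def is_indistinguishable(candidate_rgb, kappa_dkl, semi_axes):
    """Return True if `candidate_rgb` lies inside kappa's discrimination ellipsoid.

    candidate_rgb: candidate color in linear RGB.
    kappa_dkl:     reference color kappa in DKL space.
    semi_axes:     (a, b, c) returned by the discrimination function Phi.
    """
    k = M_RGB2DKL @ np.asarray(candidate_rgb, dtype=float)     # to DKL space
    d = (k - np.asarray(kappa_dkl, dtype=float)) / np.asarray(semi_axes, dtype=float)
    return float(d @ d) <= 1.0
```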
The function \(\Phi\) can be implemented using a Radial Basis Function (RBF) network (Kang et al., 2017), which is extremely efficient to evaluate on GPUs in real time. In our measurement on the Oculus Quest 2 VR headset using Oculus' OVR Metrics Tool (Oculus, 2018), evaluating the RBF network runs at 72 FPS, matching the display refresh rate while consuming sub-1 mW power.
AR and VR headsets, in providing an immersive experience, usually have a wide FoV that is above \(100^{\circ}\). Therefore, the vast majority of the pixel colors fall in the peripheral vision. The eccentricity-dependent color discrimination (in)ability of the human visual system gives opportunities for better image compression, which this paper exploits.
### Real-Time Frame Compression
**DRAM Traffic.** A variety of data communication traffics occur on a VR system, as illustrated in Fig. 3, such as the traffic through DRAM, the display interface, and the wireless communications with a remote rendering server. This paper focuses on reducing the DRAM traffic, which occurs when the different Intellectual Property (IP) blocks in the SoC communicate with each other during rendering.
Each frame, the GPU writes the frame data to the frame buffer in DRAM, which is then read by the display controller. It is this DRAM traffic (i.e., GPU \(\leftrightarrow\) frame buffer \(\leftrightarrow\) display controller) that this paper focuses on reducing. When rendering a VR (\(360^{\circ}\)) video, additional DRAM traffic occurs between the network interface controller, the video codec, and the GPU (Gall et al., 2017). While not explicitly targeted in this paper, this traffic can also potentially be reduced by our compression algorithm, especially in scenarios where remotely rendered frames are transmitted one by one (rather than as a video) (Shanhan et al., 2017; Wang et al., 2018).
Figure 2. Color discrimination is eccentricity dependent. The discriminative ability is weaker as the eccentricity increases. As a result, the sizes of the discrimination ellipsoids increase with the eccentricity. The two plots on the right show the discrimination ellipsoids under a \(5^{\circ}\) and a \(25^{\circ}\) eccentricity, respectively, in the linear RGB color space (i.e., sRGB normalized to \([0,1]\) without gamma (Kang et al., 2017; Ganglion et al., 2018)). The discrimination ellipsoids in each plot are shown for 27 colors uniformly sampled in the linear RGB color space between [0.2, 0.2, 0.2] and [0.8, 0.8, 0.8].
Reducing DRAM traffic is critical. It is well-established that data transfer and memory access energy far outweighs the energy consumption of computation. For instance, compared to a Multiply-Accumulate (MAC) operation on 1-Byte fixed-point data, transferring one Byte of information through DRAM consumes 800\(\times\) higher energy (Shanhan et al., 2017; Wang et al., 2018). Reducing DRAM traffic in visual computing systems has been a main research focus in recent years (Kang et al., 2018; Wang et al., 2018; Wang et al., 2018).
**Framebuffer Compression Algorithms.** An effective and commonly used approach to reduce DRAM traffic in a rendering system is framebuffer compression, which compresses and decompresses every frame in and out of the DRAM. To ensure a low _per-frame_ latency, compression in VR must be done on a per-frame basis, precluding video compression methods such as H.265/VP9, which necessarily require buffering a sequence of frames before compression (Wang et al., 2018; Wang et al., 2018). Offline image compression methods such as JPEG and PNG are rarely used in framebuffer compression as they are too compute-intensive. For instance, JPEG requires chroma subsampling, transforming images to a frequency space followed by quantization and Huffman encoding (Wang et al., 2018).
Today's framebuffer compression methods universally use a much faster base+delta (BD) strategy. Fig. 4 uses a simple example to illustrate the basic idea behind BD, which compresses each color channel and each pixel tile individually. The tile size in Fig. 4 is 4\(\times\)4. In each tile, BD chooses a base pixel and then calculates the \(\Delta\)s/offsets between all other pixels and the base pixel. In the example of Fig. 4, the base pixel is the first pixel. The \(\Delta\)s will necessarily have smaller magnitudes compared to the original pixel values and, thus, require fewer bits to encode.
The BD compression algorithm is lightweight: it works completely in the image space, as opposed to the frequency domain which requires an additional, compute-intensive transformation (e.g., Fast Fourier Transform or Discrete Cosine Transformation); it requires only fixed-point addition arithmetic; it is also embarrassingly parallel. Therefore, the basic BD strategy is universally implemented in today's mobile SoCs for compressing any traffic in and out of the DRAM. A classic example is the Arm Frame Buffer Compression (AFBC) technology, which is now in almost all of Arm's GPU, Video Codec, and Display Controller IPs (Bradbury et al., 2018).
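As an illustration, a minimal Python sketch of BD encoding for a single-channel tile is shown below; choosing the tile minimum as the base and using a ceiling so that the largest delta always fits are our simplifications of one possible BD variant, not the exact AFBC format.

```python
import math

def bd_encode_channel(tile):
    """Base + Delta encode one color channel of a pixel tile (Fig. 4).

    tile: list of 8-bit sRGB values, e.g. 16 values for a 4x4 tile.
    Returns the base, the per-pixel deltas, the bits spent per delta, and the
    total bit cost of the tile following Equ. 5 (8 bits for the base plus one
    delta for each of the remaining pixels).
    """
    base = min(tile)                              # keeps every delta non-negative
    deltas = [v - base for v in tile]
    spread = max(tile) - base
    bits_per_delta = math.ceil(math.log2(spread + 1)) if spread > 0 else 0
    total_bits = 8 + (len(tile) - 1) * bits_per_delta
    return base, deltas, bits_per_delta, total_bits

# For example, a 4x4 tile whose values span at most 15 codes needs 4 bits per
# delta, i.e. 8 + 15 * 4 = 68 bits instead of 16 * 8 = 128 bits uncompressed.
```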
## 3. Color Perception-Aware Compression
This section introduces a color perception-aware image encoding and decoding algorithm. We start by describing the high-level ideas (Sec. 3.1), followed by a precise problem formulation in the form of constraint optimization (Sec. 3.2). We then show how this optimization problem has an analytical solution when relaxed to a convex problem (Sec. 3.3). We then describe the full compression algorithm (Sec. 3.4).
### Key Ideas
The basic BD algorithm is numerically lossless. Our observation is that numerically lossless compression is unnecessary to preserve perceptual equivalence -- because of the inherent the color discrimination (in)ability of human visual system.
**Intuition.** The basic BD algorithm encodes all the \(\Delta\)s in a tile (off of a base pixel) rather than the original pixel values. Thus, to improve the compression ratio over BD we must reduce the magnitude of the \(\Delta\)s, which, intuitively, requires bringing pixels _more similar_ to each other.
Under a numerically lossless constraint, however, the \(\Delta\)s between pixels are fixed. Our idea is to relax the constraint from numerical lossless to _perceptually lossless_. In this way, we could adjust pixel color values, as long as each pixel color does not go beyond its discrimination ellipsoid, to minimize the total number of bits required to encode the \(\Delta\)s. This encoding is numerically lossy as we intentionally change the color values, but will preserve the perceptual quality.
**An Example.** More concretely, consider the example in Fig. 5, which shows 16 pixels in a tile on an axis. The number of bits required to encode the entire tile is (ignoring any metadata for now):
\[B=B_{0}+N\times B_{D} \tag{5}\] \[B_{0}=8,N=15,B_{D}=\lfloor log_{2}(Max-Min+1)\rfloor \tag{6}\]
Figure 4. Base + Delta (BD) compression, which works in the sRGB color space. For each pixel tile (4\(\times\)4 here), we find a base pixel (95 here), and calculate the \(\Delta\) of all other pixels from the base pixel. The \(\Delta\) are smaller in magnitude and thus require fewer bits to encode. The same compression strategy is applied to all three color channels.
Figure 3. Different types of data communication traffic in a VR system. This paper focuses on reducing DRAM traffic.
where \(B_{0}\) being 8 denotes that we need 8 bits to encode a base pixel (assuming the common 8-bit per-channel encoding), and \(N\) being 15 denotes that there are 15 other pixels. \(B_{D}\) denotes the number of bits required to encode the \(\Delta\) of each of the 15 non-base pixels.
The minimum value of \(B_{D}\) occurs when the base pixel is chosen to be within \([Min,\,Max]\), in which case \(B_{D}=[log_{2}(Max-Min+1)]\). This is because the number of bits to encode each \(\Delta\) must be the same1, so we must accommodate the _largest possible_\(\Delta\), which is the difference between the maximum and minimum pixels in the tile. Therefore, to improve compression ratio we must reduce \((Max-Min)\).
Footnote 1: It is possible, but uncommon, to vary the number of bits to encode the \(\Delta\)s in a tile with more hardware overhead. Following prior work (Sutton et al., 2017), this paper assumes that one single bit-length is used to encode all \(\Delta\)s in a tile. We consider variable bit-length an orthogonal idea to this paper.
The bottom example in Fig. 5 illustrates what would happen when we relax the compression constraint to be perceptually lossless. The adjusted pixel values deviate from the original values, but as long as they still within the respective ellipsoids, \((Max-Min)\) is reduced without affecting perceptual quality.
It is worth noting that to obtain the highest compression rate it is necessary to adjust interior pixels, as is the case in this example. The central challenge we address in this paper is how to design a principled algorithm that maximizes the bit reduction while being lightweight to execute in real time.
### Problem Formulation
Our compression algorithm works on top of the baseline BD algorithm. Our goal is to adjust pixel colors to minimize the bit-length required to encode the \(\Delta\)s in a tile. The adjusted pixel tile then goes through any existing BD compression method. Critically, color adjustment must not violate the perceptual constraints. Therefore, we formulate our compression as a constraint optimization problem:
\[\operatorname*{argmin}_{\mathbf{p}}\ \sum_{\mathrm{C}\in\{R,G,B\}}\log_{2}\lfloor\max\{f_{s2r}(\mathbf{p}^{\mathrm{C}})\}-\min\{f_{s2r}(\mathbf{p}^{\mathrm{C}})\}+1\rfloor, \tag{7a}\] \[\text{where }\mathbf{p}\coloneqq[p_{0},\ p_{1},\ \cdots,\ p_{N-1}], \tag{7b}\] \[\mathbf{p}^{\mathrm{C}}\coloneqq[p_{0}^{\mathrm{C}},\ p_{1}^{\mathrm{C}},\ \cdots,\ p_{N-1}^{\mathrm{C}}],\ \mathrm{C}\in\{R,G,B\} \tag{7c}\] \[s.t.\ \forall p_{i}\in\mathbf{p}\colon\ p_{i}\in\mathcal{E}_{p_{i}} \tag{7d}\]
where \(\mathbf{p}\) is the optimization variable, which is the collection of \(N\) pixels in a tile (Equ. 7b); \(p_{i}^{\mathrm{C}}\) denotes channel C (R, G, or B) of \(i\)-th pixel in the linear RGB space.
The constraints (Equ. 7d) provide the (convex) ellipsoid boundary for each pixel to move while maintaining perception quality. \(f_{s2r}(\cdot)\) is the non-linear transformation from RGB to sRGB space (Sec. 2.1), which is ultimately where bit encoding takes place. The objective function (Equ. 7a) minimizes the bit cost for encoding the \(\Delta\)s across all channels (it is a constant cost to encode the base pixel, e.g., 8 in the common sRGB encoding). This optimization formulation is applied to each pixel tile independently.
Unfortunately, this optimization problem is impractical to solve in real time, because the objective function is non-convex due to the non-linearity of min, max, floor, and \(f_{s2r}(\cdot)\). Empirically, we also find that popular solvers in Matlab spend hours while still being stuck in local optima.
**Relaxation.** We introduce two relaxations that turn the problem into a convex optimization. Critically, while general convex optimization requires iterative solvers (e.g., gradient descent or Newton's method (Kolmogorov, 1954)), our relaxed problem is one such that it has an analytical solution. The relaxations keep the same constraints as before (Equ. 7d) and, thus, still enforce the perceptual quality.
The first relaxation is based on the empirical observation that most discrimination ellipsoids are elongated along either the Red or the Blue axis. See the discrimination ellipsoids in Fig. 2 for an illustration. This makes sense as human visual perception is most sensitive to green lights (Sutton et al., 2017; Sutton et al., 2017) and, thus, has the least "wiggle room" along the Green axis.
Our idea thus is to, instead of minimizing the bit costs across _all_ three axes, minimize along _only_ the Red or the Blue axis (while still having the flexibility of adjusting all the channels of all the pixels in a tile). Using the Blue axis as an example, this relaxation yields the following new objective function in Equ. 8a:
\[\operatorname*{argmin}_{\mathbf{p}}\ \log_{2}\lfloor\max\{f_{s2r}(\mathbf{p}^{B})\}-\min\{f_{s2r}(\mathbf{p}^{B})\}+1\rfloor, \tag{8a}\] \[\Rightarrow\ \operatorname*{argmin}_{\mathbf{p}}\ \max\{f_{s2r}(\mathbf{p}^{B})\}-\min\{f_{s2r}(\mathbf{p}^{B})\}, \tag{8b}\] \[\overset{\sim}{\Rightarrow}\ \operatorname*{argmin}_{\mathbf{p}}\ \max\{\mathbf{p}^{B}\}-\min\{\mathbf{p}^{B}\}. \tag{8c}\]
Figure 5. An intuitive illustration of our perception-aware compression, where pixel values are adjusted to be more similar to each other by leveraging the inherent human color discrimination thresholds.
Second, the objective function in Equ. (8a) can be transformed to Equ. (8b) without sacrificing solution optimality, because \(\log_{2}\lfloor\cdot\rfloor\) is monotonically non-decreasing. We then remove the non-linear RGB to sRGB transformation function \(f_{s2r}(\cdot)\). This removal does not preserve the solution optimality, but gives us a convex objective function in Equ. (8c).
**Proof of Convexity.** Let the objective function \(max\{\mathbf{x}\}-min\{\mathbf{x}\}\) be \(g(\mathbf{x}):\mathbb{R}^{N}\rightarrow\mathbb{R}\). To prove \(g(\mathbf{x})\) is convex, we must show: \(\forall\mathbf{x}_{1},\mathbf{x}_{2}\in\mathbb{R}^{N}\) and \(t\in[0,1]\), \(g(t\mathbf{x}_{1}+(1-t)\mathbf{x}_{2})\leq tg(\mathbf{x}_{1})+(1-t)g(\mathbf{ x}_{2})\).
Proof.: Observe that: \(g(t\mathbf{x}_{1}+(1-t)\mathbf{x}_{2})\coloneqq max(t\mathbf{x}_{1}+(1-t) \mathbf{x}_{2})-min(t\mathbf{x}_{1}+(1-t)\mathbf{x}_{2})\).
We know \(max(t\mathbf{x}_{1}+(1-t)\mathbf{x}_{2})\leq max(t\mathbf{x}_{1})+max((1-t) \mathbf{x}_{2})=t\ max(\mathbf{x}_{1})+(1-t)\ max(\mathbf{x}_{2})\). Similarly we can derive: \(min(t\mathbf{x}_{1}+(1-t)\mathbf{x}_{2})\geq t\ min(\mathbf{x}_{1})+(1-t)\ min( \mathbf{x}_{2})\).
Therefore, \(g(t\mathbf{x}_{1}+(1-t)\mathbf{x}_{2})\leq(t\ max(\mathbf{x}_{1})+(1-t)\ max( \mathbf{x}_{2}))-(t\ min(\mathbf{x}_{1})+(1-t)\ min(\mathbf{x}_{2}))=tg( \mathbf{x}_{1})+(1-t)g(\mathbf{x}_{2})\).
### Analytical Solution Intuition
The relaxations introduced before lead to an analytical solution without requiring iterative solvers. Observe that the objective function in Equ. (c) minimizes the difference between the maximum and minimum values along the Blue axis. To achieve that, the intuition is that we must move the colors closer to each other along the Blue axis while making sure the adjusted colors stay within the respective discriminative ellipsoids.
Exactly how to move the colors falls into two cases. Fig. 6 illustrates the two cases using two examples. Without losing generality, we choose to optimize along the Blue axis in these examples (the case along the Red axis is in principle the same), and we plot the projection of the ellipsoids onto the B-G plane for better visualization.
In the first case (Fig. 6(a)), there is no single plane that cuts across all ellipsoids. This is because the Lowest of the Highest points of all ellipsoids (LH) is lower than the Highest of the Lowest points of all ellipsoids (HL). The optimal strategy is to move all the colors higher than HL toward HL and move all the colors lower than LH toward LH. The movement is necessarily executed along the _extrema vector_, which is the vector that connects the highest and the lowest point of an ellipsoid. After the adjustment, the Blue channels across all the pixels are either HL or LH. That is, the maximum \(\Delta\) along the Blue axis is now HL - LH, which is the smallest gap we can get the Blue channels to be without going outside the ellipsoid boundaries.
In the second case (Fig. 6(b)), there is a common plane (\(P\)) that cuts across all four ellipsoids. In fact, there are infinitely many such planes, because LH is higher than HL; thus, any plane between LH and HL will cut across all ellipsoids. In this case, we can simply pick any such plane and move all the colors to that plane. For the simplicity of implementation, we choose the average of the LH and the HL planes as the common plane and move colors along the extrema vectors. In this way, the Blue channel value is exactly the same for all pixels, requiring no \(\Delta\) bit for the Blue channel.
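A NumPy sketch of this two-case adjustment along the Blue axis is given below; it assumes the per-pixel ellipsoid extrema (the highest and lowest points along Blue, whose computation is described in Sec. 3.4) are already available, and the function name and the use of channel index 2 for Blue are ours.

```python
import numpy as np

def adjust_tile_along_blue(colors, lows, highs):
    """Shrink the Blue-channel spread of a pixel tile (the two cases of Fig. 6).

    colors:      (N, 3) linear-RGB pixel colors (the ellipsoid centers).
    lows, highs: (N, 3) lowest / highest points of each pixel's discrimination
                 ellipsoid along the Blue axis (channel index 2).
    """
    colors = np.asarray(colors, dtype=float)
    lows, highs = np.asarray(lows, dtype=float), np.asarray(highs, dtype=float)

    LH = highs[:, 2].min()          # Lowest of the Highest points
    HL = lows[:, 2].max()           # Highest of the Lowest points

    if LH >= HL:                    # case 2: a common cutting plane exists
        targets = np.full(len(colors), 0.5 * (LH + HL))
    else:                           # case 1: pull colors above HL down to HL and
        targets = np.clip(colors[:, 2], LH, HL)   # colors below LH up to LH

    adjusted = colors.copy()
    for i, t in enumerate(targets):
        if t == colors[i, 2]:
            continue                               # already at its target value
        w = (t - lows[i, 2]) / (highs[i, 2] - lows[i, 2])
        adjusted[i] = lows[i] + w * (highs[i] - lows[i])   # move along the extrema vector
    return adjusted
```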
### Overall Compression Algorithm
We illustrate how our color adjustment algorithm fits in the overall rendering and compression pipeline in Fig. 7. Our adjustment algorithm takes as inputs a tile of pixels (each with three channels) and the parameters of their corresponding discrimination ellipsoids. The algorithm generates the perceptually-adjusted pixel tile as the output. We apply the same color adjustment strategy along both the Blue and the Red axis for each tile, and pick the better one in the end.
It is worth noting that our algorithm does not directly perform compression in itself; it simply adjusts pixel colors so that the (numerically lossless) BD encoding later can achieve higher compression rate. Specifically, the adjusted pixel tile will be first transformed from the linear RGB to the sRGB space, which then goes through the usual BD compression.
Figure 6. The two cases in adjusting color values to minimize the \(\Delta\) along the Blue axis. For simplicity, we draw the ellipsoids in the B-G plane. The empty markers \((C_{0},C_{1},C_{2},C_{3})\) denote the original colors and the solid markers \((C^{\prime}_{0},C^{\prime}_{1},C^{\prime}_{2},C^{\prime}_{3})\) denote the adjusted colors. In both cases, colors are adjusted along the extrema vector \(\mathbf{V}\).
**Ellipsoid Transformation.** The first step in our algorithm is to transform the discrimination ellipsoids from the DKL space to the linear RGB space, which is where color adjustment takes place (Sec. 3.3). While ellipsoids are axis-aligned in the DKL color space [22], they will not be axis-aligned after the linear transformation from the DKL to the RGB color space. Therefore, an ellipsoid in the linear RGB space has to take the form of a general quadric surface:
\[Ax^{2}+By^{2}+Cz^{2}+Dx+Ey+Fz+Gxy+Hyz+Izx+1=0 \tag{9}\]
Transforming an axis-aligned ellipsoid in the DKL space to an ellipsoid in the linear RGB amounts to the following matrix multiplication:
\[\begin{bmatrix}A\\ B\\ C\\ D\\ E\\ F\\ G\\ H\\ I\end{bmatrix}=\begin{bmatrix}(T\odot T)^{\top}&\mathbf{0}\\ \mathbf{0}&T\\ \begin{bmatrix}2T_{00}T_{01}&2T_{10}T_{11}&2T_{20}T_{21}\\ 2T_{01}T_{02}&2T_{11}T_{12}&2T_{21}T_{22}\\ 2T_{00}T_{02}&2T_{10}T_{12}&2T_{20}T_{22}\end{bmatrix}&\mathbf{0}\end{bmatrix}\times\begin{bmatrix}1/a^{2}t\\ 1/b^{2}t\\ 1/c^{2}t\\ -2\kappa_{1}/a^{2}t\\ -2\kappa_{2}/b^{2}t\\ -2\kappa_{3}/c^{2}t\end{bmatrix},\]
\[t=1-\left(\frac{\kappa_{1}^{2}}{a^{2}}+\frac{\kappa_{2}^{2}}{b^{2}}+\frac{ \kappa_{3}^{2}}{c^{2}}\right) \tag{10}\]
where \(T=\begin{bmatrix}T_{00}&T_{01}&T_{02}\\ T_{10}&T_{11}&T_{12}\\ T_{20}&T_{21}&T_{22}\end{bmatrix}\) is the constant \(\mathrm{M}_{\mathrm{RGB2DKL}}\) matrix in Sec. 2.1, \(\odot\) is element-wise product, \((\kappa_{1},\kappa_{2},\kappa_{3})\) is the color in DKL space, and \((a,b,c)\) are the semi-axis lengths of \(\kappa\)'s discrimination ellipsoid. The derivation uses basic linear transformations and is omitted here due to space constraints.
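For concreteness, the sketch below computes the quadric coefficients of Equ. 9 numerically. It is written with homogeneous quadric matrices, a formulation mathematically equivalent to the explicit expansion of Equ. 10; the function name and the use of NumPy are our own choices, and `T` stands for the \(\mathrm{M}_{\mathrm{RGB2DKL}}\) matrix above.

```
import numpy as np

def rgb_quadric_coeffs(kappa, abc, T):
    """Coefficients (A..I) of the discrimination ellipsoid in linear RGB (Equ. 9)."""
    kappa = np.asarray(kappa, dtype=float)
    s2 = np.asarray(abc, dtype=float) ** 2

    # Ellipsoid in DKL homogeneous coordinates: v^T Q_dkl v = 0 with v = (u1, u2, u3, 1).
    Q_dkl = np.zeros((4, 4))
    Q_dkl[:3, :3] = np.diag(1.0 / s2)
    Q_dkl[:3, 3] = Q_dkl[3, :3] = -kappa / s2
    Q_dkl[3, 3] = np.sum(kappa**2 / s2) - 1.0

    # Substitute u = T * rgb (homogeneously) to pull the quadric back into linear RGB.
    M = np.eye(4)
    M[:3, :3] = np.asarray(T, dtype=float)
    Q = M.T @ Q_dkl @ M

    Q = Q / Q[3, 3]  # normalize so the constant term equals 1, matching Equ. 9
    A, B, C = Q[0, 0], Q[1, 1], Q[2, 2]
    D, E, F = 2 * Q[0, 3], 2 * Q[1, 3], 2 * Q[2, 3]
    G, H, I = 2 * Q[0, 1], 2 * Q[1, 2], 2 * Q[0, 2]
    return A, B, C, D, E, F, G, H, I
```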
**Color Adjustment.** Once we have the ellipsoids in the linear RGB space, we can perform color adjustment, which, as illustrated in Fig. 6 and described in Sec. 3.3, is done in three steps: 1) compute the extrema, i.e., the highest and the lowest point, of each ellipsoid; 2) compute \(\mathtt{LH}\) and \(\mathtt{HL}\) based on the extrema of all ellipsoids; 3) compare \(\mathtt{LH}\) and \(\mathtt{HL}\) and move colors along extrema vectors accordingly. Step 2 and 3 are relatively straightforward, so here we focus on the mathematical details of Step 1.
Extrema along the Blue axis can be computed by taking the partial derivatives of the ellipsoid equation along the Red and Green axes:
\[\frac{dz}{dx} =2Ax+Gy+Iz+D=0 \tag{11a}\] \[\frac{dz}{dy} =Gx+2By+Hz+E=0 \tag{11b}\]
These partial derivatives give us two planes, whose intersection is a line along the vector \(\mathbf{v}\) that connects the two extrema. The extrema vector \(\mathbf{v}\) is calculated by taking the cross product of the normal vectors of the two planes:
\[\mathbf{v}=(2A,G,I)\times(G,2B,H) \tag{12}\]
The two extrema points \(H\) and \(L\) are then calculated by finding the intersection of \(\mathbf{v}\) and the ellipsoid:
\[\mathbf{x}:=(x_{1},x_{2},x_{3})=\mathrm{M}_{\mathrm{RGB2DKL}}\times\mathbf{v}^{T} \tag{13a}\] \[t=1/\sqrt{\frac{x_{1}^{2}}{a^{2}}+\frac{x_{2}^{2}}{b^{2}}+\frac{x_{3}^{2}}{c^{2}}} \tag{13b}\] \[H=\mathrm{M}_{\mathrm{RGB2DKL}}^{-1}\times(\kappa_{1}+x_{1}t,\kappa_{2}+x_{2}t,\kappa_{3}+x_{3}t)^{T},\quad L=\mathrm{M}_{\mathrm{RGB2DKL}}^{-1}\times(\kappa_{1}-x_{1}t,\kappa_{2}-x_{2}t,\kappa_{3}-x_{3}t)^{T} \tag{13c}\]
where \(\kappa\) is the pixel color in the DKL space, \((a,b,c)\) are DKL ellipsoid parameters, and \(\mathrm{M}_{\mathrm{RGB2DKL}}\) is the RGB to DKL transformation matrix (Sec. 2.1). We omit the derivation details due to space constraints, but the derivation amounts to a simple application of line-ellipsoid intersection and linear transformations between RGB and DKL space.
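A direct transcription of Equ. 11-13 into code looks as follows. This is illustrative only: the function and variable names are ours, and the quadric coefficients are assumed to come from Equ. 10.

```
import numpy as np

def blue_extrema(kappa, abc, M_rgb2dkl, A, B, G, H, I):
    """Highest/lowest points of one ellipsoid along the Blue axis, in linear RGB."""
    # Equ. 12: direction connecting the two extrema, from the two tangent-plane normals.
    v = np.cross([2.0 * A, G, I], [G, 2.0 * B, H])

    # Equ. 13a: express the direction in DKL space.
    M = np.asarray(M_rgb2dkl, dtype=float)
    x = M @ v

    # Equ. 13b: scale so that kappa +/- x*t lies on the ellipsoid surface.
    a, b, c = abc
    t = 1.0 / np.sqrt(x[0]**2 / a**2 + x[1]**2 / b**2 + x[2]**2 / c**2)

    # Equ. 13c: map the two surface points back to linear RGB.
    M_inv = np.linalg.inv(M)
    kappa = np.asarray(kappa, dtype=float)
    high = M_inv @ (kappa + x * t)
    low = M_inv @ (kappa - x * t)
    return high, low
```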
**Remarks on Decoding.** One desired byproduct of our algorithm is that it requires _no_ change to the existing frame-buffer decoding scheme -- our color adjustment algorithm simply changes the input to BD. During decoding (e.g., by the display controller), the existing BD decoder will construct the sRGB values from the BD-encoded data, which are then sent to the display. The exact BD encoding format varies across implementations and is not our focus. We assume the encoding format described in Zhang et al. [76].
Figure 7: Overview of our algorithm and how it fits into the existing rendering and compression pipeline. Our algorithm takes a tile of pixels and their corresponding discrimination ellipsoid parameters, and generates an adjusted pixel tile, which then goes through existing BD encoding.
## 4. Hardware Architecture
The analytical compression algorithm, while avoiding iterative solvers, is still compute intensive and too slow to execute in real time. We implement it as a GPU shader executing on the Adreno 650 GPU in Oculus Quest 2, a widely used mobile VR headset, where the compression algorithm runs at a mere 2 FPS. This section describes a lightweight hardware design that accelerates the compression algorithm. Sec. 4.1 describes how our custom hardware fits into the overall system and Sec. 4.2 describes the hardware details.
### Hardware Overview
Fig. 8 provides an overview of our architectural extension, dubbed the Color Adjustment Unit (CAU), and how CAU fits into existing mobile SoCs. The CAU executes the pixel adjustment algorithm described in Sec. 3. The CAU reads its input from an on-chip buffer, which stores the pixels and the discrimination ellipsoid parameters generated by the GPU. Following prior work (Krizhevsky et al., 2015), we assume that the GPU is responsible for generating the per-pixel discrimination ellipsoids. The generation algorithm is a lightweight RBF network (Sec. 2.1). In our measurement, the ellipsoid generation algorithm on Oculus Quest 2 runs at the maximum display refresh rate (72 FPS) while consuming less than 1 mW measured using Oculus' OVR Metrics Tool (Corba et al., 2018).
The output of the CAU enters the existing BD framebuffer encoder, which writes the encoded data to the DRAM. Any frame read out from the DRAM, e.g., by the Display Controller IP block when sending the frame to the display, will enter the BD decoder, which reconstructs the sRGB pixels. The figure provides a visual confirmation that our algorithm 1) works on top of, rather than replaces, BD encoding, and 2) does not change the decoding architecture.
### Color Adjustment Unit
Internally, the CAU consists of an array of Processing Elements (PEs), each of which is designed to adjust colors for _one tile_ of pixels, which in our current design is assumed to be \(4\times 4\). Each PE interfaces with a dedicated Pending Buffer, which holds all the information of the pixel tiles generated from the GPU. Having more PEs allows the system to compress multiple tiles simultaneously.
**Pipelining.** The PE is fully pipelined to accept a new tile every cycle. Fig. 8 illustrates the detailed architecture, which has three main phases, each of which is internally pipelined. The first phase computes the extrema. The next phase uses reduction trees to calculate HL and LH from the extrema. The final phase moves the colors along the extrema vectors.
**Compute Extrema Blocks.** This component calculates the extrema of all the pixels in a tile, which is naturally parallelizable across pixel and, thus, has multiple parallel units, each of which is responsible for one pixel. The top-right box in Fig. 8 illustrates the microarchitecture. This is the most compute intensive block in the CAU, since it involves multiple divisions and square root operations. The division and square root hardware implements Equ. 13b, and the adder and subtractor circuit implements Equ. 13c. The DKL-RGB transformations in Equ. 13c and Equ. 13a are implemented through matrix vector multiplication executed on a \(3\times 3\) MAC array.
**Compute Planes Blocks.** The extrema calculated in the previous phase enter this unit, which finds the channel values for the HL plane (maximum of the minima) and the LH plane (minimum of the maxima). We implement this stage using two reduction (comparator) trees to generate both planes simultaneously.
**Color Shift Blocks.** This block takes the original color values and the two planes as input and outputs the modified color values. This phase is control-flow heavy, as it involves multiple condition checks, e.g., testing the relationship between a point and a plane. A custom datapath in the CAU avoids much of the inefficiency surrounding control flow that is detrimental to GPU performance. This hardware is a relatively straightforward mapping from the algorithm.

Figure 8. Illustration of the hardware support, which we dub the Color Adjustment Unit (CAU), for our image encoding and how the CAU interfaces with the rest of the SoC. Internally, the CAU uses an array of PEs, each of which adjusts colors for one tile of pixels. The CAU is fully pipelined to accept a new tile every cycle from the Pending Buffer, which receives the rendered pixels and their discrimination ellipsoids from the GPU.
**Pending Buffer.** The Pending Buffers store intermediate pixels and their discrimination ellipsoids from the GPU before they are consumed by the CAU. Each buffer is interfaced with a dedicated PE and, thus, contains the data for all the pixel tiles to be consumed by the PE.
The buffers must be properly sized so as to not stall or starve the CAU pipeline. In order to be independent of the exact GPU microarchitecture details, we make a conservative estimation of the buffer size. In particular, we allocate enough space in the buffer such that it can hold all the pixels generated by the GPU in each CAU cycle _even if_ the GPU is fully utilized, in which case each shader core in the GPU generates 1 pixel/GPU cycle. Note that the GPU and CAU cycle times need not be the same. The number of PEs in a CAU must be properly decided so as to not stall either the GPU nor the CAU, as we will discuss in Sec. 6.1.
## 5. Experimental Methodology
### Setup
**Hardware.** We implement our encoder and decoder units in SystemVerilog and use an EDA flow consisting of Synopsys and Cadence tools with the TSMC 7 nm FinFET technology to obtain latency and area. We use Synopsys DesignWare library for a variety of RTL implementations such as the pipelined divider. Power is estimated using Synopsys PrimePX with fully annotated switching activity.
The DRAM energy is calculated using Micron's System Power Calculators (Castro et al., 2017), assuming a typical 8 Gb, 32-bit LPDDR4. On average, the DRAM access energy per pixel is estimated to be 3,477 pJ/pixel, matching prior work (Srivastava et al., 2017; Wang et al., 2018).
**Dataset and Software.** We evaluate our compression algorithm with 6 different VR scenes used in VR color perception studies (Wang et al., 2018). In each scene, each frame is rendered with two sub-frames, one for each eye. All the frames are dynamically rendered (on the GPU) at run time, i.e., the frames are neither loaded from the disk nor streamed over the network. Following the common practice in color perception studies (Wang et al., 2018; Wang et al., 2018), we keep pixels in the central 10\({}^{\circ}\) FoV unchanged, and apply the compression algorithm only on the rest (peripheral) pixels.
As discussed in Sec. 3.4, our algorithm works in conjunction with existing BD compression. In this paper, we assume a recent, state-of-the-art, BD algorithm described by Zhang et al. (Zhang et al., 2019), from which we obtain the final compression rate.
### Human Subject Studies
We also evaluate the perceptual quality of our compression algorithm on actual participants. We recruit 11 participants (3 female; ages between 19 and 40). None of the participants were aware of the research, the number of conditions, or the hypothesis before taking the experiments, which were approved by an Institutional Review Board.
We face a dilemma in the user study: the compression algorithm implemented as a GPU shader is too slow on today's mobile VR headsets (e.g., 2 FPS on Oculus Quest 2, as discussed in Sec. 4) -- the very motivation behind our architectural support -- but this also means we cannot use a mobile VR headset for the user study. Our approach is to run the user study on a tethered VR headset, HTC Vive Pro Eye, connected to a PC with a powerful Nvidia RTX A2000 GPU that runs the compression algorithm at 90 FPS, sufficient for the user study.
Each participant was shown the six VR scenes (20 seconds each) used in a prior study (Wang et al., 2018) in random order. To encourage and ensure that the participants actively and freely explored the scene, each participant was asked to perform a scene-specific task, such as counting the number of birds in the scene. At the end of each video, we asked the participant whether they notice any visual artifacts.
In order for participants to isolate potential artifacts introduced by our compression from other irrelevant artifacts (e.g., low resolution, aliasing in rendering), at the beginning of each test we show the participant two images on a computer display, one with and the other without our perceptual compression; see examples in Fig. 9. When participants viewed the images on the computer display, the entire frames were in their foveal vision so the color adjustment was clearly visible. In this way, we make sure the artifacts reported by users resulted from compression. This is a common practice in subjective color perception studies (Wang et al., 2018). The user study results should be seen as a _lower bound_ on the quality of the compression algorithm, because the participants were made aware of the artifacts beforehand and were thus better at identifying them.
Figure 9. A pair of images without (left) and with (right) our color adjustment. The two images when viewed on a conventional computer display are visibly different, because the entirety of the images will be in the viewer’s foveal vision.
### Baselines
We compare against four baselines:
* NoCom: no compression;
* BD: existing BD compression based on Zhang et al. (Zhang et al., 2019);
* PNG: lossless compression based on the popular Portable Network Graphics (PNG), which is unsuitable for real-time DRAM compression because of its high run-time overhead even with dedicated hardware acceleration (Beng et al., 2019; Zhang et al., 2019). For instance, the commercial IPB-PNG-E FPGA-based IP core compresses an \(800\times 600\) image only at a 20 FPS (Beng et al., 2019).
* SCC: an alternative strategy to exploit color discrimination based on the Set Cover formulation, which we describe next.
SCC uses a look-up table to map each 24-bit sRGB color to a more compact encoding. This can be formulated as a set cover problem (Zhu et al., 2019): find the smallest subset of sRGB colors C \(\subset\) sRGB whose discrimination ellipsoids, unioned together, cover all the \(2^{24}\) sRGB colors. Each color is then encoded with only \(\lceil log_{2}|\text{C}|\rceil\) bits, where \(|\cdot|\) denotes the set cardinality.
The set cover problem is a classic NP-complete problem (Zhu et al., 2019), where the optimal solution requires combinatorial search. We use a common greedy heuristic (Han et al., 2017) to construct the mapping tables. The encoding table consumes 30 MB and the decoding table consumes 96 KB, too large for SCC to be used for DRAM traffic compression in mobile SoCs.
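For reference, the greedy heuristic is the textbook one; a generic sketch is shown below. It is illustrative only: the real instance operates over all \(2^{24}\) sRGB colors and their ellipsoid neighborhoods, which is far too large to enumerate as literally as this, and the function and variable names are ours.

```
def greedy_set_cover(universe, candidate_sets):
    """Generic greedy heuristic for set cover.

    universe:        set of elements to cover (here: all sRGB colors)
    candidate_sets:  dict mapping a candidate color -> the set of colors
                     its discrimination ellipsoid covers
    Returns the chosen candidates (the palette C).
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered elements.
        best = max(candidate_sets, key=lambda c: len(candidate_sets[c] & uncovered))
        gained = candidate_sets[best] & uncovered
        if not gained:          # remaining elements cannot be covered
            break
        chosen.append(best)
        uncovered -= gained
    return chosen

palette = greedy_set_cover({1, 2, 3, 4}, {"A": {1, 2}, "B": {2, 3}, "C": {3, 4}})
print(palette)   # ['A', 'C']
```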
## 6. Evaluation
We first show that the area and power overhead of our compression scheme is negligible while ensuring real-time compression (Sec. 6.1). We then present the benefits of our compression scheme in DRAM traffic reduction and power savings, and analyze the sources of the savings (Sec. 6.2). We then present our human subject studies, which show that our compression scheme introduces little visible artifacts (Sec. 6.3). We present a sensitivity study of the key parameters in our compression scheme (Sec. 6.4). Finally, we discuss how we can accommodate a diverse range of users (Sec. 6.5).
### Area and Power Overhead
**Performance.** Our algorithm along with the hardware support achieves real-time compression. The CAU operates with a cycle time of about 6 _ns_, which translates to a frequency of about 166.7 MHz. The Adreno 650 GPU used in Oculus Quest 2 operates at a nominal frequency of 441 MHz, which means during each CAU cycle (at most) three pixels are generated by a shader core in the GPU. Given that the Adreno 650 GPU has 512 shader cores, each CAU cycle \(512\times 3\) pixels (i.e., 96 tiles) are generated. Therefore, we configure our CAU to have 96 PEs, which are able to process 96 tiles simultaneously, matching peak throughput of the GPU.
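The PE count follows directly from the clock-rate ratio; a quick sanity check with the numbers quoted above (a sketch, with variable names of our own choosing):

```
import math

# Back-of-the-envelope check of the PE count (all figures from the text above).
gpu_freq_mhz = 441.0             # Adreno 650 nominal frequency
cau_freq_mhz = 1000.0 / 6.0      # 6 ns cycle time, i.e., ~166.7 MHz
shader_cores = 512
pixels_per_tile = 4 * 4

pixels_per_core_per_cau_cycle = math.ceil(gpu_freq_mhz / cau_freq_mhz)  # 3
pixels_per_cau_cycle = shader_cores * pixels_per_core_per_cau_cycle     # 1536
tiles_per_cau_cycle = pixels_per_cau_cycle // pixels_per_tile           # 96 -> 96 PEs
print(tiles_per_cau_cycle)
```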
Thus, when compressing a \(5408\times 2736\) image (the highest rendering resolution on Oculus Quest 2), compression adds a delay of 173.4 \(\mu\)s, negligible in a rendering pipeline that operates at, say, 72 FPS with a frame time budget of 13.9 _ms_.
**Area and Power.** Our compression hardware extension introduces little area overhead, which consists of that of the Pending Buffers and the PEs. Each PE has an area of 0.022 _mm_\({}^{2}\), resulting in a total PE size of 2.1 _mm_\({}^{2}\). Each Pending Buffer holds data for two tiles (double buffering); the total buffer size is 36 KB, resulting in a total area of 0.03 _mm_\({}^{2}\).
The area overhead is negligible compared to the size of a typical mobile SoC. For instance, the Xavier SoC has an area of 350 _mm_\({}^{2}\) (12 nm) (Beng et al., 2019), Qualcomm Snapdragon 865 SoC has a die area of 83.54 _mm_\({}^{2}\) (7 nm) (Beng et al., 2019), and Apple A14 SoC has a die area of 88 _mm_\({}^{2}\) (5 nm) (Beng et al., 2019). The power consumption of each PE and its buffer is about 2.1 \(\mu W\), resulting in a total CAU power consumption of about 201.6 \(\mu W\), which we faithfully account for in the power saving analyses later.
### Results
**Compression Rate.** Fig. 10 shows the bandwidth reduction of our algorithm compared to the baselines. Our algorithm achieves a compression rate of 66.9%, 50.3%, and 15.6% over NoCom, SCC, and BD, respectively. Unsurprisingly, the highest gains are against NoCom, which is the original frames and uses 3 Bytes (24 bits) to store each pixel.
SCC (Sec. 5.3) is able to map all the \(2^{24}\) (about 16.8 million) sRGB colors to a small subset of only 32,274 colors.
Figure 11. Distribution of bits per pixel across the three components: base, metadata, and \(\Delta\). Left: BD; Right: our algorithm.
Figure 10. Bandwidth reduction over baselines.
SCC thus uses 15 bits to represent a color, reducing the storage cost compared to the original frames, but it is still much less efficient than BD, which is the canonical Base+Delta approach to compressing DRAM traffic in today's mobile SoCs. Compared to BD, we show a 15.6% (up to 20.4%) higher compression rate, because of our ability to exploit human color discrimination to reduce the magnitudes of \(\Delta\)s.
We get the least improvement over PNG. In two scenes, PNG actually has a higher compression rate. This matches prior results on BD (Zhao et al., 2017) and is not surprising -- to get a high compression rate PNG is computationally intensive and is meant to be used offline; see discussion in Sec. 5.3.
**Understanding Results.** Our improvement over BD comes from the fact that we require fewer bits to store the \(\Delta\)s. Fig. 11 shows the average number of bits per pixel required to store the base, metadata, and \(\Delta\)s in a tile. We compare the statistics between BD (left bars) and our scheme (right bars). It is clear that the space reduction comes from reducing the number of bits required to store the \(\Delta\)s.
To dissect how our scheme reduces the magnitude of \(\Delta\)s, Fig. 12 shows the distribution of tiles across the two cases in Fig. 6: \(\mathsf{HL}>\mathsf{LH}\) (c1) and \(\mathsf{HL}<\mathsf{LH}\) (c2). We observe that c2 is the more common case: 78.92% of tiles fall into it. In c2, there exists a common plane onto which all the color values can collapse. We can reduce the \(\Delta\) to 0 in these tiles, essentially eliminating the need to store \(\Delta\).
**Power Reduction.** We evaluate the power reduction under different resolutions and frame rates available on Oculus Quest 2. Fig. 13 shows the power savings under each combination over BD. Across all configurations, we reduce the power consumption by 307.2 \(mW\) on average. The power saving comes from the reduced DRAM traffic, net of the power overhead of the CAU encoding (201.6 \(\mu W\)).
Even on the lowest resolution and frame rate combination on Oculus Quest 2, we reduce the power consumption by 180.3 \(mW\), which translates to about 29.9% of the total power measured (using Oculus' OVR Metrics Tool (Cowley et al., 2017)) when rendering without compression. Under the highest resolution and frame rate combination, the power saving increases to 514.2 \(mW\). As resolution and frame rate will likely increase in future VR devices, the power benefits of our compression scheme will only increase.
### User Studies and Analyses
Fig. 14 shows the number of participants who did _not_ notice any artifact in each scene. On average, 2.8 participants (standard deviation 1.5) out of 11 total participants observe artifacts. This percentage is on par with prior color perception studies (Dong et al., 2018; Wang et al., 2018). We further interviewed the participants and identified three reasons why certain participants notice artifacts, all of which were orthogonal to the fundamental idea of this paper and actually point to optimization opportunities in _other_ parts of the system, which, when exploited, can be readily integrated into our work.
One participant who noticed subtle artifacts in three out of the six scenes was a visual artist with "color-sensitive eyes." Observer variation is a known phenomenon in vision science since the early days of color science research (Zhao et al., 2017; Zhao et al., 2017; Zhao et al., 2017). Given that color discrimination models in the literature all target the average case in the population, the results indicate that customizing/calibrating the model for individual users is likely a promising approach to reduce the artifact.
Another set of participants noticed artifacts only during rapid eye/head movement but not with a steady pose. This is likely attributed to external factors such as rendering lag or slow gaze detection, which is independent of our algorithm.
Finally, we found that no participant noticed any artifact in the fortnite scene, which is a bright scene with a large amount of green. Since our compression algorithm generally yields green-hue shifts (see examples in Fig. 9), artifacts are less noticeable in scenes that are green to begin with. In contrast, dumbo and monkey, both dark scenes, have the most noticeable artifacts. The results suggest, to the vision science community, the need for improving the color discrimination models in low-luminance conditions.

Figure 14. Number of participants (out of 11) who did _not_ notice any artifacts in each scene in our user study.

Figure 12. Distribution of the two cases c1 and c2.

Figure 13. Power saving over BD under the lowest and highest resolutions and four different frame rates on Oculus Quest 2.
**Objective Image Quality.** To show that subjective experience, which is the focus of our work, is _not_ equivalent to objective quality, we evaluate the Peak-Signal-to-Noise-Ratio (PSNR), a common objective quality metric, of all the compressed images. On average, the PSNR of the compressed videos is 46.0 dB (standard deviation 19.5); all but two scenes have a PSNR below 37. A PSNR value in this range usually indicates noticeable visual artifacts (Dosov et al., 2016), which is confirmed by our participants when they view the compressed images on a conventional display. This result accentuates the crux of our work: use human color perception in VR to guide a numerically lossy scheme (hence low PSNR) for higher compression rate with little subjective quality degradation.
### Sensitivity Studies
Our evaluation so far assumes a tile size of \(4\times 4\). We also evaluate our compression algorithm across different tile sizes; the results are shown in Fig. 15 along with BD. We observe that the compression rate drops once the tile size increases beyond \(4\times 4\) and can be worse than BD when the tile size is larger than \(8\times 8\).
The trend is the result of two opposing effects. On one hand, as we increase the tile size we can amortize the cost of storing the base pixels. On the other hand, larger tiles also present less opportunity to bring pixel values closer together, because we have to accommodate the worst-case (largest) difference between two pixels in a tile (Sec. 3.1).
### Discussions
To accommodate individual color perception in actual system deployments, one can perform a per-user color calibration procedure to build a per-user ellipsoid model. Such a procedure is laid out in prior work (Shi et al., 2017), and is readily doable. Such user-specific calibrations are routinely done when a user first uses an AR/VR product, e.g., adjusting the pair of displays to accommodate different inter-pupil distances among individuals. When a per-user ellipsoid model is available, our fundamental idea readily applies.
It is worth noting that we can, if need be, easily turn off our compression algorithm, which is intentionally designed as a plug-and-play stage between normal GPU rendering and existing BD compression (see Fig. 7). One scenario where one might want to turn off our compression is when a user has color vision deficiency (CVD). The color discrimination model that underlies our compression algorithm does not consider individuals with CVD. When such models for CVD become available, our fundamental idea readily applies.
## 7. Related Work
**Perception-Aware Rendering.** A host of recent work has focused on leveraging human perception to optimize AR/VR systems. Weier et al. provide a relatively recent survey (Weier et al., 2017). The most studied approach is foveated rendering, which reduces rendering resolution in the visual periphery (Shi et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Foveated rendering has been theoretically studied to reduce data transmission traffic in cloud rendering (Shi et al., 2017; Wang et al., 2017), but the decoding (reconstruction) cost is prohibitively high (e.g., need a complicated DNN). Our approach is orthogonal to foveated rendering in that we focus on adjusting colors rather than the spatial frequency, and works directly on top of the existing (BD-based) framebuffer compression framework without adding decoding cost.
**Color Perception in Systems Optimizations.** Color perception is most often leveraged to reduce display energy. To the best of our knowledge, this is the first paper that leverages color perception to reduce data communication energy.
Dong et al. (Dong et al., 2017), Crayon (Crayon et al., 2017), Dong and Zhong (Dong and Zhong, 2017) all leverage the human color discrimination to reduce OLED power, which is known to strongly correlate with color. Duinkharjav et al. (Duninkharjav et al., 2017) extend this approach to VR by quantifying the eccentricity dependent color discrimination. Recent work by Dash and Hu (Dash and Hu, 2017) builds an accurate color-vs-display power model. None focused on reducing data traffic. Shye et al. (Shye et al., 2017) and Yan et al. (Yan et al., 2017) leverage dark adaptation to reduce display power. Dark adaptation will likely weaken the color discrimination even more, potentially further improving the compression rate -- an interesting future direction.
**Data Traffic Optimizations in VR.** Data traffic reduction in VR has mostly been studied in the context of client-cloud collaborative rendering, i.e., reducing wireless transmission traffic. The pioneering Furion (Furion, 2017) system and later developments and variants such as Coterie (Coterie, 2017) and Q-VR (Dash and Hu, 2017) cleverly decide what to render locally vs. remotely. For instance, one could offload background/far objects rendering to the server and render foreground/near object interactions locally. EVR (Dash and Hu, 2017; Wang et al., 2017) predicts user FoV trajectory and pre-renders VR videos in the cloud. Our proposal is orthogonal to client-remote collaborative rendering, in that we focus on reducing the DRAM traffic that occurs within a local device.

Figure 15. Bandwidth reduction over NoCom under BD and our scheme with different tile sizes denoted by \(T_{n}\), where \(n\) is the tile size.
Zhang et al. [76] describe a BD design in encoding frame-buffer traffic. We directly compare against this approach and show up to 20% bandwidth savings. Zhang et al. [75] propose a content cache that exploits value equality in video decoding, which does not apply to encoding where strict equality is rare. Rhythmic Pixel Regions [34] drops pixel tiles to reduce DRAM traffic in a machine vision pipeline, whereas our focus is human visual perception in VR.
Any compression algorithm, ours included, exploits data similarities. Recent work leverages data similarities to speed-up rendering [77, 78, 66, 74] by eliding redundant computations (that compute same/similar data). These methods, however, do not reduce data traffic, which we do.
**General Memory Compression.** Exploiting value similarities to compress data traffic is a long-standing technique in architecture [47, 48]. Recent work in approximate computing extends compression to tasks that can tolerate slight data infidelity such as image processing [43, 53] and texture mapping in rendering [61, 70]. In comparison, this paper performs a principled "approximate compression" by 1) using a rigorous human perception model derived from psychophysical experiments and 2) formulating compression as a constrained optimization with an optimal solution (under necessary relaxations). Finally, we specifically target VR and, thus, exploit the eccentricity dependency of color discrimination that prior work does not consider.
## 8. Conclusion
Aggressively lossy compression in the numerical domain can achieve significant data traffic reduction with little perceptual quality loss in VR. The key is to leverage the (in)ability of human color discrimination to make pixel values more similar to each other. The resulting images thus permit more aggressive compression under the classic Base+Delta scheme to reduce DRAM traffic in a mobile SoC. We show that our compression algorithm has an analytical form, which, when accelerated by dedicated hardware, can achieve real-time compression. Future VR systems design must actively integrate human perception into the optimization loop.
|
2310.10652 | BRC-20: Hope or Hype | BRC-20 (short for Bitcoin Request for Comment 20) token mania was a key
storyline in the middle of 2023. Setting it apart from conventional ERC-20
token standards on Ethereum, BRC-20 introduces non-fungibility to Bitcoin
through an editable field in each satoshi (0.00000001 Bitcoin, the smallest
unit), making them unique. In this paper, we pioneer the exploration of this
concept, covering its intricate mechanisms, features, and state-of-the-art
applications. By analyzing the multi-dimensional data spanning over months with
factual investigations, we conservatively comment that while BRC-20 expands
Bitcoin's functionality and applicability, it may still not match Ethereum's
abundance of decentralized applications and similar ecosystems. | Qin Wang, Guangsheng Yu | 2023-08-31T02:59:52Z | http://arxiv.org/abs/2310.10652v1 | # BRC-20: Hope or Hype
###### Abstract
BRC-20 (short for Bitcoin Request for Comment 20) token mania was a key storyline in the middle of 2023. Setting it apart from conventional ERC-20 token standards on Ethereum, BRC-20 introduces non-fungibility to Bitcoin through an editable field in each satoshi (0.00000001 Bitcoin, the smallest unit), making them unique. In this paper, we pioneer the exploration of this concept, covering its intricate mechanisms, features, and state-of-the-art applications. By analyzing the multi-dimensional data spanning over months with factual investigations, we conservatively comment that while BRC-20 expands Bitcoin's functionality and applicability, it may still not match Ethereum's abundance of decentralized applications and similar ecosystems.
BRC-20, Token standard, Non-fungibility, HCI
## I Introduction
To date (August 2023), Bitcoin [1] has been operating successfully for 15 years. In terms of market capitalization1, it currently holds the position of the 10th largest asset globally (US$590.74b), just behind Berkshire Hathaway (US$773.17b). Moreover, within the cryptocurrency space, Bitcoin remains the dominant player, accounting for over 53% of the market share2, far surpassing the second-ranking crypto-asset ETH (19.1%, Ethereum native token [2]). Despite its dominance, applications leveraging or operating on Bitcoin have been scarce due to its UTXO data structure [3], limiting its extensibility. Fortunately, recent developments with the emergence of a Bitcoin-fitted standard may change this situation.
Footnote 1: Global ranking, [https://companiesmarketep.com/](https://companiesmarketep.com/) {August 2023}.\({}^{2}\)Cryptocurrency charts, [https://coinmarketep.com/charts/](https://coinmarketep.com/charts/) {August 2023}.\({}^{*}\)CSIRO Data61, Australia
BRC-20, or Bitcoin Request for Comment 20 [4], is modeled after the Ethereum token standard indexed with ERC-20 [5] and was introduced in March 2023 by an anonymous developer known as Domo [6]. BRC-20 is basically Bitcoin's version of ERC-20, even with some major caveats like a lack of smart contracts. The similar parts come from it being the first token standard defined in Bitcoin, while the key distinction is that BRC-20 incorporates non-fungibility features from ERC-721 [5], making it a hybrid standard encompassing both ERC-20 and ERC-721 functionalities.
In Ethereum, non-fungible tokens (NFTs) [7] are implemented through smart contracts, where each user is assigned a unique token ID to claim ownership of a specific asset, such as JPEG files or Crypto punk images, stored off-chain on a server. In contrast, BRC-20 tokens are created through processes called _ordinal_ and _inscription_ (cf. Sec.II), which involves adding data to identifiable satoshis (the smallest unit of Bitcoin, 0.00000001 BTC). This data can represent user-customized metadata, ranging from unique identifiers to images, and is stored on-chain. When BRC-20 tokens are transferred, the inscribed data on the satoshis is also transferred via transactions, allowing users to mint NFTs on the Bitcoin network.
BRC-20 has prominently emerged as a focal point within the Bitcoin network, commanding significant attention as underscored by an array of market indicators including Bitcoin's block size, mempool transactions, and transaction fees. During the fervor of the BRC-20 period spanning from early February 2023 to May 2023, several notable developments occurred [8]: (i) The average block size of Bitcoin experienced a substantial surge, leaping from 1.2MB to over 2MB. (ii) The volume of transactions within the memory pool demonstrated a consistent upward trajectory, nearing the 25,000 transaction mark. This contrasts with the relatively stable level of around 5,000 transactions that characterized much of 2022. (iii) Ordinal transaction fees exhibited a steady rise, concurrently driving an approximate 10% increase in non-Ordinal transaction fees throughout the entirety of March. (iv) The cumulative fees accrued from the minting of Ordinal Inscriptions have now surpassed the 150 BTC milestone. Beyond that, various associated platforms/tools have further contributed to this trend: (v) Statistical resources like Ordinal Wallet [9], UniSat [10], and Dune Analytics [11][12] also corroborate the upward trajectory in minted Ordinals.
**Gaps in user perception.** Despite BRC's remarkable achievements within a short timeframe, its awareness remains surprisingly low. Even among seasoned blockchain researchers and developers (as gathered through informal random surveys without recorded responses), it's evident that very few are acquainted with BRC, Bitcoin token standards, or Bitcoin NFTs. Moreover, our explorations also unveiled that existing resources are inadequate for newcomers. While there are initial introductions to the concept (cf. the _final_ paragraph of Sec.I), they largely focus on providing a basic operational overview without digging into the multifaceted aspects involved. This realization motivates our pursuit of understanding this intriguing yet "enigmatic" term, and discerning its essence as either a beacon of _hope_ or a product of _hype_.
**Our attempts.** We approach this via three fundamental pillars.
\(\Leftrightarrow\)_Systematical contributions_. We extensively dive into the available open-access resources, encompassing blogs, wikis, forum posts, news articles, Git repositories, and a limited number of scholarly works, based on which, we methodically organize and present a clear and concise understanding of _what BRC is_ and _how it functions_ (Sec.II), marking a pioneering step in current research. Our exposition commences with an exploration of the fundamental structure of Bitcoin (Sec.II-A)
and progresses to elaborate on distinctive aspects like ordinals (Sec.II-B) and inscriptions (Sec.II-C), forming pivotal procedures within the BRC operation.
\(\mathtt{c}^{\otimes}\)_Quantitative contributions._ We embark on a comprehensive series of quantitative investigations across multiple dimensions to unveil the genuine dynamics and sentiment prevailing within the market. Our approach involves a meticulous examination of the market performance (Sec.IV) of a carefully selected group of representative tokens--comprising three BRC-20 and five ERC-20 projects--spanning a period of four months from the ascent of BRC to the point of composing this study. This analysis encompasses an assessment of various factors including price fluctuations, duration of popularity, market capitalization, and daily transaction volumes. Subsequently, we delve into the user responses evident in social media platforms through tweets (Sec.V) featuring specific hashtags during a randomly chosen recent week. This investigation involves the scrutiny of post content, languages used, influencers contributing to discussions, and the identification of potential fraudulent activities. Additionally, we delve into the historical mainstream prices of tokens (Sec.VI), delineating the trajectory of each token wave to ascertain the presence of a potential new BRC-formed wave.
\(\mathtt{c}^{\otimes}\)_Qualitative contributions._ We conduct a qualitative exploration (Sec.VII) that involves juxtaposing BRC-20 against established token standards (Sec.VII-A). Through this comparison, we derive both the advantages (Sec.VII-B) and intrinsic limitations (Sec.VII-C) of BRC-20. Building upon these observations (together with quantitative results), we further compile a review of the actualities and misconceptions present within user perceptions (Sec.VIII-A), culminating in our proposed implications to mitigate these aspects (Sec.VIII-B).
**Our results.** We present a series of significant findings from each investigated section, which we synthesize in Tab.I. Additionally, we offer an assessment of the level of both _hope_ and _hype_ within the BRC-20 ecosystem. In this context, _hope_ signifies the potential for sustainable prosperity, whereas _hype_ denotes a surge in interest driven by arbitrage, often accompanied by a risk of overvaluation. Upon comprehensive evaluations, we observe a slight predominance of the _hype_ (34) aspect over the _hope_ (27) element. This suggests that a more cautious sentiment towards this new concept should be taken into consideration. Meanwhile, it's important to note that the benchmark for our analysis is ERC-based markets (including BNB Chain, Avalanche, etc.), which may lead to a certain level of imbalance when comparing Bitcoin-related markets.
\(\Delta\)**Limitations.** Our investigations have certain limitations with respect to data collection. First, we acknowledge the _limited scope of our token portfolio_, which may introduce bias into our results. This limitation arises from our focus on a selected group of representative tokens, potentially excluding relevant others. The rationale behind this selection is that many tokens and projects exhibit strong correlations that might not necessarily contribute significantly to the overall market trends. Additionally, some tokens possess relatively low market capitalization and therefore may have limited impact on the broader market dynamics. Second, our analysis is constrained by _the short timeframe of tweet data_ collection. Due to resource constraints (majorly costs and human efforts), we conducted investigations over a randomly chosen week of recent tweets. While this data snapshot may not capture the entire range of market sentiments, it can still provide a reasonably representative picture of recent market performance. Furthermore, our assessment is partially based on _subjective summaries_ and informal surveys. We remind the potential for slight inaccuracies in this analysis, particularly on the market side, which is influenced by a multitude of factors.
**Related sources.** Rodarmor [13] introduced a scheme for assigning serial numbers to Bitcoin satoshis. A relatively complete introduction to ordinal theory can be found at [14]. Binance Research published several early reports [4][8][15] that delve into the development of BRC-20. Investigating the impact of Bitcoin Ordinals on transaction fees, Bertucci [16] concluded that ordinal inscriptions tend to incur lower fees compared to regular transactions. In parallel, Kiraz et al. [17] presented an alternative approach to settling NFT trades on the Bitcoin blockchain using zero-knowledge proofs, distinct from the ordinal method. Additionally, various media outlets have offered accessible explanations of this emerging concept [18][19][20][21]. Trevor.btc et al. have provided detailed coverage of the development of Ordinals/BRC-20 and hosted "The Ordinal Show" [22] podcast. Readers keen on further exploration can conduct searches using relevant keywords such as _BRC-20_, _Bitcoin NFT_, and _Ordinals_, along with associated techniques covering _UTXO_[23], _Taproot_[24] and _SegWit_[25] (cf. Sec.II) and surrounding applications (Sec.II-D).
## II BRC-20 Construction
### _Preliminary: Bitcoin UTXO & Transaction Fundamentals_
TABLE I: Summary of findings.

We begin by introducing the fundamental concept of the Unspent Transaction Output (UTXO) model, which serves as the underlying framework for Bitcoin transactions. In this model (Listing 1), the outputs of one transaction become the inputs for subsequent transactions, creating a continuous chain of transactions without the need for traditional accounts.
```
Tx0 (output1: 0.5 btc) --> Tx2 (input1: 0.5 btc)
Tx2 (output1: 0.3 btc) --> Tx3 (input1: 0.3 btc)
Tx1 (output1: 0.2 btc) --> Tx2 (input2: 0.2 btc)
Tx2 (output2: 0.2 btc, coinbase, output3: 0.1 btc, coinbase)
Tx1 (output2: 0.1 btc)
```
Each transaction is composed of inputs and outputs, where inputs refer to the outputs of previous transactions. In the UTXO model, the term _fee_ is used to define the difference between the total input and output amounts, which is then given to the miner who includes the transaction in a block.
Security in Bitcoin transactions is upheld by locking and unlocking scripts. The locking script (or scriptPubKey) sets the conditions that must be met to spend the output. On the other hand, the unlocking script (or scriptSig) is provided by the spender to meet these conditions and spend the output. It's also important to remember that 1 Bitcoin (BTC) equates to \(10^{8}\) satoshis. Miners prioritize transactions with a higher fee rate (\(\text{fee\_rate}=\text{fee}/\text{size}\)), as the block size is typically restricted to approximately 1MB.
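As a toy illustration of the fee arithmetic, the sketch below mirrors the amounts in Listing 1; the transaction size is an assumed placeholder.

```
# Fee and fee-rate of a UTXO transaction (values are illustrative).
inputs = [0.5, 0.2]          # BTC consumed from previous outputs
outputs = [0.3, 0.2, 0.1]    # BTC assigned to new outputs
size_vbytes = 250            # serialized transaction size (assumed)

SATS_PER_BTC = 10**8
fee_sats = int(round((sum(inputs) - sum(outputs)) * SATS_PER_BTC))  # 0.1 BTC
fee_rate = fee_sats / size_vbytes                                   # sats per vbyte
print(fee_sats, fee_rate)    # miners prefer transactions with a higher fee_rate
```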
### _Bitcoin Ordinals: Tracking Every Satoshi_
The second key step is achieving field uniqueness in BRC-20 by leveraging Bitcoin Ordinals, which index each satoshi based on its mining order. For example, the first-ever mined satoshi in the genesis block is indexed as 0 and can be accessed at [https://ordinals.com/sat/0](https://ordinals.com/sat/0). Ordinals provide versatility with multiple representation formats:
* _Integer notation_: The ordinal number itself, reflecting the order in which the satoshi was mined. For example, 2099994106992659.
* _Decimal notation_: The block height at which the satoshi was mined, followed by the offset within the block. For example, 3891094.16797 (a conversion sketch is given after this list).
* _Degree notation_: The satoshi's position written with degree-style marks (cycle, blocks since the last difficulty adjustment, blocks since the last halving, and the offset within the block), such as \(3^{\circ}111094^{\prime}14^{\prime\prime}16797^{\prime\prime\prime}\).
* _Percentile notation_: The position of the satoshi in Bitcoin's total supply, expressed as a percentage. For example, 99.99971949060254%.
* _Name_: An encoding of the ordinal number using the characters "a"-"z", such as "satoshi".
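As an illustration of how the notations relate, the sketch below converts integer notation into decimal notation using the theoretical subsidy schedule (50 BTC per block, halved every 210,000 blocks). It is a simplified sketch of the ordinal bookkeeping, not the reference implementation, and the function name is ours.

```
def decimal_notation(ordinal):
    """Map an ordinal (integer notation) to decimal notation 'block.offset'."""
    subsidy = 50 * 10**8          # satoshis per block in the first epoch
    block = 0
    remaining = ordinal
    while subsidy > 0:
        epoch_sats = 210_000 * subsidy
        if remaining < epoch_sats:
            block += remaining // subsidy
            offset = remaining % subsidy
            return f"{block}.{offset}"
        remaining -= epoch_sats
        block += 210_000
        subsidy //= 2
    raise ValueError("ordinal exceeds total supply")

print(decimal_notation(0))            # "0.0"  (first satoshi of the genesis block)
print(decimal_notation(50 * 10**8))   # "1.0"  (first satoshi of block 1)
```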
The FIFO (First-In-First-Out) principle applies once a satoshi becomes part of a transaction. Suppose a transaction involves two inputs, each containing three satoshis, and an output containing four satoshis. In that case, the output will include the first four satoshis from the combined inputs. As in Listing 2, each "[...]" represents an input or output, and each satoshi is indexed with a character from "a" through "z".
Fees are handled similarly. If a transaction has two inputs, each containing two satoshis, and one output containing three satoshis, the output will comprise the first three satoshis from the combined inputs, and one satoshi will be used as a fee and assigned to a Coinbase transaction.
```
[a b c] [d e f] --> [a b c d] [e f]
[a b] [c d] --> [a b c]
coinbase tx: [SUBSIDY] [d] --> [SUBSIDY d]
```
Listing 2: Tracking the tagged satoshi - FIFO
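The FIFO rule of Listing 2 can be written as a few lines of code; this is a sketch with names of our own choosing.

```
def transfer_sats(input_ranges, output_sizes):
    """FIFO assignment of satoshis from a transaction's inputs to its outputs.

    input_ranges: list of satoshi lists, one per input (in order).
    output_sizes: number of satoshis in each output (in order).
    Returns (outputs, fee_sats): satoshis per output, plus any leftover
    satoshis, which are paid as the fee and land in the coinbase transaction.
    """
    pool = [sat for rng in input_ranges for sat in rng]   # first-in, first-out
    outputs, cursor = [], 0
    for size in output_sizes:
        outputs.append(pool[cursor:cursor + size])
        cursor += size
    fee_sats = pool[cursor:]
    return outputs, fee_sats

# The two examples of Listing 2:
print(transfer_sats([["a", "b", "c"], ["d", "e", "f"]], [4, 2]))  # ([[a,b,c,d],[e,f]], [])
print(transfer_sats([["a", "b"], ["c", "d"]], [3]))               # ([[a,b,c]], [d])
```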
Within Bitcoin Ordinals, another noteworthy innovation emerges in the form of _rare satoshis_[26], pursuing the most significant milestones in satoshis, similar to the iconic example of _Bitcoin Pizza_[27]. These satoshis can be distinctly identified as having been mined from specific blocks.
* _Common_: Any that is NOT the first satoshi of its block.
* _Uncommon_: The first satoshi of each block.
* _Rare_: The first of each difficulty adjustment period.
* _Epic_: The first satoshi of each halving epoch.
* _Legendary_: The first satoshi of each cycle.
* _Mythic_: The first satoshi of the genesis block.
### _Inscriptions: Embedding Messages in Satoshis_
The third crucial step involves incorporating personalized content into each unique satoshi. This concept is known as _Inscriptions_. Inscriptions leverage the Ordinals protocol, enabling the direct embedding of content (details in Tab.II) into a satoshi in the form of JSON (JavaScript Object Notation, also refer to Sec.III-A). This transformation effectively turns satoshis into NFTs, making them vessels for arbitrary data.
The data is stored within the segregated witness (SegWit [23]) section of a transaction. SegWit is a protocol upgrade that enhances scalability by modifying how data is stored in a block. In SegWit-enabled transactions, the transaction size used for fee calculation discounts witness data (each witness byte counts far less than a non-witness byte), which makes it comparatively cheap to embed large inscription content.
* _Numbered sequentially_. Each Inscription is systematically allocated a position as outlined by the Ordinal Theory. This introduces a distinct characteristic capable of conferring diverse levels of value upon distinct sequential creations, including Inscriptions minted following the block reward halving or the inaugural Inscription itself.
* _Scale limitation_. The Bitcoin block can accommodate a maximum of 4MB of data after the SegWit and Taproot upgrades. Considering that approximately 144 Bitcoin blocks can be mined daily, a total of about 210GB of space is available annually for Inscription minting (a single Inscription can occupy up to 4MB of space); a quick estimate follows this list. In contrast, NFTs based on smart contracts lack such limitations, theoretically allowing for unlimited minting.
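A quick back-of-the-envelope check of the annual capacity figure quoted above (figures taken directly from the bullet):

```
# Rough annual capacity for inscription content.
mb_per_block = 4            # ~4 MB of inscription data per block (post SegWit/Taproot)
blocks_per_day = 144        # roughly one block every 10 minutes
mb_per_year = mb_per_block * blocks_per_day * 365
print(mb_per_year / 1000)   # ~210 GB of inscription space per year
```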
### _Extension: ORC-20 and Surroundings_
**ORC-20.** ORC-20 [28], created by OrcDAO, is an open standard designed to enhance the capabilities of ordered tokens on the Bitcoin network. It ensures seamless backward compatibility with BRC-20. Unlike BRC-20, which necessitates a _one-time transfer inscription_ in each transaction, ORC-20 allows for the reusability of the mint and _send_ ordinal inscriptions within a transaction.
**Surroundings.** We also investigate a series of supporting applications that are relevant to BRC-20 (Tab.III).
## III BRC-20 on Bitcoin Networks
### _Implementing BRC-20_
The implementation is designed to address the incompatibility between the stateless UTXO-based model of Ordinals and the stateful account-based approach of BRC-20. At the heart of this reconciliation is the use of inscriptions to record state transitions, transforming these immutable markers into auditable proofs. This method hinges on the construction and maintenance of an _off-chain state indexer_, which records the balance of each account. Inscriptions on the Bitcoin network then serve as triggers to update these off-chain states. In essence, BRC-20 enables three primary functions.
\({}^{\circledRightarrow}\)_Deploy a new token_. The operation initiates the creation of a new BRC-20 token (Deploy, Listing 4). It begins on-chain with the inscription of a satoshi to represent the deployment. This inscription contains several crucial details such as the protocol name (_brc-20_), operation (_deploy_), token's name (_tick_), the total amount of tokens to be issued (_max_), and the maximum amount of tokens to be minted in each minting round (_lim_). After this inscription is added to the Bitcoin network, an off-chain process verifies whether a state already exists for the given token name. If not, a new state is created, with the balance of each account initialized to zero or a pre-defined value and the token's properties (those defined in Inscriptions) added to the state. The on-chain inscription structure and the off-chain update are listed below.
```
# On-chain Inscription
"p": "brc-20",      # protocol name
"op": "deploy",     # operation
"tick": "ordi",     # token name
"max": "21000000",  # total amount of tokens to be issued
"lim": "1000"       # max amount of tokens minted in each round

# Off-chain update
if state[tick] NOT exists:
    state[tick] = {"max": max, "lim": lim, "balance": {}}
```

\(\Leftrightarrow\)_Mint new tokens_. The operation issues tokens of a deployed ticker. The on-chain inscription carries the token name and the amount to mint (at most _lim_); the off-chain indexer checks that the token has been deployed and that the cumulative minted amount does not exceed _max_ before crediting the minter's balance.

\(\Leftrightarrow\)_Transfer tokens_. The operation moves a stated amount of tokens from a sender to a receiver, again via an inscription that triggers an off-chain balance update. The on-chain inscription structure and the off-chain update are listed below.

```
# On-chain Inscription
"p": "brc-20",      # protocol name
"op": "transfer",   # operation
"tick": "ordi",     # token name
"amt": "100"        # the amount of tokens being transferred

# Off-chain update
if state[tick] NOT exists:
    raise errors
if state[tick]["balance"][sender] >= amt:
    state[tick]["balance"][sender] -= amt
    state[tick]["balance"][receiver] += amt
```
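To consolidate the three operations, a minimal, runnable sketch of such an off-chain indexer is given below. The class and field names are our own, error handling is simplified, and the receiver of a transfer is passed explicitly rather than resolved from the on-chain movement of the inscribed satoshi.

```
class BRC20Indexer:
    """Minimal off-chain state indexer driven by on-chain inscriptions (sketch)."""

    def __init__(self):
        # tick -> {"max": ..., "lim": ..., "minted": ..., "balance": {address: amount}}
        self.state = {}

    def apply(self, owner, ins, receiver=None):
        op, tick = ins.get("op"), ins.get("tick")
        if op == "deploy" and tick not in self.state:
            self.state[tick] = {"max": int(ins["max"]), "lim": int(ins["lim"]),
                                "minted": 0, "balance": {}}
        elif op == "mint" and tick in self.state:
            tok, amt = self.state[tick], int(ins["amt"])
            if amt <= tok["lim"] and tok["minted"] + amt <= tok["max"]:
                tok["minted"] += amt
                tok["balance"][owner] = tok["balance"].get(owner, 0) + amt
        elif op == "transfer" and tick in self.state:
            bal, amt = self.state[tick]["balance"], int(ins["amt"])
            if bal.get(owner, 0) >= amt and receiver is not None:
                bal[owner] -= amt
                bal[receiver] = bal.get(receiver, 0) + amt

idx = BRC20Indexer()
idx.apply("alice", {"p": "brc-20", "op": "deploy", "tick": "ordi", "max": "21000000", "lim": "1000"})
idx.apply("alice", {"p": "brc-20", "op": "mint", "tick": "ordi", "amt": "1000"})
idx.apply("alice", {"p": "brc-20", "op": "transfer", "tick": "ordi", "amt": "100"}, receiver="bob")
print(idx.state["ordi"]["balance"])   # {'alice': 900, 'bob': 100}
```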
### _Operating BRC-20 (NFT) on Bitcoin_
**The PSBT standard.**PSBT, short for partially signed Bitcoin transactions, is a Bitcoin standard (BIP-174 [28]) that enhances the portability of unsigned transactions and enables multiple parties to easily sign the same transaction. A PSBT is created with a set of UTXOs to spend and a set of outputs to receive. Then, the information of each UTXO necessary to create a signature will be added. Once the PSBT is prepared, it can be copied to a program capable of signing it. For multi-signature wallets, this signing step can be repeated using different programs on separate PSBT copies. Multiple PSBTs, each containing one or more necessary signatures, will later be combined into a single PSBT. Finally, the fully signed PSBT can be broadcast via networks.
**Transaction workflow.** Building upon this standard, we present a complete cycle for trading a BRC-20 transaction.
\(\Leftrightarrow\)_Seller's Operation._ A seller uses a transaction to inscribe a satoshi, indicating a transfer operation of a certain amount of BRC-20 tokens (e.g., 1000 _ordi_). The inscribed satoshi manifests the seller's intent to sell the stated amount of tokens and carries detailed information, including the protocol name (_brc-20_), the operation (_transfer_), the token name (_ordi_), and the transfer amount (e.g., _1000_).
\(\Leftrightarrow\)_Creation of PSBT._ Next, the seller incorporates the inscribed satoshi as an input in a PSBT. To set the starting bid, the seller designates an output in the PSBT that transfers _0.2 BTC_ to the seller's own address. This action signifies the seller's intention to exchange 1000 _ordi_ tokens for _0.2 BTC_.
\(\Leftrightarrow\)_Publishing the PSBT._ Then, the seller publishes the PSBT to a marketplace, allowing potential buyers to review the transaction details and decide whether they wish to proceed.
\(\Leftrightarrow\)_Buyer's Operation._ If a buyer finds the 1000 _ordi_ package appealing, they can select and finalize this PSBT. It indicates the buyer is willing to complete the exchange by providing the required funds (_0.2 BTC_ in this case) and, in return, receiving the inscribed satoshi from the seller.
\(\Leftrightarrow\)_Finalizing the PSBT._ Upon completing the PSBT, the buyer broadcasts it to the Bitcoin network. This entails sending the transaction data to the network, where it will be included in a future block and ultimately confirmed. Once included in a block, the transaction becomes visible to all network participants and becomes irreversible.
\(\Leftrightarrow\)_Off-chain State Updates._ After the on-chain finalization of the PSBT, the off-chain states need to be updated to reflect the new balances of the buyer and the seller. The buyer's _ordi_ token balance increases by _1000_, while the seller's _ordi_ token balance decreases by the same amount. Simultaneously, the seller's Bitcoin balance increases by _0.2 BTC_, while the buyer's Bitcoin balance decreases accordingly.
It is worth noting that the protocol necessitates two on-chain transactions to finalize the Transfer operation, ensuring a secure settlement for the trade between sellers and buyers.
## IV Token Investigations over Months
**Investigation overview.** We have specifically selected representative projects, including the foremost three BRC-20 projects (ORDI, MOON, OSHI), each boasting a market capitalization3 surpassing US$10 million. Additionally, we include the top five ERC-20 projects (MATIC, SHIB, WBTC, DAI, LINK) each with a market capitalization4 exceeding US$4 billion. Our data spans a period of four months, commencing from April (prior to the BRC craze) and extending through August (the present date of this study's composition).
Footnote 3: Top BRC-20 coin explorer: [https://www.coingecko.com/en/categories/br](https://www.coingecko.com/en/categories/br) c-20 [Aug 2023].
Footnote 4: Top BRC-20 coin explorer: [https://coincodex.com/cryptocurrencies/secto](https://coincodex.com/cryptocurrencies/secto) r/thereum-erc20/ [Aug 2023].
### _Price and Marketcaps Trends_
As **price trends** unfold (cf. Fig.1(a)), BRC-20 related tokens, represented by ORDI and MOON, exhibit sharp price increases shortly after their launch. This rapid appreciation in price is indicative of a surge in demand, likely driven by heightened market interest in these new offerings. However, such rapid increases in price can also signal overvaluation, particularly if they are not backed by strong fundamentals.
In contrast, ERC-20 related tokens, with the exception of SHIB, tend to show more stable price trends. It suggests that these coins' prices are less likely to be influenced by short-term market sentiment and more likely to reflect their intrinsic value. In particular, stablecoins like DAI can serve as a reliable store of value in the often volatile crypto markets
Examining **marketcap trends** (see Fig.1(b)), we observe substantial expansion for BRC-20 coins subsequent to their introduction. This growth is not solely attributed to price escalation; rather, it signifies increased coin circulation, implying a burgeoning user community and broader adoption of these coins. However, akin to the price dynamics, rapid market capitalization growth bears a dual nature: it can signal a coin's promise, yet it might also signify hype if the acceleration is overly swift and lacks sustainability.
**Finding-IV.1**: _Users are rapidly entering the BRC market within the span of one month, but they may lose their enthusiasm in the subsequent months._ **Finding-IV.2**: _Compared to ERC-like tokens, BRC-based tokens constitute a small portion of the overall market size._
### _Average Return_
The **average return** is the mean percentage change in price and serves as an indicator of profitability: a higher average return means greater gains for an investor
who bought the coin at the beginning of the period and subsequently sold it. The chart illustrated in Fig.2(a) visually displays the mean returns of the three BRC-20 tokens (_blue_ bars) and the five ERC-20 tokens (_red_ bars). Evidently, the BRC-20 tokens, notably ORDI and MOON, demonstrate markedly higher average returns when compared to most ERC-20 tokens (possibly due to experiencing a high return rate during their initial launch rather than their current stable period). This suggests that over the observed duration, BRC-20 tokens may have presented an enhanced potential for profitability. It's worth noting that SHIB boasts a high return rate, aligning with the characteristics of memecoins like Dogecoin.
**Finding-IV.3**: _Certain BRC-20 tokens have demonstrated a remarkable return rate, often exceeding tenfold compared to equivalent tokens within the same period._
### _Volatility Analysis_
The concept of **volatility**, typically quantified as the standard deviation of returns, embodies a measure of risk: heightened volatility signifies greater price variability and consequently elevated risk. As depicted in Fig.2, we discern that, except for ORDI, the BRC-20 coins exhibit higher volatilities in comparison to the majority of ERC-20 coins. This implies that throughout the assessed period, BRC-20 coins might have entailed increased risk, which aligns with the earlier insight that BRC-20 coins also yielded superior returns, reinforcing the tenet that elevated returns are often accompanied by elevated risk. Conversely, with the exception of SHIB, the remaining ERC-20 tokens manifest greater stability, characterized by a narrower range of price fluctuations. We postulate that SHIB's substantial and abrupt fluctuations may stem from its memecoin attributes, rendering it particularly sensitive to market dynamics, such as significant movements instigated by prominent market participants.
Fig. 1: Comparison on trends
**Finding-IV.4**: _BRC-20 tokens showcase elevated volatilities and associated risks, aligning with their substantial returns._
### _Performance Analysis_
In our evaluation, we examine the tokens' **performance** using the Sharpe ratio5 [29], a risk-adjusted return metric, to assess the efficacy of BRC-20 and ERC-20 tokens. The outcomes presented in Fig.2 reveal a diverse spectrum of Sharpe ratios among the chosen tokens, signaling varying levels of risk and return within the two categories: DAI displays a significantly negative value, while others like SHIB and WBTC exhibit modest positive ratios. A negative Sharpe ratio might be indicative of a high-risk, low-reward scenario, often associated with market hype and speculative trading. On the other hand, a positive Sharpe ratio could signal a more balanced risk-reward profile, hinting at the genuine potential or "hope" in the investment. The presence of these dynamics in BRC-20 markets suggests a complex landscape where both hope and hype coexist.
Footnote 5: Calculated as \(\text{Sharpe Ratio}=\frac{\text{Average Return}-\text{Risk-Free Rate}}{\text{Standard Deviation of Returns}}\).
**Finding-IV.5**: _BRC-20 tokens demonstrate heightened return rates alongside increased risks, with both the absolute values surpassing those observed in ERC-like tokens._
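To make the three indicators above concrete, the following sketch computes them from a series of daily closing prices. It assumes pandas is available, treats the risk-free rate as zero by default, and uses illustrative names and toy values rather than our actual analysis scripts.

```python
import pandas as pd

def market_indicators(prices: pd.Series, risk_free_rate: float = 0.0) -> dict:
    """Average return, volatility, and Sharpe ratio from daily closing prices."""
    daily_returns = prices.pct_change().dropna()  # daily percentage change
    average_return = daily_returns.mean()         # profitability indicator
    volatility = daily_returns.std()              # risk indicator (std of returns)
    sharpe_ratio = (average_return - risk_free_rate) / volatility
    return {"average_return": average_return,
            "volatility": volatility,
            "sharpe_ratio": sharpe_ratio}

# Toy example with five daily closes.
print(market_indicators(pd.Series([1.00, 1.05, 0.98, 1.10, 1.20])))
```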
### _Correlation Analysis_
The **correlation matrix** analyzing the daily returns yields insights into the relationships among chosen assets (Fig.3). Among the BRC-20 tokens (ORDI, MOON, and OSHI), their correlation coefficients with each other are notably elevated, indicating a robust positive linkage in their price movements. This suggests that BRC-20 tokens, as a collective, tend to exhibit synchronous shifts, possibly due to shared market perception, common underlying methodologies (rooted in ordinals), or interdependencies within the ecosystem (such as shared developers/buyers). The pronounced correlations within BRC-20 group highlight their lack of independence in a portfolio context, a crucial factor to consider in devising strategies.
Among the ERC-20 tokens (MATIC, SHIB, WBTC, DAI, and LINK), the correlation coefficients also generally exhibit positivity, albeit with less intensity compared to the BRC-20 tokens. This disparity could stem from the more established and diverse landscape of the ERC-20 token market, encompassing a wider spectrum of blockchain applications.
A comparison between these two categories unveils discernible variations in correlation coefficients. While some movements overlap, distinctive traits remain. For instance, BRC-20's ORDI demonstrates a strong positive correlation with ERC-20's LINK and WBTC, indicating a similar response to market conditions. In contrast, BRC-20's MOON exhibits a lower correlation with these ERC-20 tokens, implying distinct market dynamics at play.
Fig. 3: Correlation
Fig. 2: Evaluations on prevalent BRC-20 and ERC-20 projects
**Finding-IV.6**: _BRC-20 tokens exhibit strong positive correlations among themselves, stronger than those within ERC-like tokens. The correlations between BRC-20 and ERC-20 tokens, however, display relatively weak connections._
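The correlation analysis can be reproduced in the same spirit; a minimal sketch with toy values and illustrative column names (not our actual data):

```python
import pandas as pd

# Toy daily closes, one column per token (illustrative names and values).
closes = pd.DataFrame({
    "ORDI": [10.0, 11.0, 10.5, 12.0, 11.8],
    "MOON": [0.30, 0.33, 0.31, 0.36, 0.35],
    "WBTC": [29000.0, 29100.0, 28900.0, 29300.0, 29250.0],
})
daily_returns = closes.pct_change().dropna()
print(daily_returns.corr())  # pairwise Pearson correlation of daily returns
```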
### _Usage Trend_
We proceed to compare **daily Bitcoin transactions** with **Ordinal inscriptions** (BRC-20 tokens) as depicted in Fig.4. The findings reveal a steady growth in the volume of Ordinal inscriptions (orange segments in bars). The cumulative count of Ordinal inscriptions (green line) exhibits a clear upward trajectory, indicating a progressive surge in the utilization and adoption of BRC-20 tokens over time.
However, the growth of Ordinal inscriptions should not be viewed in isolation. Non-ordinal Bitcoin transactions (blue segments in bars) still form a significant portion of daily transactions. This suggests that while BRC-20 tokens are gaining traction, traditional Bitcoin transactions remain prevalent.
**Finding-IV.7**: _Bitcoin inscriptions have witnessed consistent growth, yet they still represent a minor fraction of daily transactions within the overall network activity._
## V Sampled Sentiment Investigations
**Investigation overview.** Our experiments involve gathering public tweet data from a randomly selected week (August 5th to August 9th, 2023) to delve into the prevailing perceptions and attitudes toward BRC-20. The data gathered in our experiments amounts to approximately 2 megabytes and spans interactions with around 4,112 tweet users that mentioned the hashtag #brc20 or similar. We opted for this particular week as it closely aligns with the timeframe of paper composition.
### _Sentiment Reactions_
We also analyze the **user sentiment** and the public perception of BRC-20 and Ordinals.
Fig.5 reveals a largely neutral sentiment across all metrics - users, tweets, and potential impact - with positive sentiment following closely behind. This distribution could be indicative of a cautiously optimistic stance toward these tokens. However, negative sentiment is minimal, comprising less than 1% in all cases. The minimal presence of undefined sentiment suggests that most discussions are clear.
Fig.6 (time series) illustrates the daily sentiment counts, showing that neutral sentiment is consistently the most prevalent, followed by positive sentiment. Negative sentiment remains relatively low throughout the investigated period. A noticeable spike in undefined sentiment around August 7 might suggest a moment of uncertainty or controversy in the discourse, but it was short-lived.
The sentiment analysis suggests that the BRC-20 and Ordinals are currently viewed more with hope than hype. The dominance of neutral and positive sentiments, coupled with the minimal negative sentiment, indicates a generally optimistic perception. Nonetheless, since our investigation timeframe is relatively brief and sentiment tends to oscillate with market dynamics, maintaining continuous monitoring would be prudent to observe any shifts in public opinion.
**Finding-V.1**: _Users who are inclined to express opinions have a non-negative attitude towards BRC-related concepts._
### _Tweets with Relevant Hashtags_
#### V-B1 **Tweet stats**
We conduct an examination of Twitter data surrounding BRC-20. Notably, most contributors opt for web-based tweeting (Fig.7(b)), indicating a higher level of attention spent when accessing BRC-20 content compared to mobile users. Furthermore, the distribution of tweet data is well-balanced (Fig.7(a)), supported by the fact that the majority of contributors post just one tweet. This minimizes the potential for biased outcomes stemming from excessive tweeting by a single individual.
Fig. 4: Daily count of BRC-20 upon Bitcoin transactions
Fig. 5: Sentiment distribution
Fig. 6: Sentiment actions count by Tweets
As established above, the predominantly neutral sentiment observed across users, tweets, and impact suggests a cautiously optimistic view within the community (Fig.6). This relative optimism is reinforced by the fact that the majority of tweets are long (160 to 200 characters, Fig.7(d)).
The diversity in the age of Twitter accounts engaged in the conversation, ranging from newly created to those over six years old, reveals an appeal that transcends different segments of the community (Fig.7(c)). The broad international interest, as evidenced by the primary languages being English and Chinese, underlines the global appeal of BRC-20 (Fig.7(e)).
In terms of influence, the participation across various follower counts, from micro-influencers to major influencers, highlights an inclusive conversation that extends beyond a niche audience (Fig.7(f)). The consistency in engagement, regardless of the number of followers, adds credibility to the BRC-20 conversation.
**Finding-V.2**: _BRC-20 appeals to users across various regions and age groups._
#### V-B2 **(Non-)Scam among users**
We first analyze the relationship between user types (normal users vs influencers) and tweet types (scam vs non-scam) in the context of the BRC-20 hashtag. Users were categorized based on specific criteria: influencers were identified from the "Most Popular" and "Highest Impact" lists, while normal users were those not listed as influencers. Tweets were classified as scams or non-scam based on the presence of certain keywords, repeated messages, and patterns indicative of pyramid selling.
Fig.9 unveils a significant distinction between influencers and normal users. While influencers posted fewer tweets overall, a higher proportion of their tweets were classified as scams. This suggests that many influencers may be leveraging their popularity to engage in questionable practices like pyramid selling, possibly with the intent to manipulate the market or deceive followers. The content of their tweets may not reflect a genuine interest in BRC-20, indicating a potential agenda to exploit the hype surrounding the cryptocurrency.
In contrast, normal users predominantly engaged in non-scam tweets, contributing to informative and meaningful discussions about BRC-20. Their engagement pattern reflects a genuine interest in the subject, possibly even involving actual exchange processes of BRC-20. The higher volume of non-scam tweets among normal users reflects authentic interests in BRC-20, unlike the controlled narrative pushed by influencers.
**Finding-V.3**: _While BRC-20 carries the risk of artificial manipulation, the dominant influence remains within legal and constructive boundaries._
Fig. 8: Popular users with the highest impact.
Fig. 7: Tweet stats related to BRC-20
## VI Investigation Compared to Historical Peaks
**Investigation overview.** We conduct an examination of historical crypto market data spanning ten years from 2013 to 2023, encompassing nine prominent tokens including BTC, LTC, DOGE (BRC-type), ETH, BNB, AVA (ERC-type), USDT, USDC, and BUSD (stablecoin). By correlating this historical data with major real-world market waves, we aim to discern if the peaks or prosperity of each market coincide with significant narratives. This macroscopic analysis provides insights into whether BRC represents a genuine wave in tokenomics.
### _Tokenwaves in History_
Based on these price trends, several notable waves in the token market are obtained. The initial peak, predating 2013, can be attributed to the flourishing crypto market driven primarily by the fervor surrounding Bitcoin mining activities and its PoW mechanism [30]. As a pioneering force, Bitcoin's impact set the stage for the valuation of the entire cryptocurrency landscape. Following this, the subsequent peak around 2017 aligns with Ethereum's development, sparking a surge in _initial coin offerings_ (ICOs) [31]. ICOs facilitated fund-raising by exchanging Ethereum (ETH) via the ERC20 standard [5] for native tokens of various projects, thereby attracting widespread user engagement and diverse investments. This wave was later succeeded by _initial exchange offerings_ (IEOs) [32] and analogous _initial development offerings_ (IDOs) [33].
Following a two-year cooling-off period, a notable resurgence took place in mid-2020, characterized by the rise of _decentralized finance_ (DeFi) [34]. DeFi encompasses a range of on-chain financial protocols that mirror traditional market functions, including lending, borrowing, contracts, leverage, and securities. Subsequently, starting in 2021, the spotlight shifted to _non-fungible tokens_ (NFTs) [7] within the Ethereum ecosystem. These distinct digital assets are utilized to represent ownership or validate authenticity for digital artworks, collectibles, and virtual real estate. This trend was further propelled by subsequent developments like the _play-to-earn_ concept [35] and the growing influence of _Web3_ [36] in 2022. As we progress into 2023, the continued activity in the token space remains evident with the deployment, minting, and transfer of inscriptions on the Bitcoin network via _BRC-20_ [4].
**Finding-VI.1**: _BRC-20 appears to be emerging as a new narrative during 2023, propelling a fresh tokenwave._
### _Comparison and Correlation_
We observed a common movement pattern among most tokens (both BRC-like and ERC-like) except for stablecoins (including USDT, USDC, BUSD). This suggests that token prices are intrinsically interconnected and are influenced by dominant tokens like BTC and ETH. Stablecoins, on the other hand, exhibit a distinct trend, remaining independent of market tokens and maintaining stable values pegged to the US dollar. The broader wave of tokenomics appears to have minimal impact on their fundamental value, except in cases like the Luna-UST collapse [37] where major design flaws were evident. We can infer that the surge in Bitcoin prices during the BRC's popularity period indirectly amplifies positive sentiments across the entire token market.
**Finding-VI.2**: _The patterns of BRC-20 waves align with broader trends observed in the cryptocurrency market._
## VII Investigation From Inherent Features
**Investigation overview.** In contrast to previous quantitative measurements, this section presents a qualitative evaluation from three perspectives: a comparison with other standards, its positive attributes and impacts, as well as notable limitations that must not be disregarded.
### _Compare with Existing Standards_
The majority of token standards in competitive blockchains (summarised in Tab.IV), such as BEP-20/721 (BNB Smart Chain), ARC-20/721 (Avalanche), and XRC-20/721 (XDC Network [38]), draw inspiration from the Ethereum repository. These ERC-like standards share common attributes, adhering to the 20-track standard for fungibility and the 721-track standard for non-fungibility. NFTs in these chains possess programmable smart contracts, allowing for limitless issuance.
Contrastingly, BRC-like standards [39, 40] integrate uniqueness into transaction payloads, stemming from their limited units (sats). This results in non-fungible tokens being transacted through a combination of regular transactions and specific operations. On the flip side, ERC-like standards achieve distinctiveness via a parameter called the token ID in functions (cf. Algm.1), potentially utilizing various functions in an upper-layer operation. This gives rise to diverse token standards with features like 1155/3525/3475. Transfers within this framework rely on state transitions facilitated by contracts operating on the chain. We present more differences in Tab.V.
Fig. 10: Cryptocurrency prices across the years
Fig. 9: (Non-)Scam tweets among users
This divergence also translates into disparities in popularity. ERC-compatible chains thrive with active developers and Dapps, attracting a larger user base. Conversely, BRC-like chains often grapple with a dearth of active developers, hampering the initiation of innovative approaches.
```solidity
// Algm. 1: excerpt of the ERC-721 interface; uniqueness is carried by the tokenId parameter.
interface ERC721 {
    function ownerOf(uint256 _tokenId) external view returns (address);
    function transferFrom(address _from, address _to, uint256 _tokenId) external payable;
    // ...
}
```
**Finding-VII.1**: _BRC-20 stands out distinctly from ERC-like standards due to its structure, leading to a shortage of active developers and on-chain applications._
### _Advantages To be Highlighted_
**System stability.** Stability is primarily dependent on the network of distributed miners and their commitment. Augmented stability is achieved through two primary avenues.
* _New players._ As explained in Sec.II, the tracing of each satoshi requires the utilization of ORD software. This means that despite the availability of user-centric solutions like Ordinals markets, individuals wanting full control over the entire Ordinal procedure and the creation of an Inscription must operate a Bitcoin full node (rather than a lightweight node). This element, among others, has led to a marked rise in accessible Bitcoin nodes. The more active Bitcoin full nodes there are, the greater the decentralization of the Bitcoin network becomes.
* _Increased revenue._ The incorporation of ordinal inscriptions intensifies congestion within the Bitcoin blockchain, leading to an upward trajectory in fees and bolstering miners' earnings. This provides miners with advantages and enhances their commitment to the system. This advancement holds promise for the long-term sustainability of the Bitcoin blockchain, as its viability heavily relies on substantial transaction fees. The introduction of supplementary application layers, like ordinals, holds the potential to sustain heightened congestion. This, in turn, alleviates concerns about liquidity shortages or inadequate transaction volumes.
**Infrastructure construction.** Driven by BRC, Bitcoin's advancements in infrastructure and DApps have also led to substantial progress (also refer to Tab.III). Notably, Bitcoin wallets like Hiro and Xverse have rapidly expanded their support for BRC-related protocols, swiftly introducing products such as the BRC Explorer. Additionally, even the Bitcoin NFT market, traditionally centered around Stacks-based projects, has undergone a transformation with the recent launch of Gamma's Ordinals marketplace. Following closely, Magic Eden introduced its Bitcoin NFT marketplace. Esteemed NFT studios such as Yuga Labs and DeGods have also joined this movement, unveiling Ordinals-based projects within the past month. This surge in innovation is not confined to Bitcoin's base layer; it's equally evident within Bitcoin Layer2 solutions like the Lightning Network, Liquid, Rootstock, and Stacks.
**Finding-VII.2**: _The emergence of BRC enhances system stability and fosters the development of complementary tools._
### _Limitations Cannot Be Ignored_
**Costly.** We noted that the protocol requires two on-chain transactions to complete the transfer operation, which is costly and less user-friendly; additionally, the granularity of exchanges is limited to the amounts defined in each PSBT. Moreover, the inscribed satoshis become invalid after use, necessitating the inscription of a new satoshi for each new transaction, which deviates from the original concept of Ordinals as long-lasting, meaningful inscriptions.
**Increased fees.** Similarly, analyzing transaction fees spanning December 2022 to April 2023, as outlined in [16], it's evident that significant total fees accrue with substantial inscriptions, signifying larger transactions. Importantly, a clear positive correlation emerges between Bitcoin ordinal inscriptions and transaction fees across diverse transactions, which contributes to the overall block fees. Consequently, the integration of ordinal inscriptions amplifies congestion within the Bitcoin blockchain, resulting in an upward trajectory of fees. This will raise concerns for regular users.
**Stateless.** BRC-20 and Ordinals continue to grapple with the challenge posed by the inherently stateless nature of UTXO transactions and the state models that most applications demand. The advancement of these protocols and their capacity to accommodate more comprehensive functionalities, including a versatile virtual machine, hinges on the ongoing market incentives and the sustained appreciation of coin value.
\begin{table}
\begin{tabular}{l|c|c|c c c|c|c}
**Standard** & **Time** & **Network** & \multicolumn{3}{c|}{**Properties**} & **Transfer** & **Application** \\ \hline
ERC-20 & 2015 & Ethereum & ✓ & ✓ & ✓ & ✓(Tx) & Currency \\
ERC-721 & 2017 & Ethereum & ✗ & ✗ & ✗ & ✓(SC) & NFT \\
ERC-1155 & 2018 & Ethereum & ✗ & semi & ✗ & ✓(SC) & Game \\
ERC-3525 & 2022 & Ethereum & ✗ & semi & ✓ & ✓(SC) & Equity \\
ERC-3475 & 2022 & Ethereum & ✗ & semi & n/a & ✓(SC) & Equity \\ \hline
BEP-20 & 2021 & BSC & ✓ & ✓ & ✓ & ✓(Tx) & Currency \\
BEP-721 & 2022 & BSC & ✗ & ✗ & ✗ & ✓(SC) & NFT \\
ARC-20 & 2022 & Avalanche & ✓ & ✓ & ✓ & ✓(Tx) & Currency \\
ARC-721 & 2022 & Avalanche & ✗ & ✗ & ✗ & ✓(SC) & NFT \\
XRC-20 & 2023 & XDC & ✓ & ✓ & ✓ & ✓(Tx) & Currency \\
XRC-721 & 2023 & XDC & ✗ & ✗ & ✗ & ✓(SC) & NFT \\ \hline
DRC-20 & 2023 & Dogecoin & ✓ & ✗ & ✗ & ✓(Tx) & NFT \\
LTC-20 & 2023 & Litecoin & ✓ & ✗ & ✗ & ✓(Tx) & NFT \\
**BRC-20** & 2023 & Bitcoin & ✓ & ✗ & ✗ & ✓(Tx) & NFT \\ \end{tabular}
\end{table} TABLE IV: Comparison with competitive standards
**Centralization.** The escalating size of the Bitcoin network may discourage users from running their own nodes due to the increasing requirements for downloading a full copy of the chain. Currently, most BRC-20 wallets necessitate running a full node, a practice not commonly embraced by regular users. As a result, users resort to third-party APIs, potentially creating centralized security vulnerabilities. Although different indexers can be connected for cross-validation, this requires additional steps and understanding from the users' side.
**Meme-nature.** Presently, a significant portion of the BRC-20 tokens in circulation, such as ORDI, PEPE, MOON, and others, predominantly belong to the category of meme coins. Due to the absence of consensus among communities and the lack of support for smart contracts, these tokens offer minimal practical utility and are notably swayed by trends in social media sentiment. Although this phenomenon sparks speculative interest, the tokens' limited functionality and the consequent dearth of a robust holder base suggest a potential vulnerability to abrupt, unforeseen value declines.
**Finding-VII.3**: _BRC brings network congestion, leading to increased reliance on centralized tools and rising fees. Additionally, it retains inherent limitations in extensibility._
## VIII User Perception and Implications
### _Reality and Misconceptions in User Perception_
**Realities.** Based on aforementioned investigations, several realities emerge from user perceptions in the BRC-20 landscape.
* _Genuine interest in BRC-20._ Users exhibit an enthusiastic interest in novel crypto concepts, actively participating in discussions (**V.0**) across social media platforms (**V.0**). They demonstrate their commitment to the market by investing time and funds, which is reflected by its market performance (**IV.0**) and a trend of tokenwaves (**VI.0**).
* _Noteworthy capital returns._ BRC-20 tokens present a remarkable market performance, showcasing substantial returns (**IV.0**) that outpace the performance of equivalent tokens in other categories (**IV.0**&**C0**).
* _Interconnected ecosystem._ The BRC-20 ecosystem reveals an interconnected network of tokens (**IV.0**), indicating a close interdependence of user perceptions and behaviors within this specific subset of tokens.
* _Driving innovation._ Moreover, the advent of BRC-20 has acted as a catalyst for driving innovation in the field, leading to the development of complementary tools and contributing to the overall stability of the system (**VII.0**).
**Misconceptions.** In contrast, our investigations have also uncovered certain misconceptions.
* _Ephemeral enthusiasm._ User enthusiasm for new concepts often follows a cyclical pattern of initial excitement followed by a potential decline in engagement (**IV.0**), particularly if immediate benefits are not realized (**IV.0**&**C0**).
* _Limited market size._ BRC-related markets still occupy a relatively small share compared to larger markets like Bitcoin's daily transaction volume (**IV.0**) or the market capitalization of ERC-like tokens (**IV.0**).
* _Dependency on dominance._ Much like derivative tokens, the trend of many BRC-20 tokens appears to be influenced by a select few dominant projects such as Ordi (**VI.0**), as well as social influencers (**V.0**).
* _One-Sided development._ The majority of developed tools are built upon existing data sources like web browsers or account-related APIs, rather than introducing novel logical innovations like those found in smart contracts--reflecting an inherent limitation (**VII.0**&**C0**).
### _Towards Enhancement_
**Improving user awareness by education.** Our investigations revealed a prevalent lack of understanding among both non-professional and professional users regarding the fundamental concepts of BRC, and even the operational intricacies of Bitcoin itself, let alone the workings of BRC within the Bitcoin network. This limited comprehension leads to sparse discussions in public channels, with mere hundreds of tweets about BRC compared to thousands about Ethereum or even millions about a popular singer's new song. Among these mentions, the majority remain superficial, lacking substantive content. To enhance users' awareness and understanding of BRC and Bitcoin NFTs, two viable approaches stand out. Firstly, the establishment of an educated community through platforms like MOOCs and easily accessible YouTube videos could be pivotal. Open forums could address security concerns, while independent implementations on GitHub channels could offer potential solutions. For instance, BRC has already been explained by prominent companies and media outlets such as Binance. Secondly, facilitating the creation of competing BRC services and third-party tools by developers can yield quick responses to user needs, encompassing NFT-related functions such as creation, purchase, auction, and exchange, particularly for technical users. Several third-party tools have already emerged to aid users and improve the BRC experience.
**Encouraging communities for further engagement.** Independent tools and services have consistently occupied a prominent position within the space of BRC-20 and its associated communities. Diverse applications have been developed to enhance the user experience of BRC-based products. For instance, volunteers from Cryptokoryo [11] and Datalaways [12] have created statistical services that showcase insightful trends related to Ordinals, Inscriptions, and BRC-tokens. Additionally, various media outlets provide dedicated sections that succinctly summarize the latest news relevant to BRC. BRC explorers have also been implemented to track real-time price fluctuations. These tools significantly contribute to increasing user understanding of basic mechanisms while alleviating concerns about potential drawbacks. The seamless integration of third-party tools with other existing services, in particular DeFi protocols [41] and cross-chain technologies [42], adds value and has the potential to enhance adoption.
\begin{table}
\begin{tabular}{c l l} \hline \hline
 & **Bitcoin NFT** & **Other NFTs** \\ \hline
**Protocol form** & Ordinal & ERC-721, ERC-1155, SPL \\
**Description storage** & Inscription & NFT \\
**Content location** & Entirely on-chain & Partially on IPFS/Arweave \\
**Code update** & Not allowed & Depends on contract code \\ \hline
**Mining/Trading** & Not possible without a node; done via third-party services & Mostly direct interaction with the webpage \\ \hline
**Extensibility** & Difficult due to Bitcoin's limited scripting & Easier due to programmable smart contracts \\
**Consumption** & High due to PoW consensus & n/a \\ \hline
**Pros** & Scarcity, rarity-aware; low block speed, no bulk mining & Mainstream contract mode, high user base \\
**Cons** & Difficulties in mining/trading; wallet entry is complex & No special gimmicks or fame, easily overlooked \\ \hline \hline \end{tabular}
\end{table} TABLE V: NFT Comparisons
**Attracting new attention.** BRC-20 also draws inspiration from the NFT landscape, which has demonstrated remarkable growth over the past couple of years. Users who have actively engaged in NFT trading and gaming activities (such as minting, participating in airdrops, etc.) are likely to exhibit an inherent curiosity about exploring BRC NFTs, provided there are no significant barriers. It would be prudent for BRC developers to offer tools that facilitate interoperability across various blockchain ecosystems, including Ethereum, Polygon, Binance Smart Chain, and Avalanche. Compared to new users entering from traditional markets, those migrating from established but still-maturing Web3 ecosystems offer a vast and readily accessible user base.
## IX Conclusion
In this paper, we examine the novel concept of BRC-20. We elucidate its operational mechanisms and empirically conduct a range of tangible investigations, encompassing market performance and user sentiment. Recognizing that user perception plays a pivotal role in shaping the nature of BRC, we subsequently explore the dichotomy between hope and hype, both of which significantly influence that perception. Our findings lead to the conservative conclusion that, while BRC-20 represents a promising inception within the Bitcoin ecosystem, it may not attain the same level of maturity as ERC-like ecosystems.
|
2309.04750 | Mirror-Aware Neural Humans | Human motion capture either requires multi-camera systems or is unreliable
when using single-view input due to depth ambiguities. Meanwhile, mirrors are
readily available in urban environments and form an affordable alternative by
recording two views with only a single camera. However, the mirror setting
poses the additional challenge of handling occlusions of real and mirror image.
Going beyond existing mirror approaches for 3D human pose estimation, we
utilize mirrors for learning a complete body model, including shape and dense
appearance. Our main contributions are extending articulated neural radiance
fields to include a notion of a mirror, making it sample-efficient over
potential occlusion regions. Together, our contributions realize a
consumer-level 3D motion capture system that starts from off-the-shelf 2D poses
by automatically calibrating the camera, estimating mirror orientation, and
subsequently lifting 2D keypoint detections to 3D skeleton pose that is used to
condition the mirror-aware NeRF. We empirically demonstrate the benefit of
learning a body model and accounting for occlusion in challenging mirror
scenes. | Daniel Ajisafe, James Tang, Shih-Yang Su, Bastian Wandt, Helge Rhodin | 2023-09-09T10:43:45Z | http://arxiv.org/abs/2309.04750v2 | # Mirror-Aware Neural Humans
###### Abstract
Human motion capture either requires multi-camera systems or is unreliable using single-view input due to depth ambiguities. Meanwhile, mirrors are readily available in urban environments and form an affordable alternative by recording two views with only a single camera. However, the mirror setting poses the additional challenge of handling occlusions of real and mirror image. Going beyond existing mirror approaches for 3D human pose estimation, we utilize mirrors for learning a complete body model, including shape and dense appearance. Our main contributions are extending articulated neural radiance fields to include a notion of a mirror, making it sample-efficient over potential occlusion regions. Together, our contributions realize a consumer-level 3D motion capture system that starts from off-the-shelf 2D poses by automatically calibrating the camera, estimating mirror orientation, and subsequently lifting 2D keypoint detections to 3D skeleton pose that is used to condition the mirror-aware NeRF. We empirically demonstrate the benefit of learning a body model and accounting for occlusion in challenging mirror scenes. The project is available at: [https://danielajisafe.github.io/mirror-aware-neural-humans/](https://danielajisafe.github.io/mirror-aware-neural-humans/).
## 1 Introduction
Estimating detailed 3D geometry of a moving person from a single video is a long-standing goal. Learning-based solutions can succeed when trained on 3D labels from the target domain or when multiple 2D views are available for supervision [28, 31, 35, 36, 37, 42, 46]. However, multi-view capture is expensive and tedious to calibrate, and hence, the diversity of existing datasets and associated machine learning solutions are limited to mainstream activities and environments.
We propose a test-time optimization method for reconstructing a generative body model entailing pose, shape, and appearance using a single camera and a mirror and starting from 2D pose without any 3D labels nor large-scale dataset. The mirror setting is practical: First, mirrors are readily available in urban environments and provide a second view for accurate reconstruction without requiring multi-camera recording and temporal synchronization. Second, off-the-shelf 2D estimators generalize well since diverse training images are easily annotated by clicking 2D joint locations. Previous works [8, 25] leveraged reflections in mirrors for better human pose reconstruction. However, neither of them model shape and appearance in detail. In addition, the model proposed in [8] needs a 3D pose estimator as prior, potentially limiting their approach to motions close to the training set.
Alternatively, 3D body models have been learned from monocular video. However, existing approaches either use a shape prior, such as a scan of the person [15], a parametric body model [2], restrict motions to be simple [52], or require initialization with a prior 3D estimator [40, 41, 50].
Figure 1: **Pose refinement and image quality**. Given an image with mirror (top left) our mirror-based method reconstructs 3D pose and shape that is more accurate than the baselines (A-NeRF [40] and DANBO [41]) not supporting the mirror, both in terms of the 3D pose metric PA-MPJPE (top row, e.g., corrected arms), and in image quality PSNR (bottom row, e.g., reconstructed earphone and left elbow).
This restricts the possible motion, shape, and appearance complexity. By contrast, our mirror setting is simple, enabling anyone to collect 3D data of their target domain.
Our approach for learning _Mirror-Aware Neural Humans_ makes no prior assumption about the body shape by building upon the open-source articulated neural radiance fields (NeRF) models [40, 41], which require only the joint angles of a skeleton as input. We estimate this skeleton fully automatically and without prior assumptions on the 3D poses using an automatic mirror calibration (Step 1) and mirror-based 2D to 3D pose lifting (Step 2), thereby avoiding the use of 3D pose estimators that struggle with occlusions and extreme poses. Our core contributions are as follows:
* Designing a robust algorithm that estimates mirror position and orientation, and 3D skeleton model with bone-relative coordinates suitable for neural body models.
* A layered mirror model, extending NeRF with occlusion handling of the mirror image by the real person.
* Developing a complete motion capture system for reconstructing human pose, shape, and appearance from mirror images, and making the source code available.
## 2 Related Work
**Self- and weakly supervised learning approaches.** Weakly supervised 3D pose estimators typically leverage small-scale 3D pose datasets and combine them with additional 2D data [4, 14, 16, 22, 45, 47, 48, 54]. Others utilize neural networks that are pretrained on the 3D lifting task and transfer them to another dataset [10, 11, 12, 26, 27, 33]. Such weak supervision transfers better to unseen poses. However, these methods still assume that the test poses are close to the training set.
A different approach is using multi-view supervision to learn an embedding of 3D poses [31, 35, 36, 28, 37, 42] or learn the 3D reconstruction step directly from multi-view images [18, 19, 38, 46]. While promising, they still require multiple temporally synchronized cameras for training. In contrast, using mirrors in a scene gives the unique advantage of having a pair of synchronized views with a single recording device.
**Mirror geometry and calibration.** Mirrors have a long history in visual computing on which Reshetouski et al. [34] provide a good overview. We take inspiration from methods [1, 17, 29, 43] employing mirrors for camera calibration and 3D reconstruction of rigid objects, to enable calibration and reconstruction of moving humans. Alternatively, Yin et al. [53] reconstructs arbitrary objects in mirror-like surfaces but do not show any application for humans.
**Mirror-based Human Pose Estimation.** Nguyen et al. [30] use mirrors to reconstruct human point clouds, but require a depth camera together with two or multiple mirrors. To the best of our knowledge, the most related work that reconstructs human pose and body shape with a single mirror is from Fang et al. [8]. They provide an optimization-based approach that utilizes mirror symmetry constraints for predicting 3D human pose and mirror orientation. While attaining high accuracy, they require as input an initial 3D pose estimate from a pretrained neural network that cannot generalize well to unseen poses. Moreover, their best results are attained using manually annotated vanishing lines on the mirror boundary [7]. By contrast, we use a purely geometric approach to optimize for 3D keypoints without requiring any 3D pose estimator or mirror annotation (with the neural network only modeling shape and appearance), by jointly optimizing for the bone orientation and building upon recent work on estimating camera position and ground plane using the motion of people in the scene [9, 44]. Similar to prior approaches [8, 25], we estimate 3D human keypoints as a solution to an optimization problem between two sets of mirrored 2D keypoints. By contrast, Liu et al. [25] optimize for 3D joint coordinates which can lead to incorrect pose sequences where, for example, bone lengths vary over time, and orientation remains ambiguous. Fang et al. [8] restrict motions to be close to previously captured sequences by using pre-trained detectors, and none of these methods take detailed reconstruction of shape and appearance into account.
## 3 Method
Our goal is to reconstruct a dense neural body model from a single video with only sparse 2D detections as input, using the mirror as a second view to impose multi-view constraints. The difficulty lies in reconstructing such dense representation from only sparse and noisy 2D labels, with an unknown mirror and camera configuration. By contrast to classical multi-view settings, mirror observations add the difficulty of the real person occluding the mirror image. To overcome these difficulties, our method goes from sparse to fine details in three steps, as sketched in Figure 3.
For each step we use a suitable representation for the mirror geometry, each mathematically equivalent yet implying a different implementation. Figure 2 visualizes the three forms. _Case I:_ A single camera \(\mathbf{c}\) with light rays reflecting on the mirror plane \(\pi\). _Case II:_ The mirror image stemming from a _virtual camera_\(\bar{\mathbf{c}}\) opposing the real camera \(\mathbf{c}\). _Case III:_ A _virtual person_\(\bar{\mathbf{p}}\) opposing the real person \(\mathbf{p}\), both viewed from the real camera \(\mathbf{c}\).
### Camera and Mirror Initialization (Step 1)
We start from a video that shows a person moving in front of a mirror and use off-the-shelf pose detectors [6, 51] to obtain 2D pose estimates \(\mathbf{q}^{(t)}\in\mathbb{R}^{2\times J}\) for every input frame \(\mathbf{I}_{t}\) and all \(J\) joints. As we assume the mirror is orthogonal to the ground, mirror and real images appear to be standing on the same ground plane and existing solutions to using the human as calibration object apply. We use a variant of
[9] as described in [3] that yields focal length \(f\) and ground plane normal \(n_{g}\).
**Associating real and mirror poses.** Pose detectors are not aware of the mirror and therefore treat each person independently. We associate the pose with the largest neck-to-pelvis distance as the real person, utilizing that the person viewed through the mirror is farther away and hence smaller in the perspective camera. This association is also required for flipping the left and right side of the mirrored 2D pose to account for the mirror operation. Figure 4 shows this relationship and the degradation when misassigned.
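A minimal sketch of this association rule follows; the keypoint indices and the left/right swap pairs depend on the 2D detector's joint ordering and are illustrative assumptions, not the exact configuration used in our implementation.

```python
import numpy as np

def assign_real_and_mirror(pose_a, pose_b, neck=1, pelvis=8,
                           swap_pairs=((2, 5), (3, 6), (4, 7))):
    """Pick the real person as the one with the larger neck-to-pelvis distance
    (the mirrored person is farther away and thus smaller), then undo the
    left/right flip of the mirrored 2D pose. Poses are (J, 2) arrays."""
    def torso_len(pose):
        return np.linalg.norm(pose[neck] - pose[pelvis])

    real, mirror = (pose_a, pose_b) if torso_len(pose_a) > torso_len(pose_b) else (pose_b, pose_a)
    mirror = mirror.copy()
    for left, right in swap_pairs:
        mirror[[left, right]] = mirror[[right, left]]
    return real, mirror
```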
**Mirror Geometry Initialization.** Under the assumption that the mirror normal is orthogonal to the ground plane normal, we obtain the 3D location of the real person and mirrored person using _Case III_ (see Figure 2). We project their 2D ankle locations \(\mathbf{q}_{\text{ankle}}\) onto the estimated ground plane by reversing the projection
\[\mathbf{q}=\mathbf{K}\mathbf{p},\text{ where }\mathbf{K}=\begin{pmatrix}f&0&o_{1}&0 \\ 0&f&o_{2}&0\\ 0&0&1&0\end{pmatrix}, \tag{1}\]
with \((o_{1},o_{2})\) the image center and \(f\) the estimated focal length.
The mirror normal \(\mathbf{n}_{m}\in\mathbb{R}^{3}\) is then the vector from real \(\mathbf{p}_{\text{ankle}}\) to mirror \(\bar{\mathbf{p}}_{\text{ankle}}\),
\[\mathbf{n}_{m}=\frac{\mathbf{p}_{\text{ankle}}-\bar{\mathbf{p}}_{\text{ankle}}}{\|\mathbf{p}_{\text{ankle}}-\bar{\mathbf{p}}_{\text{ankle}}\|}. \tag{2}\]
The mirror location is the midpoint, \(\mathbf{m}=(\mathbf{p}_{\text{ankle}}+\bar{\mathbf{p}}_{\text{ankle}})/2\). For increased robustness, we average over all frames.
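A numpy sketch of this initialization for a single frame is given below. It assumes the camera sits at the origin, that `K` is the invertible 3x3 intrinsic matrix, and that Step 1 provides the ground plane as a unit normal `n_g` with offset `d_g` such that ground points satisfy \(\mathbf{n}_{g}\cdot\mathbf{x}+d_{g}=0\); all names are illustrative.

```python
import numpy as np

def backproject_to_ground(q_ankle, K, n_g, d_g):
    """Intersect the camera ray through pixel q_ankle with the ground plane
    n_g . x + d_g = 0, with the camera at the origin."""
    ray = np.linalg.inv(K) @ np.array([q_ankle[0], q_ankle[1], 1.0])
    t = -d_g / (n_g @ ray)
    return t * ray

def mirror_plane_from_ankles(q_real, q_mirror, K, n_g, d_g):
    """Mirror normal (Eq. 2) and mirror location (midpoint) for one frame."""
    p = backproject_to_ground(q_real, K, n_g, d_g)
    p_bar = backproject_to_ground(q_mirror, K, n_g, d_g)
    n_m = (p - p_bar) / np.linalg.norm(p - p_bar)
    m = 0.5 * (p + p_bar)
    return n_m, m
```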
### 2D to 3D Pose Lifting (Step 2)
In this section, we use the notion of a virtual camera (_Case II_) positioned behind the mirror as shown in Figure 2. Following [21, 39], we derive the virtual camera through the matrix \(\mathbf{A}\) that mirrors points across the mirror plane,
\[\mathbf{A}=\begin{bmatrix}1-2n_{x}^{2}&-2n_{y}n_{x}&-2n_{z}n_{x}&-2n_{x}d\\ -2n_{y}n_{x}&1-2n_{y}^{2}&-2n_{y}n_{z}&-2n_{y}d\\ -2n_{z}n_{x}&-2n_{y}n_{z}&1-2n_{z}^{2}&-2n_{z}d\\ 0&0&0&1\end{bmatrix}, \tag{3}\]
with \(\mathbf{n}_{m}=[n_{x},n_{y},n_{z}]\) the mirror normal and \(d\) the distance between camera and mirror. Both quantities are from Step 1. By defining the real camera to be at the origin pointing along the z-axis, \(\mathbf{A}\) maps points from the real to the virtual camera. The orientation of the virtual camera is hence \(\bar{\mathbf{R}}=\mathbf{A}_{3\times 3}^{\top}\), the inverse of the top-left part of \(\mathbf{A}\), and the virtual camera position is \(\bar{\mathbf{c}}=-2d\,\mathbf{n}_{m}\), the translation component (last column) of \(\mathbf{A}\). Note that \(\bar{\mathbf{R}}\) is from the orthogonal group \(O(3)\) as it includes a reflection component given by the mirror.
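A small numpy sketch of Eq. 3 and the resulting virtual camera is shown below; the example normal and distance are placeholders, and the helper names are illustrative.

```python
import numpy as np

def mirror_matrix(n_m, d):
    """Homogeneous reflection across the mirror plane with unit normal n_m at
    distance d from the camera (Eq. 3): a Householder reflection plus translation."""
    n = np.asarray(n_m, dtype=float).reshape(3, 1)
    A = np.eye(4)
    A[:3, :3] = np.eye(3) - 2.0 * (n @ n.T)
    A[:3, 3] = (-2.0 * d * n).ravel()
    return A

# Placeholder mirror parameters; Step 1 provides the actual values.
n_m, d = np.array([0.0, 0.0, 1.0]), 2.5
A = mirror_matrix(n_m, d)
R_bar = A[:3, :3].T   # virtual camera orientation, an O(3) matrix with det = -1
c_bar = A[:3, 3]      # virtual camera centre: the reflection of the real camera
```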
**Mirror Skeleton representation.** To be able to reconstruct not only the position but also the orientation of limbs, we represent \(\mathbf{p}^{(t)}\) with a skeleton parameterized by joint rotations \(\mathbf{\theta}_{i}^{(t)}\in\mathbb{R}^{6}\), using the 6D rotation parameterization of [55], bone lengths \(\mathbf{\ell}\in\mathbb{R}^{J}\), and the 3D pelvis position \(\mathbf{p}_{\text{pelvis}}^{(t)}\in\mathbb{R}^{3}\) (the root position). Forward kinematics gives
\[\mathbf{p}_{j}^{(t)}=\prod_{i\in\mathcal{N}(j)}\mathbf{T}_{i} \begin{bmatrix}\mathbf{0}\\ 1\end{bmatrix}+\mathbf{p}_{\text{pelvis}}^{(t)},\mathbf{T}_{i}=\begin{bmatrix} \mathbf{M}(\mathbf{\theta}_{i}^{(t)})&\mathbf{\ell}_{i}\mathbf{v}_{i}\\ \mathbf{0}&1\end{bmatrix}, \tag{4}\]
with \(\mathbf{v}_{i}^{\text{ref}}\in\mathbb{R}^{3}\) the \(i\)th bone vector (parent to child vector) in a reference pose, \(\mathbf{M}(\mathbf{\theta}_{i}^{(t)})\) the joint rotation computed from \(\mathbf{\theta}_{i}^{(t)}\), and \(\mathcal{N}(j)\) the ancestors of \(j\) in the kinematic chain. In the following, we optimize these parameters by using different constraints on the mirror scene including the pose, feet, and bone orientation.
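A direct, unoptimized sketch of Eq. 4 follows; it assumes joints are topologically ordered (each parent index smaller than its child), that each joint's own transform is included in its chain, and it takes precomputed 3x3 rotation matrices rather than the 6D parameterization.

```python
import numpy as np

def forward_kinematics(rotations, v_ref, lengths, parents, p_pelvis):
    """Joint positions from per-joint 3x3 rotations, reference bone directions,
    bone lengths, and a parent index per joint (-1 for the root).
    Assumes parents[j] < j so every parent transform is computed first."""
    J = len(parents)
    transforms = [np.eye(4)] * J
    positions = np.zeros((J, 3))
    for j in range(J):
        T_j = np.eye(4)
        T_j[:3, :3] = rotations[j]
        T_j[:3, 3] = lengths[j] * v_ref[j]
        transforms[j] = T_j if parents[j] < 0 else transforms[parents[j]] @ T_j
        positions[j] = transforms[j][:3, 3] + p_pelvis
    return positions
```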
**3D pose initialization.** Unlike prior work using joint locations [25], the bone rotation estimation we require is prone to local minima. To overcome this, we initialize with a constant standing pose at the estimated \(\mathbf{p}_{\text{ankle}}\), rotated in 45\({}^{\circ}\) steps from 0\({}^{\circ}\) to 360\({}^{\circ}\) as shown in Figure 5, and select the rotation with the lowest reconstruction error before optimization.
**3D pose optimization.** Using the virtual camera (_Case II_ above), the optimization of the 3D pose \(\mathbf{p}\) under mirror constraints becomes a classical multi-view reconstruction problem. We optimize the skeleton parameterization \(\mathbf{\theta}\) that, when reprojecting the associated 3D joint positions \(\mathbf{p}\) to the real and virtual cameras, minimizes the Euclidean distance to the real 2D pose \(\mathbf{q}\) and virtual 2D pose \(\bar{\mathbf{q}}\),
\[\mathcal{L}_{\text{p}}=\sum_{t}\|\mathbf{q}-\Pi(\mathbf{p}(\boldsymbol{\theta}^{(t)},\boldsymbol{\ell}))\|^{2}+\|\bar{\mathbf{q}}-\Pi(\mathbf{A}\mathbf{p}(\boldsymbol{\theta}^{(t)},\boldsymbol{\ell}))\|^{2} \tag{5}\]
with \(\mathbf{p}(\boldsymbol{\theta}^{(t)},\boldsymbol{\ell})\) the forward kinematic model and \(\Pi\) the perspective projection using \(\mathbf{K}\). When 2D detection confidences are available, we use them as weights in Eq. 5.
Figure 2: **Models for the mirror reflection**. In the first case, the rays go from the real camera \(\mathbf{c}\) up to the mirror plane \(\pi\) intersecting at location \(\mathbf{s}\), then to the real person \(\mathbf{p}\) after a mirror reflection. In the second case, the real person \(\mathbf{p}\) is viewed from a virtual camera \(\bar{\mathbf{c}}\) forming a virtual image. In the third case, the person location is mirrored to \(\bar{\mathbf{p}}\) and light rays go straight from camera \(\mathbf{c}\) to \(\bar{\mathbf{p}}\).
**Smoothness and ground-plane constraints.** Frame-wise pose optimization leads to noisy reconstructions and inconsistent global orientations. To mitigate these, we encourage a constant velocity for location and joint angles across the video (for valid frames), referred to as _location_ and _orientation smooth_ in Eq. 6. To reduce floating, we utilize our ground plane estimate and constrain the lower feet by minimizing their distance to the ground plane, as described in Eq. 6 where \(\mathbf{f}_{gd}=(\mathbf{m}-\mathbf{f}_{i})\), \(\mathbf{m}\) is the mirror location and \(\mathbf{f}_{i}\) is the closest foot (heel) to the ground. Lastly, we refine the mirror and ground normal, \(\mathbf{n}_{m}\) and \(\mathbf{n}_{g}\), during optimization and enforce both quantities to be orthogonal.
Figure 4: **Real and mirror pose assignment.** Our algorithm distinguishes the real from the virtual person using pelvis-to-neck distance. With the right assignment, cases of collapsed poses (left) are corrected (right).
Figure 5: **3D pose initialization** by measuring the error between initial re-projections (lines forming skeleton) and 2D detections (dots) to determine the optimal starting pose.
Figure 3: We start from a mirror image with an unknown mirror geometry. With only 2D detections and suitable assumptions, we reconstruct the mirror plane, ground plane, and 3D keypoints in Step 1 and Step 2. Our optimization yields bone orientation that is crucial for integrating NeRF with the mirror-based reconstruction. The final Mirror-aware Neural Human is learned via layered composition of mirror and real images in Step 3 and yields improved body pose, shape, and appearance quality.
We combine all additional objectives from above as
\[\mathcal{L}_{\text{sfo}} =\underbrace{\lambda_{p}\Big\|\frac{\mathrm{d}^{2}\mathbf{p}(\boldsymbol{\theta}^{(t)},\boldsymbol{\ell})}{\mathrm{d}t^{2}}\Big\|}_{\text{location smooth}}+\underbrace{\lambda_{\boldsymbol{\theta}}\Big\|\frac{\mathrm{d}^{2}\boldsymbol{\theta}_{k}}{\mathrm{d}t^{2}}\Big\|}_{\text{orientation smooth}}+\underbrace{\lambda_{f}(\mathbf{n}_{g}\cdot\mathbf{f}_{gd})^{2}}_{\text{foot constraint}}\] \[+\underbrace{(\mathbf{n}_{g}\cdot\mathbf{n}_{m})^{2}}_{\text{orthogonality}}+\underbrace{(\|\mathbf{n}_{m}\|_{2}-1)^{2}}_{\text{mirror normal loss}}+\underbrace{(\|\mathbf{n}_{g}\|_{2}-1)^{2}}_{\text{ground normal loss}}, \tag{6}\]
where \(\frac{\mathrm{d}^{2}\mathbf{p}(\boldsymbol{\theta}^{(t)},\boldsymbol{\ell})}{\mathrm{d}t^{2}}\) and \(\frac{\mathrm{d}^{2}\boldsymbol{\theta}_{k}}{\mathrm{d}t^{2}}\) are the second-order derivatives of the joint locations and bone orientations over frames \(t\), \(\mathbf{f}_{gd}\) is the vector from the ground plane location to the lower feet of interest, and \(\lambda_{p}\), \(\lambda_{\boldsymbol{\theta}}\), \(\lambda_{f}\) are hyper-parameters that balance the influence of the smoothness terms and the feet loss. Our final objective is the sum of all individual terms, \(\mathcal{L}_{\text{pose}}=\mathcal{L}_{\text{p}}+\mathcal{L}_{\text{sfo}}\).
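The two smoothness terms amount to penalizing discrete second derivatives over time; a minimal sketch, with shapes and names chosen for illustration only:

```python
import numpy as np

def second_order_smoothness(x):
    """Mean squared discrete second derivative over time; x has shape (T, ...),
    e.g. joint positions (T, J, 3) or joint angles (T, J, 6). Penalizing this
    quantity encourages constant velocity across frames."""
    d2 = x[2:] - 2.0 * x[1:-1] + x[:-2]
    return float(np.mean(np.sum(d2 ** 2, axis=-1)))
```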
### Neural Rendering and Refinement (Step 3)
With the 3D pose \(\mathbf{p}(t)\) reconstructed approximately for each pair of 2D detections \(\mathbf{q}(t)\) and \(\bar{\mathbf{q}}(t)\) in every frame of the video, we train a generative model \(G(\boldsymbol{\theta})\) conditioned on pose \(\boldsymbol{\theta}\). Starting from A-NeRF [40] applied to only the real person as a baseline, we introduce our Step 2 + A-NeRF, naive mirror integration, and full mirror integration with and without efficiency-improving extensions.
**A-NeRF initialized by Step 2 (Step 2 + A-NeRF).** To apply articulated neural radiance fields, such as [40] and [41], to our setting, we segment both persons in the image using [24]. We then use the real-mirror person assignment from Step 1 and Step 2 to determine the mask \(\mathbf{M}\) for the real person, i.e., the mask that contains the 2D keypoints associated with the real person. Our contribution is how to also include the mirrored person and its mask \(\bar{\mathbf{M}}\). For the real person, we can apply existing methods, with minor modifications to run with our skeleton definition. The input is the image \(I\), the skeleton \(\mathbf{v}^{\text{ref}}\), the bone lengths \(\boldsymbol{\ell}\), and the joint angles \(\boldsymbol{\theta}\). We cast rays to the real person in the scene using pixels \((u,v)\) within the mask \(\mathbf{M}\), and query 64 points \(\{\mathbf{b}_{k}\}_{k=1}^{64}\) along the ray direction \(\mathbf{r}=\mathbf{K}^{-1}(u,v,1)\) within the 3D bounding box containing the skeleton \(\mathbf{p}\). By using the skeleton-relative encoding from [40] or [41], we first map the queried points \(\mathbf{b}\) to the local space of each joint using \(T_{i}\),
\[\tilde{\mathbf{b}_{i}}=T_{i}^{-1}(\boldsymbol{\theta}_{i},\mathbf{v}_{i})[ \mathbf{b}]. \tag{7}\]
A fully-connected neural network then predicts color \(\gamma_{k}\) and density \(\sigma_{k}\) as a function of the transformed queries, \(\gamma_{k},\sigma_{k}=\phi([\bar{\mathbf{b}}_{1},\dots,\tilde{\mathbf{b}}_{J}])\) for every sample \(k\). The image is formed by volume rendering, integrating color along the ray while accounting for the transmittance computed from the density, as in the original NeRF.
The objective is the photometric loss \(\mathcal{L}_{\text{Neural}}\) between the generated and observed image, and both the skeleton joint angles and the parameters of the underlying neural network are optimized jointly. Note that existing articulated NeRF models only apply because we tuned Step 2 to be compatible by introducing the additional smoothness constraint on bone orientation.
**Layered mirror representation.** Mirror occlusion cases, such as the one in Figure 6 where the real and mirrored person overlap, are important, as they result in errors in the segmentation masks and can lead to a few but large reconstruction errors. To make occlusion resolution part of the learning process but maintain efficiency, we automatically detect frames where occlusion occurs by measuring the intersection over union (IoU) of the bounding boxes \(\mathbf{N}\) and \(\bar{\mathbf{N}}\) enclosing the projected real and mirrored 3D poses from Section 3.2.
Given these boxes, we compute an intersection box that bounds overlapping areas and shoot rays randomly within the intersection box to resolve occluding pixels. Since each pixel is at an intersection of \(\mathbf{N}\) and \(\bar{\mathbf{N}}\), we process the occlusion samples along the direct view ray, \(\{\mathbf{b}_{k}\}_{k=1}^{64}\), and along its mirrored path, \(\{\bar{\mathbf{b}}_{k}\}_{k=1}^{64}\). _Case II_ gives the reflected view-ray
\[\bar{\mathbf{r}}=\mathbf{A}_{3\times 3}\mathbf{r}, \tag{8}\]
with the origin at virtual camera center \(\bar{\mathbf{c}}\). Note that we do not bound the occlusion samples to \(\mathbf{M}\) and \(\bar{\mathbf{M}}\) as the segmentation masks are often unreliable when people overlap.
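A minimal sketch of the occlusion test and the mirrored ray is given below; the `[x0, y0, x1, y1]` box format and the return of the intersection box are illustrative assumptions about the bookkeeping, not our exact implementation.

```python
import numpy as np

def boxes_overlap(box_a, box_b):
    """IoU of two [x0, y0, x1, y1] boxes; a positive value flags a potential
    occlusion frame, and the intersection box defines where rays are sampled."""
    x0, y0 = np.maximum(box_a[:2], box_b[:2])
    x1, y1 = np.minimum(box_a[2:], box_b[2:])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter), (x0, y0, x1, y1)

def reflected_ray(r, A):
    """Mirror a view-ray direction with the reflection matrix A (Eq. 8); the
    reflected ray is cast from the virtual camera centre."""
    return A[:3, :3] @ r
```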
Furthermore, sampling a different number of points for occlusion rays does not fare well with the batch processing in NeRF. To make it compatible, we process real and mirror samples, including occlusion cases, independently to yield image layers \(\mathbf{L}\) and \(\bar{\mathbf{L}}\) and corresponding alpha maps \(\alpha\) and \(\bar{\alpha}\). We exploit the fact that if occlusion happens, the real person occludes the mirrored person. This holds in general, since the mirror image results from an indirect light path that is always longer than the direct view, and enables combining these partial results using back-to-front layering. Starting from the background image \(\mathbf{I}_{\text{bg}}\), the final image is
\[\hat{\mathbf{I}}=\mathbf{L}\alpha+(1-\alpha)\left(\bar{\mathbf{L}}\bar{\alpha}+(1-\bar{\alpha})\mathbf{I}_{\text{bg}}\right). \tag{9}\]
This layered composition enables efficient training on \(\mathcal{L}_{\text{Neural}}\) and rendering in batches of the same size, while accounting for mirror occlusion.
Figure 6: **Occlusion handling.** First, we automatically generate 2D bounding boxes (in grey) from our optimized 3D keypoints. Then we shoot rays (dots) randomly in the intersection area (in green) where occlusions may happen (in yellow).
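A minimal sketch of the back-to-front compositing in Eq. 9, applied per pixel or to whole image tensors (array shapes are assumed to broadcast):

```python
def composite(L, alpha, L_bar, alpha_bar, I_bg):
    """Back-to-front compositing of Eq. 9: real layer over mirror layer over
    background. The real person always occludes the mirror image because the
    reflected light path is longer than the direct one."""
    mirror_over_bg = L_bar * alpha_bar + (1.0 - alpha_bar) * I_bg
    return L * alpha + (1.0 - alpha) * mirror_over_bg
```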
**Baseline w/o layering.** Without the introduced layering, one would need to sample twice as many points over the union of both masks \(\mathbf{M}\) and \(\bar{\mathbf{M}}\): the samples \(\{\mathbf{b}_{k}\}_{k=1}^{64}\) along the ray up to the mirror and the samples \(\{\bar{\mathbf{b}}_{k}\}_{k=1}^{64}\) after the mirror reflection. However, unless the real and mirrored person occlude each other, one of the two sample sets lies in empty space, which merely trains NeRF to predict \(0\) density for that set and leads to extremely slow convergence.
**Baseline w/o occlusion.** Assuming that the person masks \(\mathbf{M}\) and \(\bar{\mathbf{M}}\) are correct and non-overlapping, one can render two separate images of the real and the mirrored person by sampling \(\mathbf{M}\) and \(\bar{\mathbf{M}}\), respectively, using _Case II_ with the notion of two cameras. By minimizing the photometric objective, this baseline is efficient and utilizes the information across both views but does not account for mirror occlusions and imperfect masks around occlusion areas.
## 4 Evaluation
We compare our Mirror-aware Neural Human against state-of-the-art methods on the tasks of body pose and appearance reconstruction to verify that existing NeRF-based body models benefit significantly from the additional geometric mirror constraints and that every stage of our motion capture pipeline brings improvement, and we ablate our model choices. The supplemental document and video provide additional examples.
**Variants and Baselines.** We integrated our _Mirror-aware Neural Human_ formulation into A-NeRF [40] and DANBO [41] and refer to them as _Mirror A-NeRF_ and _Mirror DANBO_. We use the original A-NeRF [40] and DANBO [41] as baselines, as well as the mirror-based pose estimation method, MirrorHuman [8] and the established single-view reconstruction techniques SPIN [20] and SMPLify [32]. For pose estimation accuracy, we only compare to the A-NeRF neural body method as DANBO does not support pose refinement.
**Benchmark Datasets.** We use the _MirrorHuman-eval_ dataset from [8]. It contains a complex dancing performance in front of a large mirror and is captured with six cameras arranged in an arc around the mirror. We exclude camera 4 as it is directly frontal to the mirror and leads to a degenerate configuration where the mirrored person is largely occluded. Since no validation set was specified, we use cameras 2 and 3, which are placed at a 45-to-90 degree angle to the mirror, for tuning hyperparameters, and test on all cameras as previous methods did. Following [8], we optimize the pose separately for each video, treating this dataset as five independent recordings instead of a multi-view setup. We likewise evaluate Steps 2 and 3 on every 100th frame while reconstructing in-between frames only to ensure smoothness. For the neural models, we withhold the last 10% of frames to test novel pose synthesis.
**Additional qualitative sequences.** To demonstrate generality and showcase the simplicity of capturing with a mirror, we utilize the internet dancing recordings from [8] and record a new dataset that is more diverse in terms of appearance, e.g., including a beard, male and female subjects, loose clothing, and casual everyday motions that go beyond dancing (see Figure 7). We used a single camera for recording and employed 2000 frames for reconstruction.
**Metrics.** For pose estimation tasks, we report the scale-normalized MPJPE (N-MPJPE) introduced in [35], and the Procrustes-aligned MPJPE (PA-MPJPE), both in mm over the 15 joints defined in [8]. We omit MPJPE without scale normalization as monocular reconstruction is inherently scale-ambiguous [13]. For image synthesis, we quantify the image quality by PSNR and SSIM [49].
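For reference, PA-MPJPE is commonly computed by a similarity Procrustes alignment (rotation, scale, translation) of the predicted joints to the ground truth before averaging per-joint errors. The sketch below is a generic implementation under that assumption and may differ in details (joint subset, units, alignment options) from the evaluation code used here.

```python
import numpy as np

def pa_mpjpe(pred, gt):
    """Procrustes-aligned MPJPE over (J, 3) arrays of 3D joints (same units, e.g. mm)."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    P, G = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(P.T @ G)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T                                           # optimal rotation
    s = np.trace(np.diag(S) @ D) / (P ** 2).sum()                # optimal scale
    aligned = s * P @ R.T + mu_g
    return np.linalg.norm(aligned - gt, axis=1).mean()

# Sanity check: a rotated, scaled and shifted copy of the skeleton has ~0 error.
rng = np.random.default_rng(1)
gt = rng.random((15, 3)) * 1000.0
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta), np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
pred = 0.8 * gt @ rot.T + np.array([10.0, -5.0, 3.0])
print(pa_mpjpe(pred, gt))  # ~0
```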
**Implementation details.** In Step 2, we optimize 3D poses for 2K iterations. For Step 3, we train the neural rendering model up to a maximum of 300K steps for DANBO and a maximum of \(2\times 200\)K for A-NeRF with pose refinement and appearance fine-tuning.
### Mirror Normal Estimation
Our average normal estimation error (using 2D detections as input) is 0.4\({}^{\circ}\) compared to the GT normal provided in [8]. Camera 4 is excluded as the real person occludes the mirror image in most frames. This automatic mirror calibration is highly accurate and very close to the 0.5\({}^{\circ}\) obtained from the vanishing point method in [8] on the same cameras.
### Pose Estimation
**Comparison to refining pose with a dense body model.** Table 1 shows the results for the MirrorHuman-eval dataset. Compared to A-NeRF [40], which also refines SPIN estimates using a volumetric body model, our method improves 3D pose estimates significantly, by more than \(20\%\). This highlights the importance of integrating a second view for accurate reconstruction, here via the mirror. Figure 7 shows that, by benefiting from the NeRF model, the joint refinement of pose and shape improves significantly, particularly on extreme poses.
**Comparison to methods lifting 2D pose to 3D.** We outperform the supervised approach SPIN [20] and single-view optimization SMPLify-X [32], and match their combination, as these single-view approaches do not fare well under occlusion and are prone to depth ambiguity.
**Comparison to existing mirror approaches.** The prior
method [8] focuses on controlled conditions, using manually corrected 2D ground-truth (GT) and a pre-trained pose estimator for initialization. To compare on fair grounds, we run a variant that also uses 2D GT as input. Table 1 shows that it matches the accuracy up to 7mm. The remaining discrepancy can be attributed to our method not being tuned for GT input and not using a 3D pose estimator, which are known to not generalize well to very extreme motions. The other existing mirror work [25] cannot be compared to as it does not provide joint angles for shape reconstruction and does not evaluate on publicly available datasets.
### Body Shape and Appearance
Figure 1 and Figure 8 show the images synthesized by different neural rendering models. For better visualization, we apply connected component analysis to remove floating artefacts stemming from shadowing and background. On _MirrorHuman-eval_ dataset [8], both of our variants, Mirror A-NeRF and Mirror DANBO, synthesize sharper results with more fine-grained details compared to the mirror-less counterparts. We attribute the improvement to the better pose initialization from Step 2 and the additional appearance supervision by the mirror view, all enabled by our mirror modeling. Table 2 validates the visual improvement quantitatively in terms of image reconstruction accuracy. Both Mirror A-NeRF and Mirror DANBO better learn the body shape and appearance, verifying the effectiveness of our _Mirror-aware Neural Humans_. Additional qualitative
\begin{table}
\begin{tabular}{|l|c c c|c|} \hline Method & 3D Training & Mirror Calibration & 2D Input & PA-MPJPE \(\downarrow\) \\ \hline SMPLify-X [32] & 3D pose prior & n/a & detections & 90.57 \\ A-NeRF [40] & partial (init.) & n/a & detections & 84.70 \\ SPIN [20] & supervised & n/a & detections & 67.42 \\ SPIN [20]+SMPLify [32] & partial (init.) & n/a & detections & **61.47** \\ \hline Ours (Step 2) & unsupervised & automatic & detections & 63.00 \\ Ours (Step 2 + A-NeRF [40]) & unsupervised & automatic & detections & 62.69 \\ Ours (Step 3, w/o occlusion) & unsupervised & automatic & detections & 61.46 \\ Ours (Step 3, w/ occlusion) & unsupervised & automatic & detections & **61.30** \\ \hline Ours (Step 2, using GT input) & unsupervised & automatic & manual & 39.53 \\ MirrorHuman [8] (w/o mirror GT) & partial (init.) & automatic & manual & 33.24 \\ MirrorHuman [8] & partial (init.) & manual & manual & **32.96** \\ \hline \end{tabular}
\end{table}
Table 1: **3D pose reconstruction**. Ours is the only fully-automatic mirror-based method. We match the accuracy of off-the-shelf 3D pose estimators with only 2D detections as input and reproduce results of mirror methods using GT input [8].
\begin{table}
\begin{tabular}{|l|c c|} \hline Method & Cam 6 PSNR \(\uparrow\) & Cam 6 SSIM \(\uparrow\) \\ \hline A-NeRF [40] & 25.52 & 0.8662 \\ Ours (Mirror A-NeRF w/o Occlusion) & **25.89** & **0.9210** \\ DANBO [41] & 28.97 & 0.9193 \\ Ours (Mirror DANBO w/o Occlusion) & **31.87** & **0.9522** \\ \hline \end{tabular}
\end{table}
Table 2: **Quantitative image reconstruction accuracy** on _MirrorHuman-eval_ dataset in terms of PSNR and SSIM. A-NeRF [40] and DANBO [41] without mirror remain blurry, leading to lower scores.
Figure 7: **Qualitative results of pose refinement on a new diverse sequence, internet video, and _MirrorHuman-eval_ dataset [8]. Our Mirror A-NeRF aligns the skeleton model well to the images. Left: Before refinement (after Step 2) Right: After the volumetric model refinement (Step 3).**
results are shown in the supplemental document.
### Ablation Study
To analyze the effect of our model choices in Step 2, we use camera 3 in the _MirrorHuman-eval_ dataset over all 19 joints that have ground truth. We perform experiments with different configurations for the joint optimization of the bone factors, global position, and mirror parameters. Table 3 validates that each component contributes to the final result. The effect of and robustness to different weights of the smoothness terms are evaluated and explained in the supplemental material.
We also analyze the different ways of integrating mirror constraints and occlusion handling on cameras 6 and 7. Table 4 presents the pose refinement outcomes after the neural rendering steps (Section 3.3). Running A-NeRF with our Step 2 poses already improves due to the more accurate pose initialization. Our Mirror A-NeRF further improves, as it enables pixel-level multi-view (real and mirror pose) refinement. Taking occlusion into account further surpasses the two baselines. Note that the total improvement of handling occlusions computed over all frames and all joints is small and we therefore only enable it on sequences that include a sufficient number of occlusion frames.
## 5 Limitations and Future Work
Our 3D reconstruction algorithm only works in cases where the person's mirror image is visible most of the time, restricting the camera placement to roughly 20-to-70 degrees to the mirror. When the camera view direction is close to parallel or orthogonal to the mirror, 3D triangulation is unreliable. Moreover, the 3D reconstruction is sensitive to the initial reconstruction of the ground plane and focal length; errors emerge, for example, when bystanders of varying heights violate the constant-height assumption. Furthermore, since we rely on an estimated segmentation mask, body parts are sometimes cut out, or the shadow or other background parts of the scene are represented in the 3D volumetric model. In the future, we will attempt to apply similar techniques to animal motion capture, which, however, requires redefining our upright-standing assumption and the filtering in Step 1.
\begin{table}
\begin{tabular}{|l|c c|} \hline Method & Cam 6 PA-MPJPE \(\downarrow\) & Cam 7 PA-MPJPE \(\downarrow\) \\ \hline A-NeRF [40] & 86.21 & 89.10 \\ Ours (Step 2 + A-NeRF) & 51.21 & 58.61 \\ Ours (Mirror A-NeRF) & 48.84 & 57.82 \\ Ours (Mirror A-NeRF w/ occlusion) & **48.51** & **57.32** \\ \hline \end{tabular}
\end{table}
Table 4: **Pose refinement on the _MirrorHuman-eval_ dataset [8] on videos with occlusions**. With occlusion handling, Mirror-Aware Neural Humans improves upon A-NeRF [40].
Figure 8: **Neural body reconstruction** on three different subjects from the additional qualitative sequences, including loose clothing, challenging poses, and casual day-to-day motion. Our Mirror-Aware Neural Humans reconstructs the body details well, benefiting considerably from the mirror model and skeleton.
\begin{table}
\begin{tabular}{|l|c|c|} \hline Components removed from model & N-MPJPE \(\downarrow\) & PA-MPJPE \(\downarrow\) \\ \hline Base w/o smooth. and feet constr. & 99.94 & 63.03 \\ w/o location smooth. & 93.79 & 57.98 \\ w/o orientation smooth. & 92.88 & 57.93 \\ w/o feet constraint & 84.73 & 54.06 \\ \hline Full objective & **62.45** & **44.15** \\ \hline \end{tabular}
\end{table}
Table 3: **Ablation study** on the optimal regularization configuration to reduce the influence of noisy detections. All our contributions improve on the baseline.
## 6 Conclusion
Our method reconstructs a 3D neural body model from mirror images by treating the mirror as a second camera and calibrating the camera extrinsics and mirror geometry directly from the motion of people detected in 2D. This alleviates manual annotation and initializing with pre-trained models, which lets us reconstruct difficult and complex human performances for which existing approaches struggle.
Mirror-Aware Neural Humans let anyone with a mirror and camera reconstruct a full 3D human model. In particular, we foresee low-cost medical applications, such as mirror-based pose estimation for rehabilitation [23].
|
2309.04988 | Analysis of fractional Cauchy problems with some probabilistic
applications | In this paper we give an explicit solution of Dzherbashyan-Caputo-fractional
Cauchy problems related to equations with derivatives of order $\nu k$, for $k$
non-negative integer and $\nu>0$. The solution is obtained by connecting the
differential equation with the roots of the characteristic polynomial and it is
expressed in terms of Mittag-Leffler-type functions. Under the some stricter
hypothesis the solution can be expressed as a linear combination of
Mittag-Leffler functions with common fractional order $\nu$. We establish a
probabilistic relationship between the solutions of differential problems with
order $\nu/m$ and $\nu$, for natural $m$. Finally, we use the described method
to solve fractional differential equations arising in the fractionalization of
partial differential equations related to the probability law of planar random
motions with finite velocities. | Fabrizio Cinque, Enzo Orsingher | 2023-09-10T10:38:38Z | http://arxiv.org/abs/2309.04988v1 | # Analysis of fractional Cauchy problems with some probabilistic applications
###### Abstract
In this paper we give an explicit solution of Dzherbashyan-Caputo-fractional Cauchy problems related to equations with derivatives of order \(\nu k\), for \(k\) a non-negative integer and \(\nu>0\). The solution is obtained by connecting the differential equation with the roots of the characteristic polynomial, and it is expressed in terms of Mittag-Leffler-type functions. Under some stricter hypotheses the solution can be expressed as a linear combination of Mittag-Leffler functions with common fractional order \(\nu\). We establish a probabilistic relationship between the solutions of differential problems with order \(\nu/m\) and \(\nu\), for natural \(m\). Finally, we use the described method to solve fractional differential equations arising in the fractionalization of partial differential equations related to the probability law of planar random motions with finite velocities.
_Keywords:_ Dzherbashyan-Caputo derivative, Mittag-Leffler functions, Fourier transforms, Laplace transforms, Random motions.
_2020 MSC:_ Primary 34A08; Secondary 35R11, 60K99.
## 1 Introduction
In this paper we consider fractional equations of the form
\[\sum_{k=0}^{N}\lambda_{k}\frac{\partial^{\nu k}}{\partial t^{\nu k}}F(t,x)=0, \ \ t\geq 0,\ x\in\mathbb{R},\ \ \mbox{with}\ \ \nu>0, \tag{1.1}\]
where the roots of \(\sum_{k=0}^{N}\lambda_{k}y^{k}=0\) are different from \(0\), and subject to the general initial conditions
\[\frac{\partial^{l}F}{\partial t^{l}}\Big{|}_{t=0}=f_{l}(x),\ \ x\in\mathbb{R}^{d},\ l=0,\ldots,\lceil N\nu\rceil-1. \tag{1.2}\]
The fractional derivatives are in the sense of Dzherbashyan-Caputo, that is, for \(m\in\mathbb{N}_{0}\),
\[\frac{\mathrm{d}^{\nu}}{\mathrm{d}t^{\nu}}f(t)=\left\{\begin{array}{ll}\frac{1} {\Gamma(m-\nu)}\int_{0}^{t}(t-s)^{m-\nu-1}\frac{\mathrm{d}^{m}}{\mathrm{d}s^{m} }f(s)\,\mathrm{d}s&\mbox{ if }m-1<\nu<m\\ \frac{\mathrm{d}^{m}}{\mathrm{d}t^{m}}f(t)&\mbox{ if }\nu=m.\end{array}\right. \tag{1.3}\]
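As an illustration of definition (1.3), the Caputo derivative of order \(0<\nu<1\) can be evaluated by numerical quadrature and checked against the classical rule \(\frac{\mathrm{d}^{\nu}}{\mathrm{d}t^{\nu}}t^{2}=\frac{\Gamma(3)}{\Gamma(3-\nu)}t^{2-\nu}\). The following is only a minimal numerical sketch, assuming the first derivative of \(f\) is available in closed form.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def caputo_derivative(f_prime, t, nu):
    """Caputo derivative of order 0 < nu < 1 at time t, by quadrature of (1.3)
    with m = 1, given the ordinary first derivative f_prime of f."""
    # 'alg' weight integrates f_prime(s) * (t - s)^(-nu) over [0, t] accurately.
    val, _ = quad(f_prime, 0.0, t, weight='alg', wvar=(0.0, -nu))
    return val / gamma(1.0 - nu)

# Check against the known rule for f(t) = t^2.
nu, t = 0.6, 1.5
numeric = caputo_derivative(lambda s: 2.0 * s, t, nu)
exact = gamma(3.0) / gamma(3.0 - nu) * t ** (2.0 - nu)
print(numeric, exact)   # both ~2.84
```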
We recall that the Laplace-transform of the fractional derivative of order \(\nu>0\) can be expressed as, for suitable \(\mu>0\),
\[\int_{0}^{\infty}e^{-\mu t}\frac{\partial^{\nu}}{\partial t^{\nu}}f(t)\, \mathrm{d}t=\mu^{\nu}\int_{0}^{\infty}e^{-\mu t}f(t)\,\mathrm{d}t-\sum_{l=1}^{ \lceil\nu\rceil}\mu^{\nu-l}\frac{\partial^{l-1}}{\partial t^{l-1}}f\Big{|}_{t =0} \tag{1.4}\]
where we assume that \(\lim_{t\longrightarrow\infty}e^{-\mu t}\frac{\partial^{l-1}}{\partial t^{l-1} }f(t)=0,\ l\geq 1\).
Dzherbashyan-Caputo fractional derivatives and the associated Cauchy problems have been intensively studied by many authors in the last decades, see for instance [2, 11, 20] and more recent papers such as [10, 15]. The main interest in this topic arises from its applications in several branches of science, such as physics and mechanics; see [9, 21].
Fractional derivatives and the study of their related Cauchy problems also appear in the theory of stochastic processes. The main novelty of this work lies in the probabilistic relationship we establish between the solutions of fractional Cauchy problems of different orders and its application to the study of the fractional version of random motions with finite velocities.
Our aim is to extend the results first presented in Orsingher and Beghin [17], where the authors studied the time-fractional telegraph equation and the probabilistic interpretation of its solution. In particular, they were also able to prove that the probability law of the telegraph process subordinated with a reflecting Brownian motion satisfies the time-fractional differential equation
\[\frac{\partial^{2\nu}u}{\partial t^{2\nu}}+2\lambda\frac{\partial^{\nu}u}{ \partial t^{\nu}}=c^{2}\frac{\partial^{2}u}{\partial x^{2}},\quad\mbox{with }\nu=\frac{1}{2},\]
subject to the initial condition \(u(0,x)=\delta(x)\) and \(u_{t}(0,x)=0,\ x\in\mathbb{R}\). Later, these kinds of relationships were extended in a series of papers, see [8, 16]. In particular, in the paper by Orsingher and Toaldo [18] the authors studied the time-space-fractional equation
\[\sum_{j=1}^{m}\lambda_{j}\frac{\partial^{\nu_{j}}u}{\partial t^{\nu_{j}}}=-c ^{2}(-\Delta)^{\beta},\quad 0<\nu_{j}\leq 1,\ \forall\ j,\ \beta\in(0,1], \tag{1.5}\]
subject to the initial condition \(u(0,x)=\delta(x),\ x\in\mathbb{R}^{d}\). In equation (1.5), \(-(-\Delta)^{\beta}\) denotes the fractional Laplacian (see [12] for further details on this operator). The authors proved the relationship between this kind of equations and the probability law of an isotropic \(d\)-dimensional stable process, \(S^{2\beta}\), subordinated with the inverse of a linear combination of independent stable processes, \(L(t)=\inf\{s\geq 0\,:\,\sum_{j=1}^{m}\lambda_{j}^{1/\nu_{j}}H_{\nu_{j}}(s)\geq t\}\), with \(\lambda_{j}>0\ \forall\ j\) and \(H_{\nu_{j}}\) stable processes of order \(\nu_{j}\in(0,1)\).
The novelty here is that the order of the Dzherbashyan-Caputo fractional derivatives appearing in (1.1) can be arbitrarily large, although of the form \(\nu k\). We point out that we state our main results in terms of ordinary fractional differential equations. We then use these results to study partial fractional differential equations by means of the Fourier-transform approach.
In Section 3, thanks to the use of the Laplace transform method, we show that the solution of the fractional Cauchy problem given by (1.1) and (1.2) can be expressed as a combination of Mittag-Leffler-type functions with order of fractionality equal to \(\nu>0\). Then we connect the solutions of problems with different _order of fractionality_ by means of a probability expectation such as, with \(n\in\mathbb{N}\),
\[F_{\nu/n}(t,x)=\mathbb{E}\,F_{\nu}\bigg{(}\prod_{j=1}^{n-1}G_{j}^{(n)}(t),\,x\bigg{)} \tag{1.6}\]
where \(F_{\nu/n}\) and \(F_{\nu}\) are respectively the solution of a problem of degree \(\nu/n\) and \(\nu\) with suitable initial conditions and \(G_{j}^{(n)}(t)\) are positive absolutely continuous random variables for each \(t\geq 0,\ j=1,\ldots,n-1\) (see Section 2.2 for details). The relationship (1.6), where \(F_{\nu/n}\) and \(F_{\nu}\) are Fourier transforms of probability laws, leads to the equivalence (in terms of finite-dimensional distributions) of two processes, with the second one being time-changed through \(\prod_{j=1}^{n-1}G_{j}^{(n)}(t)\).
The problem we study in this paper was inspired by the fractionalization of the higher-order partial differential equations governing the probability distribution of the position of random motions moving with a finite number of velocities. Consider, for instance, the fourth-order equation
\[\Big{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{\partial}{\partial t }+\lambda^{2}\Big{)}\bigg{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{ \partial}{\partial t}-c^{2}\Big{(}\frac{\partial^{2}}{\partial x^{2}}+\frac{ \partial^{2}}{\partial y^{2}}\Big{)}\bigg{)}p+c^{4}\frac{\partial^{4}p}{ \partial x^{2}\partial y^{2}}=0, \tag{1.7}\]
which emerges in the analysis of a planar stochastic dynamics with orthogonal-symmetrically chosen directions (see [4] for more details). The Fourier transform of equation (1.7) has the form
\[\Big{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{\partial}{\partial t }+\lambda^{2}\Big{)}\Big{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{ \partial}{\partial t}+c^{2}(\alpha^{2}+\beta^{2})\Big{)}F+c^{4}\alpha^{2} \beta^{2}F=0, \tag{1.8}\]
and its fractional version, with \(\nu>0\), is
\[\frac{\partial^{4\nu}F}{\partial t^{4\nu}}+4\lambda\frac{\partial^{3\nu}F}{ \partial t^{3\nu}}+5\lambda^{2}\frac{\partial^{2\nu}F}{\partial t^{2\nu}}+2 \lambda^{3}\frac{\partial^{\nu}F}{\partial t^{\nu}}+c^{2}(\alpha^{2}+\beta^{ 2})\Big{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{\partial}{ \partial t}+\lambda^{2}\Big{)}F+c^{4}\alpha^{2}\beta^{2}F=0. \tag{1.9}\]
Equation (1.9) equivalently arises by considering the Fourier transform of the time-fractional version of equation (1.7).
In the last section of the paper we describe some applications of the theory constructed in Section 3 in the field of random motions with finite velocities. In detail, we study a method to derive the values of the (integer-order) derivatives, at the time origin \(t=0\), of the Fourier transform (also called the characteristic function) of the probability law of the position
of the moving particle. Thanks to this result we can build the Cauchy problem solved by the characteristic function of a general motion and study its time-fractional counterpart. We provide two examples concerning planar random movements.
## 2 Preliminary concepts
### Convolutions of Mittag-Leffler-type functions
The generalized Mittag-Leffler (GML) function, also known as the three-parameter Mittag-Leffler function, is a generalization of the exponential function. It was first introduced by Prabhakar [22] and is defined as
\[E^{\gamma}_{\nu,\delta}(x)=\sum_{k=0}^{\infty}\frac{\Gamma(\gamma+k)}{\Gamma( \gamma)\,k!}\frac{x^{k}}{\Gamma(\nu k+\delta)},\ \ \ \ \ \nu,\gamma,\delta\in\mathbb{C},Re(\nu),Re(\gamma),Re(\delta)>0,\ x\in \mathbb{R}. \tag{2.1}\]
By considering \(\gamma=1\), (2.1) reduces to the well-known Mittag-Leffler function, see Pillai [19], Gorenflo _et al._[9].
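For numerical experiments, function (2.1) can be evaluated by a simple series truncation. The sketch below is only adequate for moderate arguments (large arguments would require asymptotic expansions); the sanity checks use the classical identities \(E_{1,1}(x)=e^{x}\) and \(E_{2,1}(-z^{2})=\cos z\).

```python
from scipy.special import gamma, poch

def gml(x, nu, delta, gam=1.0, terms=120):
    """Three-parameter (Prabhakar) Mittag-Leffler function (2.1), truncated series.
    Intended for moderate |x| only; poch is the Pochhammer symbol Gamma(g+k)/Gamma(g)."""
    return sum(poch(gam, k) / gamma(k + 1) * x ** k / gamma(nu * k + delta)
               for k in range(terms))

print(gml(2.0, 1.0, 1.0))    # ~7.389  (E_{1,1}(x) = e^x)
print(gml(-1.0, 2.0, 1.0))   # ~0.5403 (E_{2,1}(-z^2) = cos z, here z = 1)
```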
In this paper, as in many others, we are representing the solutions of fractional differential Cauchy problems in terms of Mittag-Leffler-type functions. These applications naturally appear in the fractional calculus, see Mainardi [14].
For our work it is useful to recall the Laplace transform of function (2.1),
\[\int_{0}^{\infty}e^{-\mu x}x^{\delta-1}E^{\gamma}_{\nu,\delta}(\beta x^{\nu} )\,\mathrm{d}x=\frac{\mu^{\nu\gamma-\delta}}{(\mu^{\nu}-\beta)^{\gamma}},\ \ \ \Big{|}\frac{\beta}{\mu^{\nu}}\Big{|}<1. \tag{2.2}\]
Let \(M\in\mathbb{N}\). Below we use the following multivariate analogue of the generalized Mittag-Leffler function
\[E^{\gamma}_{\nu,\delta}(x)=\sum_{k_{1},\ldots,k_{M}=0}^{\infty}\,\prod_{j=1}^ {M}\frac{\Gamma(\gamma_{j}+k_{j})}{\Gamma(\gamma_{j})\,k_{j}!}\,x_{j}^{k_{j}} \,\frac{1}{\Gamma\big{(}\nu\sum_{j=1}^{M}k_{j}+\delta\big{)}}, \tag{2.3}\]
where \(\gamma=(\gamma_{1},\ldots,\gamma_{M})\in\mathbb{C}^{M},\ \nu,\delta\in\mathbb{C}\), with \(Re(\gamma_{1}),\ldots,Re(\gamma_{M}),Re(\nu)>0\), and \(x\in\mathbb{C}^{M}\). Function (2.3) is a particular case of the multivariate Mittag-Leffler introduced by Saxena _et al._[23] and used in Cinque [3] to represent the distribution of the sum of independent generalized Mittag-Leffler random variables.
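A naive truncated evaluation of (2.3) is possible for small \(M\), by summing over a finite multi-index grid. The sketch below is illustrative only (the cost grows as \(\text{terms}^{M}\)) and reduces to (2.1) when \(M=1\).

```python
import itertools
import numpy as np
from scipy.special import gamma, poch

def multivariate_ml(x, nu, delta, gam, terms=40):
    """Multivariate Mittag-Leffler function (2.3) by truncated multi-index series.
    x and gam are sequences of equal length M; truncation is per index."""
    x, gam = np.asarray(x, dtype=float), np.asarray(gam, dtype=float)
    total = 0.0
    for ks in itertools.product(range(terms), repeat=len(x)):
        ks = np.asarray(ks)
        coef = np.prod(poch(gam, ks) / gamma(ks + 1) * x ** ks)
        total += coef / gamma(nu * ks.sum() + delta)
    return total

# With M = 1 and nu = delta = gam = 1 the function is simply exp(x).
print(multivariate_ml([0.7], 1.0, 1.0, [1.0]))   # ~e^0.7 ~ 2.0138
```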
**Lemma 2.1**.: _Let \(M\in\mathbb{N}\) and \(t\geq 0\). Also assume that \(\gamma_{1},\ldots,\gamma_{M}\in\mathbb{C},\ \nu,\delta\in\mathbb{C}\setminus\{0\}\) such that \(Re(\gamma_{1}),\ldots,Re(\gamma_{M}),Re(\nu)>0\) and \(\eta_{1}\neq\cdots\neq\eta_{M}\in\mathbb{C}\). Then,_
\[\Bigg{(}\mathop{\ast}\limits_{i=1}^{M}x^{\delta_{i}-1}E^{\gamma_{i}}_{\nu,\delta_{i}}(\eta_{i}x^{\nu})\Bigg{)}(t)=t^{\sum_{h=1}^{M}\delta_{h}-1}\,E^{(\gamma_{1},\ldots,\gamma_{M})}_{\nu,\,\sum_{h=1}^{M}\delta_{h}}\big{(}\eta_{1}t^{\nu},\ldots,\eta_{M}t^{\nu}\big{)}, \tag{2.4}\]
_where the convolution is performed with respect to the non-negative variable \(x\geq 0\)._
Proof.: It is sufficient to show that for \(n\in\mathbb{N}\) and suitable \(\nu,\delta_{0},\delta,\gamma_{0},\ldots,\gamma_{n},\eta_{0},\ldots,\eta_{n},\)
\[\Bigg{(}x^{\delta_{0}-1}E_{\nu,\delta_{0}}^{\gamma_{0}}(\eta_{0}x^{\nu})*x^{\delta-1}E_{\nu,\delta}^{(\gamma_{1},\ldots,\gamma_{n})}(\eta_{1}x^{\nu},\ldots,\eta_{n}x^{\nu})\Bigg{)}(t)=t^{\delta_{0}+\delta-1}E_{\nu,\delta_{0}+\delta}^{(\gamma_{0},\gamma_{1},\ldots,\gamma_{n})}(\eta_{0}t^{\nu},\eta_{1}t^{\nu},\ldots,\eta_{n}t^{\nu}).\]
Indeed,
\[\Bigg{(}x^{\delta_{0}-1}E_{\nu,\delta_{0}}^{\gamma_{0}}(\eta_{0}x ^{\nu})*E_{\nu,\delta}^{(\gamma_{1},\ldots,\gamma_{n})}(\eta_{1}x^{\nu},\ldots,\eta_{n}x^{\nu})\Bigg{)}(t)\] \[\quad=\sum_{k_{0}=0}^{\infty}\frac{\Gamma(\gamma_{0}+k_{0})}{ \Gamma(\gamma_{0})\,k_{0}!}\eta_{0}^{k_{0}}\sum_{k_{1},\ldots,k_{n}=0}^{ \infty}\Bigg{(}\prod_{j=1}^{n}\frac{\Gamma(\gamma_{j}+k_{j})}{\Gamma(\gamma_{j })\,k_{j}!}\eta_{j}^{k_{j}}\Bigg{)}\int_{0}^{t}\frac{(t-x)^{\nu k_{0}+\delta_{ 0}-1}x^{\nu\sum_{j=1}^{n}k_{j}+\delta-1}}{\Gamma(\nu k_{0}+\delta_{0})\Gamma( \nu\sum_{j=1}^{n}k_{j}+\delta)}\,\mathrm{d}x\] \[\quad=\sum_{k_{0}=0}^{\infty}\frac{\Gamma(\gamma_{0}+k_{0})}{ \Gamma(\gamma_{0})\,k_{0}!}\eta_{0}^{k_{0}}\sum_{k_{1},\ldots,k_{n}=0}^{ \infty}\Bigg{(}\prod_{j=1}^{n}\frac{\Gamma(\gamma_{j}+k_{j})}{\Gamma(\gamma_{j })\,k_{j}!}\eta_{j}^{k_{j}}\Bigg{)}\frac{t^{\nu\sum_{j=0}^{n}k_{j}+\delta_{0}+ \delta-1}}{\Gamma(\nu\sum_{j=0}^{n}k_{j}+\delta_{0}+\delta)}\] \[\quad=\sum_{k_{0},\ldots,k_{n}=0}^{\infty}\Bigg{(}\prod_{j=0}^{n} \frac{\Gamma(\gamma_{j}+k_{j})}{\Gamma(\gamma_{j})\,k_{j}!}\Big{(}\eta_{j}t^{ \nu}\Big{)}^{k_{j}}\Bigg{)}\frac{t^{\delta_{0}+\delta-1}}{\Gamma(\nu\sum_{j= 0}^{n}k_{j}+\delta_{0}+\delta)}.\]
For the convolution of \(M\) two-parameter Mittag-Leffler functions we can derive an expression in terms of a linear combination of \(M\) two-parameter Mittag-Leffler functions, all having the same parameters.
**Proposition 2.1**.: _Let \(M\in\mathbb{N}\) and \(t\geq 0\). Also assume that \(\gamma_{1},\ldots,\gamma_{M}\in\mathbb{C},\ \nu,\delta\in\mathbb{C}\setminus\{0\}\) such that \(Re(\gamma_{1}),\ldots,Re(\gamma_{M}),Re(\nu)>0\) and \(\eta_{1}\neq\cdots\neq\eta_{M}\in\mathbb{C}\). Then,_
\[\Bigg{(}\mathop{\ast}\limits_{i=1}^{M}x^{\delta_{i}-1}E_{\nu,\delta_{i}}(\eta_{i}x^{\nu})\Bigg{)}(t)=t^{\sum_{h=1}^{M}\delta_{h}-1}\sum_{i=1}^{M}\frac{\eta_{i}^{M-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})}E_{\nu,\sum_{h=1}^{M}\delta_{h}}\big{(}\eta_{i}t^{\nu}\big{)} \tag{2.5}\]
_where the convolution is performed with respect to the non-negative variable \(x\geq 0\)._
Proof.: First we recall that for \(n,M\in\mathbb{N}_{0}\) and \(\eta_{1}\neq\cdots\neq\eta_{M}\in\mathbb{C}\setminus\{0\}\),
\[\sum_{i=1}^{M}\frac{\eta_{i}^{n}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})}=0,\ \ \text{with}\ \ n\leq M-2. \tag{2.6}\]
Then, we also note that the right-hand side of formula (2.5) can be also written as
\[t^{\sum_{h=1}^{M}\delta_{h}-1}\sum_{i=1}^{M}\frac{\eta_{i}^{M-1}}{\prod_{ \begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})}E_{\nu,\sum_{h=1}^{M}\delta_{h} }\big{(}\eta_{i}t^{\nu}\big{)}=\sum_{k=0}^{\infty}\frac{t^{\nu k+\sum_{h=1}^{ M}\delta_{h}-1}}{\Gamma(\nu k+\sum_{h=1}^{M}\delta_{h})}\sum_{i=1}^{M}\frac{\eta_{i}^{k+M -1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})}. \tag{2.7}\]
We now proceed by induction. The induction base (M=2) can be found in Orsingher and Beghin [17]. Now, assume that (2.5) holds for \(M-1\).
\[\Bigg{(}\mathop{\ast}\limits_{i=1}^{M}x^{\delta_{i}-1}E_{\nu,\delta_{i}}(\eta_{i}x^{\nu})\Bigg{)}(t)\] \[\quad=\sum_{i=1}^{M-1}\frac{\eta_{i}^{M-2}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M-1}(\eta_{i}-\eta_{j})}\int_{0}^{t}x^{\delta_{M}-1}E_{\nu,\delta_{M}}\big{(}\eta_{M}x^{\nu}\big{)}(t-x)^{\sum_{h=1}^{M-1}\delta_{h}-1}E_{\nu,\sum_{h=1}^{M-1}\delta_{h}}\Big{(}\eta_{i}(t-x)^{\nu}\Big{)}\,\mathrm{d}x \tag{2.8}\] \[\quad=\sum_{i=1}^{M-1}\frac{\eta_{i}^{M-2}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M-1}(\eta_{i}-\eta_{j})}\sum_{k=0}^{\infty}\frac{t^{\nu k+\sum_{h=1}^{M}\delta_{h}-1}}{\Gamma\big{(}\nu k+\sum_{h=1}^{M}\delta_{h}\big{)}}\Big{(}\frac{\eta_{i}^{k+1}}{\eta_{i}-\eta_{M}}+\frac{\eta_{M}^{k+1}}{\eta_{M}-\eta_{i}}\Big{)}\] \[\quad=\sum_{i=1}^{M-1}\frac{\eta_{i}^{M-1}\,t^{\sum_{h=1}^{M}\delta_{h}-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})}\,E_{\nu,\sum_{h=1}^{M}\delta_{h}}\big{(}\eta_{i}t^{\nu}\big{)}\] \[\quad\quad-t^{\sum_{h=1}^{M}\delta_{h}-1}\,E_{\nu,\sum_{h=1}^{M}\delta_{h}}\big{(}\eta_{M}t^{\nu}\big{)}\,\eta_{M}\,\sum_{i=1}^{M-1}\,\frac{\eta_{i}^{M-2}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})} \tag{2.9}\] \[\quad=\sum_{i=1}^{M-1}\frac{\eta_{i}^{M-1}\,t^{\sum_{h=1}^{M}\delta_{h}-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}(\eta_{i}-\eta_{j})}\,E_{\nu,\sum_{h=1}^{M}\delta_{h}}\big{(}\eta_{i}t^{\nu}\big{)}+t^{\sum_{h=1}^{M}\delta_{h}-1}\,E_{\nu,\sum_{h=1}^{M}\delta_{h}}\big{(}\eta_{M}t^{\nu}\big{)}\frac{\eta_{M}^{M-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq M\end{subarray}}^{M}(\eta_{M}-\eta_{j})}.\]
where in step (2.8) we used the induction base (i.e. with \(M=2\)) written as in (2.7), and in step (2.9) we suitably used formula (2.6).
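Proposition 2.1 can also be checked numerically in the case \(M=2\), by computing the left-hand side with a quadrature of the convolution integral and the right-hand side from (2.5). The sketch below uses arbitrary parameter values and a truncated-series evaluation of the Mittag-Leffler function.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def ml(x, nu, delta, terms=150):
    """Two-parameter Mittag-Leffler function, truncated series (moderate |x| only)."""
    return sum(x ** k / gamma(nu * k + delta) for k in range(terms))

nu, d1, d2, e1, e2, t = 0.7, 1.1, 1.3, -0.5, -2.0, 1.2

lhs, _ = quad(lambda x: (t - x) ** (d1 - 1) * ml(e1 * (t - x) ** nu, nu, d1)
                        * x ** (d2 - 1) * ml(e2 * x ** nu, nu, d2), 0.0, t)
rhs = t ** (d1 + d2 - 1) * (e1 / (e1 - e2) * ml(e1 * t ** nu, nu, d1 + d2)
                            + e2 / (e2 - e1) * ml(e2 * t ** nu, nu, d1 + d2))
print(lhs, rhs)   # the two values agree up to quadrature accuracy
```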
### Generalization of absolute normal distribution
In [1] the authors introduced the following absolutely continuous positively distributed random variables. Let \(n\in\mathbb{N}\) and \(y>0\),
\[P\{G_{j}^{(n)}(t)\in\mathrm{d}y\}=\frac{y^{j-1}\,\mathrm{d}y}{n^{\frac{j}{n-1} -1}t^{\frac{j}{n(n-1)}}\Gamma(j/n)}e^{-\frac{y^{n}}{(n^{n}t)^{\frac{1}{n-1}}}},\quad t>0,\ j=1,\ \ldots,n-1. \tag{2.10}\]
Note that in the case of \(n=2\) we have only one element and \(G_{1}^{(2)}(t)=|B(2t)|,\ t\geq 0\), with \(B\) being a standard Brownian motion.
If \(G_{1}^{(n)},\ldots,G_{n-1}^{(n)}\) are independent, then the joint density reads, with \(y_{1},\ldots,y_{n-1}>0\),
\[P\Bigg{\{}\bigcap_{j=1}^{n-1}\big{\{}G_{j}^{(n)}(t)\in\mathrm{d}y_{j}\big{\}} \Bigg{\}}=\Big{(}\frac{n}{2\pi}\Big{)}^{\frac{n-1}{2}}\frac{1}{\sqrt{t}}\Bigg{(} \prod_{j=1}^{n-1}y_{j}^{j-1}\,\,\mathrm{d}y_{j}\Bigg{)}\,e^{-(n^{n}t)^{\frac{- 1}{n-1}}\sum_{j=1}^{n-1}y_{j}^{n}}. \tag{2.11}\]
Let \(t>0\) and \(n\geq 2\). It is easy to derive that the Mellin-transform of distribution (2.10) reads, for \(s>0\),
\[\int_{0}^{\infty}y^{s-1}f_{G_{j}^{(n)}(t)}(y)\,\mathrm{d}y=\Big{(}nt^{1/n} \Big{)}^{\frac{s-1}{n-1}}\frac{\Gamma\big{(}\frac{s+j-1}{n}\big{)}}{\Gamma \big{(}\frac{j}{n}\big{)}},\quad j=1,\ldots,n-1.\]
In the independence case, the Mellin-transform of the density, \(f_{G^{(n)}(t)}\), of the product \(G^{(n)}(t)=\prod_{j=1}^{n-1}G^{(n)}_{j}(t)\), is, with \(s>0\),
\[\int_{0}^{\infty}y^{s-1}f_{G^{(n)}(t)}(y)\,\mathrm{d}y=\prod_{j=1}^{n-1}\Bigl{(}nt^{1/n}\Bigr{)}^{\frac{s-1}{n-1}}\frac{\Gamma\bigl{(}\frac{s+j-1}{n}\bigr{)}}{\Gamma\bigl{(}\frac{j}{n}\bigr{)}}=\frac{t^{\frac{s-1}{n}}}{\Gamma\bigl{(}\frac{s-1}{n}+1\bigr{)}}\Gamma(s).\]
where in the last equality we used the following \(n\)-multiplication formula of the Gamma function, applied with \(z=1/n\) and \(z=s/n\),
\[\prod_{j=1}^{n-1}\Gamma\Bigl{(}z+\frac{j-1}{n}\Bigr{)}=\frac{(2\pi)^{\frac{n-1 }{2}}n^{\frac{1}{2}-nz}\,\Gamma(nz)}{\Gamma\Bigl{(}z+\frac{n-1}{n}\Bigr{)}}.\]
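For simulation purposes we observe (this is not stated above, but follows by a change of variables in (2.10)) that \(\big{(}G^{(n)}_{j}(t)\big{)}^{n}\) is Gamma-distributed with shape \(j/n\) and scale \((n^{n}t)^{\frac{1}{n-1}}\), which gives a direct sampling route. The sketch below checks this against the Mellin transform of \(G^{(n)}_{j}(t)\); the parameter values are arbitrary.

```python
import numpy as np
from scipy.special import gamma

def sample_G(n, j, t, size, rng):
    """Draw from density (2.10): Y^n ~ Gamma(shape j/n, scale (n^n t)^(1/(n-1)))."""
    scale = (n ** n * t) ** (1.0 / (n - 1))
    return rng.gamma(shape=j / n, scale=scale, size=size) ** (1.0 / n)

rng = np.random.default_rng(0)
n, j, t, s = 3, 2, 0.8, 2.5
samples = sample_G(n, j, t, 200_000, rng)     # for n = 2, j = 1 this is |B(2t)|
mellin_mc = np.mean(samples ** (s - 1.0))
mellin_exact = ((n * t ** (1.0 / n)) ** ((s - 1.0) / (n - 1.0))
                * gamma((s + j - 1.0) / n) / gamma(j / n))
print(mellin_mc, mellin_exact)   # close, up to Monte Carlo error
```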
## 3 Fractional differential Cauchy problem
In this section we derive an explicit formula for the solution to the fractional Cauchy problem given by (1.1) and (1.2). Hereafter we are considering functions \(f:[0,\infty)\times\mathbb{R}^{d}\longrightarrow\mathbb{R}\) such that \(\lim_{t\longrightarrow\infty}e^{-\mu t}\frac{\partial^{l-1}}{\partial t^{l-1 }}f(t)=0\ \forall\ l\).
**Theorem 3.1**.: _Let \(d,N\in\mathbb{N},\ \nu>0\) and \(\lambda_{0},\dots,\lambda_{N}\in\mathbb{R}\). If_
\[\sum_{k=0}^{N}\lambda_{k}x^{k}=\prod_{j=1}^{M}(x-\eta_{j})^{m_{j}}\ \ \text{with}\ \ \eta_{1},\dots,\eta_{M}\in\mathbb{C}\setminus\{0\}, \tag{3.1}\]
_then, the solution to the fractional Cauchy problem of parameter \(\nu\)_
\[\begin{cases}\sum_{k=0}^{N}\lambda_{k}\frac{\partial^{\nu k}}{ \partial t^{\nu k}}F(t,x)=0,\ \ t\geq 0,\ x\in\mathbb{R}^{d}\\ \frac{\partial^{l}F}{\partial t^{l}}\Big{|}_{t=0}=f_{l}(x),\ \ x\in\mathbb{R}^{d},\ l=0, \dots,\lceil\nu N\rceil-1,\end{cases} \tag{3.2}\]
_is the function \(F:[0,\infty)\times\mathbb{R}^{d}\longrightarrow\mathbb{R}\) given by_
\[F(t,x)=\sum_{l=0}^{\lceil\nu N\rceil-1}f_{l}(x)\sum_{k=k_{l}}^{N}\lambda_{k}\, t^{\nu(N-k)+l}\,E^{(m_{1},\dots,m_{M})}_{\nu,\,\nu(N-k)+l+1}\Bigl{(}\eta_{1}t^{ \nu},\dots,\eta_{M}t^{\nu}\Bigr{)}, \tag{3.3}\]
_with \(k_{l}=\min\{k=1,\dots,N\,:\,\nu k>l\},\ l=0,\dots,\lceil\nu N\rceil-1\)._
Note that \(k_{0}=1\) and \(l-1<\nu k\leq l\) for all \(k_{l-1}\leq k<k_{l}\). Formula (3.3) can be also written inverting the sums into \(\sum_{k=1}^{N}\sum_{l=0}^{\lceil\nu k\rceil-1}\).
Condition (3.1) implies that \(\eta_{1},\dots,\eta_{M}\) are the \(M\) roots of the \(N\)-th order polynomial with coefficients \(\lambda_{0},\dots,\lambda_{N}\), respectively with algebraic multiplicity \(m_{1},\dots,m_{M}\geq 1\). In the case \(M=N\), all the roots have algebraic multiplicity equal to \(1\) and the solution can be expressed in terms of a combination of Mittag-Leffler functions (see Theorem 3.3).
Proof.: By means of the \(t\)-Laplace transform, the differential equation in problem (3.2) turns into, for \(\mu\geq 0\) (we use the notation \(G(\mu,x)=\mathcal{L}(F)(\mu,x)=\int_{0}^{\infty}e^{-\mu t}F(t,x)\,\mathrm{d}t\) and keep in mind formula (1.4))
\[0=\mathcal{L}\!\left(\sum_{k=0}^{N}\lambda_{k}\frac{\partial^{\nu k}}{ \partial t^{\nu k}}F\right)=\sum_{k=0}^{N}\lambda_{k}\,\mathcal{L}\!\left( \frac{\partial^{\nu k}}{\partial t^{\nu k}}F\right)=\lambda_{0}G+\sum_{k=1}^{N }\lambda_{k}\Big{[}\mu^{\nu k}G-\sum_{l=1}^{[\nu k]}\mu^{\nu k-l}f_{l-1}\Big{]},\]
which gives
\[G(\mu,x)=\frac{\sum_{k=1}^{N}\lambda_{k}\sum_{l=1}^{[\nu k]}\mu^ {\nu k-l}f_{l-1}(x)}{\sum_{k=0}^{N}\lambda_{k}\mu^{\nu k}}=\frac{ \sum_{l=1}^{[\nu N]}f_{l-1}(x)\sum_{k=k_{l-1}}^{N}\!\lambda_{k}\, \mu^{\nu k-l}}{\prod_{h=1}^{M}\!\big{(}\mu^{\nu}-\eta_{h}\big{)}^{ m_{h}}}, \tag{3.4}\]
where we used hypothesis (3.1) and \(k_{l-1}\) is defined in the statement.
We now compute the \(\mu\)-Laplace inverse of the functions \(\mu^{\nu k-l}/\prod_{h=1}^{M}\!\big{(}\mu^{\nu}-\eta_{h}\big{)}^{m_{h}}\), for \(l=1,\ldots,\lceil N\nu\rceil\) and \(k=k_{l-1},\ldots,N\), by properly applying formula (2.2). Let us consider \(M_{k}=\min\{n\,:\,\sum_{h=1}^{n}m_{h}\geq k\}\), therefore \(M_{1}=1\) because \(m_{1}\geq 1\) and \(M_{N}=M\). Clearly, \(\sum_{h=1}^{M_{k}}m_{h}\geq k>\sum_{h=1}^{M_{k}-1}m_{h}\) and \(m_{M_{k}}\geq k-\sum_{h=1}^{M_{k}-1}m_{h}\); clearly \(\sum_{h=1}^{M}m_{h}=N\). We can decompose \(\nu k-l\) as follows (it is not the only way):
\[\nu k-l =\nu\Big{(}\sum_{h=1}^{M_{k}-1}m_{h}+k-\sum_{h=1}^{M_{k}-1}m_{h} \Big{)}-l\,\frac{\sum_{h=1}^{M}m_{h}}{N}\] \[=\nu\sum_{h=1}^{M_{k}-1}m_{h}+\nu\Big{(}k-\sum_{h=1}^{M_{k}-1}m_ {h}\pm\sum_{h=M_{k}}^{M}m_{h}\Big{)}-l\,\frac{\sum_{h=1}^{M}m_{h}}{N}\] \[=\sum_{h=1}^{M_{k}-1}\Big{(}\nu m_{h}-l\frac{m_{h}}{N}\Big{)}+ \Bigg{[}\nu m_{M_{k}}-\Big{(}\nu m_{M_{k}}-\nu k+\nu\sum_{h=1}^{M_{k}-1}m_{h} +l\frac{m_{M_{k}}}{N}\Big{)}\Bigg{]}\] \[\quad+\sum_{h=M_{k}+1}^{M}\Bigg{[}\nu m_{h}-\Big{(}\nu m_{h}+l \frac{m_{h}}{N}\Big{)}\Bigg{]}. \tag{3.5}\]
In view of (3.5) we can write, by denoting with \(\mathcal{L}^{-1}\) the inverse \(\mu\)-Laplace transform operator, for \(l=1,\ldots,\lceil N\nu\rceil\) and \(k=k_{l-1},\ldots,N\)
\[\mathcal{L}^{-1}\Bigg{(}\frac{\mu^{\nu k-l}}{\prod_{h=1}^{M}\! \big{(}\mu^{\nu}-\eta_{h}\big{)}^{m_{h}}}\Bigg{)}(t)\] \[\quad=\prod_{h=1}^{M_{k}-1}\mathcal{L}^{-1}\Bigg{(}\frac{\mu^{\nu m _{h}-lm_{h}/N}}{\big{(}\mu^{\nu}-\eta_{h}\big{)}^{m_{h}}}\Bigg{)}(t)\,\mathcal{L }^{-1}\Bigg{(}\frac{\mu^{\nu m_{M_{k}}-\big{(}\nu\sum_{h=1}^{M_{k}}m_{h}-\nu k+lm_{M_{k}}/N\big{)}}}{\big{(}\mu^{\nu}-\eta_{h}\big{)}^{m_{M_{k}}}} \Bigg{)}(t)\] \[\quad\quad\times\prod_{h=M_{k}+1}^{M}\mathcal{L}^{-1}\Bigg{(} \frac{\mu^{-lm_{h}/N}}{\big{(}\mu^{\nu}-\eta_{h}\big{)}^{m_{h}}}\Bigg{)}(t) \tag{3.6}\]
\[=t^{\nu(N-k)+l-1}E^{(m_{1},\dots,m_{M})}_{\nu,\,\nu(N-k)+l}\big{(}\eta_{1}t^{\nu},\dots,\eta_{M}t^{\nu}\big{)}, \tag{3.7}\]
where in step (3.6) we used (2.2) and in the last step we used Lemma 2.1. Note that in step (3.6) it is necessary to keep the "\(\delta\)" terms greater than \(0\) (see (2.2)) and this is the main reason of using the above decomposition of \(\nu k-l\).
By combining (3.4) and (3.7) we readily obtain result (3.3) (after the change of variable \(l^{\prime}=l-1\)).
**Remark 3.1** (Non-homogeneous equation).: Under the hypothesis of Theorem 3.1 we can easily study the Cauchy problem in the case of a non-homogeneous fractional equation. In details, for \(g:\mathbb{R}\times\mathbb{R}^{d}\longrightarrow\mathbb{R}\), such that there exists the \(t\)-Laplace transform, the solution of
\[\left\{\begin{aligned} &\sum_{k=0}^{N}\lambda_{k}\frac{ \partial^{\nu k}}{\partial t^{\nu k}}F(t,x)=g(t,x),\ \ t\geq 0,\ x\in\mathbb{R}^{d}\\ &\frac{\partial^{l}F}{\partial t^{l}}\Big{|}_{t=0}=f_{l}(x),\ \ x\in\mathbb{R}^{d},\ l=0,\dots,\lceil\nu N\rceil-1,\end{aligned}\right.\]
reads
\[F(t,x) \tag{3.8}\] \[=\sum_{l=0}^{\lceil N\nu\rceil-1}f_{l}(x)\sum_{k=k_{l}}^{N} \lambda_{k}\,t^{\nu(N-k)+l}\,E^{(m_{1},\dots,m_{M})}_{\nu,\,\nu(N-k)+l+1} \big{(}\eta t^{\nu}\big{)}-\int_{0}^{t}g(t-y,x)\,y^{\nu N-1}E^{(m_{1},\dots,m_ {M})}_{\nu,\nu N}\big{(}\eta y^{\nu}\big{)}\,\mathrm{d}y,\]
where \(\eta=(\eta_{1},\dots,\eta_{M})\).
The above results easily follows by observing that formula (3.4) becomes
\[G(\mu,x)=\frac{\sum_{l=1}^{\lceil N\nu\rceil}f_{l-1}(x)\sum_{k=k_{l-1}}^{N} \lambda_{k}\,\mu^{\nu k-l}-\mathcal{L}(g)(\mu,x)}{\prod_{h=1}^{M} \big{(}\mu^{\nu}-\eta_{h}\big{)}^{m_{h}}}\]
and we observe that the \(\mu\)-Laplace inverse of the term concerning the function \(g\) is
\[\mathcal{L}^{-1}\Bigg{(}\mathcal{L}(g)(\mu,x)\Big{(}\prod_{h=1}^{M}\big{(}\mu ^{\nu}-\eta_{h}\big{)}^{m_{h}}\Big{)}^{-1}\Bigg{)}(t,x)=\int_{0}^{t}g(t-y,x) \,y^{\nu N-1}E^{(m_{1},\dots,m_{M})}_{\nu,\nu N}\big{(}\eta y^{\nu}\big{)}\, \mathrm{d}y\]
where we used that \(\mathcal{L}^{-1}\Big{(}\Big{(}\prod_{h=1}^{M}\big{(}\mu^{\nu}-\eta_{h}\big{)} ^{m_{h}}\Big{)}^{-1}\Big{)}(t,x)=t^{\nu N-1}E^{(m_{1},\dots,m_{M})}_{\nu,\nu N} \big{(}\eta t^{\nu}\big{)}\) (obtained by proceeding as shown for (3.7)).
Note that in the case of \(g\) being constant with respect to the variable \(t\), the last term of (3.8) reads \(-g(x)t^{\nu N}E^{(m_{1},\dots,m_{M})}_{\nu,\nu N+1}\big{(}\eta t^{\nu}\big{)}\).
**Remark 3.2**.: Consider the real sequence \(\{\nu_{n}\}_{n\in\mathbb{N}}\) such that \(\nu_{n}\longrightarrow\nu>0\) and \(\lceil\nu_{n}N\rceil=\lceil\nu N\rceil>0\ \forall\ n\). Then,
\[F_{\nu}(t,x)=\lim_{n\to\infty}F_{\nu_{n}}(t,x),\ \ t\geq 0,\ x\in\mathbb{R}^{d}, \tag{3.9}\]
where \(F_{\nu},F_{\nu_{n}}\) are respectively the solutions to the problem of parameter \(\nu\) and \(\nu_{n}\ \forall\ n\), with the same initial conditions. This means that we can connect the limit of the solutions (pointwise) to the "limit" of the Cauchy problems (where the initial conditions stay the same because \(\lceil\nu_{n}N\rceil=\lceil\nu N\rceil\ \forall\ n\)).
Result (3.9) comes from the continuity of the function (2.3) with respect to the fractional parameter \(\nu>0\). This can be seen as a consequence of the continuity of the Gamma function on the real half-line and a suitable application of the dominated convergence theorem.
**Theorem 3.2**.: _Let \(d,N,n\in\mathbb{N},\ \nu>0\). Let \(\lambda_{0},\ldots,\lambda_{N}\in\mathbb{R}\) and \(\eta_{1},\ldots,\eta_{M}\in\mathbb{C}\setminus\{0\}\) satisfying condition (3.1). Then, the solution \(F_{\nu/n}\) of the problem of parameter \(\nu/n\)_
\[\begin{cases}\sum_{k=0}^{N}\lambda_{k}\frac{\partial^{\nu k/n}}{ \partial t^{\nu k/n}}F(t,x)=0,\ \ t\geq 0,\ x\in\mathbb{R},\\ \frac{\partial^{l}F}{\partial t^{l}}\Big{|}_{t=0}=f_{l}(x),\ \ x\in \mathbb{R}^{d},\ l=0,\ldots,\left\lceil\frac{N\nu}{n}\right\rceil-1,\end{cases} \tag{3.10}\]
_can be expressed as_
\[F_{\nu/n}(t,x)=\mathbb{E}\,F_{\nu}\bigg{(}\prod_{j=1}^{n-1}G_{j}^{(n)}(t),\,x \bigg{)}, \tag{3.11}\]
_where the \(G_{j}^{(n)}(t)\) are the random variables introduced in Section 2.2 and \(F_{\nu}\) is the solution to a problem of parameter \(\nu\) with suitable initial condition_
\[\begin{cases}\sum_{k=0}^{N}\lambda_{k}\frac{\partial^{\nu k}}{\partial t^{\nu k}}F(t,x)=0,\ \ t\geq 0,\ x\in\mathbb{R}\\ \frac{\partial^{l}F}{\partial t^{l}}\Big{|}_{t=0}=\begin{cases}f_{h},\ \ l=hn,\ \text{with}\ h=0,\ldots,\lceil N\nu/n\rceil-1,\\ 0,\ \ \text{otherwise}.\end{cases}\end{cases} \tag{3.12}\]
Note that the conditions of the problem (3.10) of degree \(\nu/n\) appear in the associated problem (3.12) in the derivatives whose order is a multiple of \(n\), while the other initial conditions are set equal to \(0\). We also point out that all the conditions of the original problem always appear in the related problem since \(n\Big{(}\lceil\nu N/n\rceil-1\Big{)}\leq\lceil\nu N\rceil-1\).
Proof.: We begin by showing a possible way to express the multivariate Mittag-Leffler (2.3) of fractional order \(\nu/n\) in terms of that of fractional order \(\nu\). Remember that for the gamma function, with \(z\in\mathbb{C}\) and \(n\in\mathbb{N}\) we can write (thanks to the \(n\)-multiplication formula of the Gamma function)
\[\Gamma\Big{(}z+\frac{n-1}{n}\Big{)}^{-1}=\frac{\prod_{j=1}^{n-1}\Gamma\Big{(} z+\frac{j-1}{n}\Big{)}}{(2\pi)^{\frac{n-1}{2}}n^{\frac{1}{2}-nz}\Gamma(nz)}\]
\[=\frac{1}{(2\pi)^{\frac{n-1}{2}}n^{\frac{1}{2}-nz}\Gamma(nz)}\prod_{j=1} ^{n-1}\int_{0}^{\infty}e^{-w_{j}}w_{j}^{z+\frac{j-1}{n}-1}\,\mathrm{d}w_{j}. \tag{3.13}\]
Let \(x\in\mathbb{C}^{M}\) and \(L,h>0\),
\[E_{\frac{\nu}{n},\,\frac{\nu}{n}L+h}^{(m_{1},\dots,m_{M})}(x) =\sum_{k_{1},\dots,k_{M}=0}^{\infty}\Biggl{(}\prod_{j=1}^{M}\frac{\Gamma(m_{j}+k_{j})}{\Gamma(m_{j})\,k_{j}!}x_{j}^{k_{j}}\Biggr{)}\Gamma\Biggl{(}\frac{\nu}{n}\sum_{j=1}^{M}k_{j}+\frac{\nu}{n}L+h\Biggr{)}^{-1} \tag{3.14}\] \[=\sum_{k_{1},\dots,k_{M}=0}^{\infty}\Biggl{(}\prod_{j=1}^{M}\frac{\Gamma(m_{j}+k_{j})}{\Gamma(m_{j})\,k_{j}!}x_{j}^{k_{j}}\Biggr{)}\frac{\Gamma\Bigl{(}\nu\sum_{j=1}^{M}k_{j}+\nu L+nh-(n-1)\Bigr{)}^{-1}}{(2\pi)^{\frac{n-1}{2}}n^{\frac{1}{2}-\bigl{(}\nu\sum_{j=1}^{M}k_{j}+\nu L+nh-(n-1)\bigr{)}}}\] \[\quad\times\prod_{j=1}^{n-1}\int_{0}^{\infty}e^{-w_{j}}w_{j}^{\frac{1}{n}\bigl{(}\nu\sum_{i=1}^{M}k_{i}+\nu L+nh-n+j\bigr{)}-1}\,\mathrm{d}w_{j}\] \[=\frac{n^{\nu L+n(h-1)+1/2}}{(2\pi)^{\frac{n-1}{2}}}\int_{0}^{\infty}\cdots\int_{0}^{\infty}\prod_{j=1}^{n-1}e^{-w_{j}}w_{j}^{\frac{1}{n}\bigl{(}\nu L+nh-n+j\bigr{)}-1}\,\mathrm{d}w_{j}\] \[\quad\times E_{\nu,\,\nu L+n(h-1)+1}^{(m_{1},\dots,m_{M})}\Bigl{(}x\,n^{\nu}\prod_{j=1}^{n-1}w_{j}^{\nu/n}\Bigr{)} \tag{3.15}\]
where in (3.14) we used (3.13) with \(z=\frac{\nu}{n}\sum_{j=1}^{M}k_{j}+h+\frac{\nu}{n}L-\frac{n-1}{n}\).
Now we apply (3.15) (with \(h=l+1\) and \(L=N-k\)) to formula (3.3) and derive result (3.11). Let us consider \(\eta=(\eta_{1},\dots,\eta_{M})\) given in the hypotheses, then
\[F_{\nu/n}(t,x) =\sum_{l=0}^{\left\lceil\frac{\nu N}{n}\right\rceil-1}f_{l}(x) \sum_{k=k_{l}}^{N}\lambda_{k}\,t^{\frac{\nu}{n}(N-k)+l}\,E_{\frac{\nu}{n},\frac {\nu}{n}(N-k)+l+1}^{(m_{1},\dots,m_{M})}\Bigl{(}\eta_{1}t^{\nu/n},\dots,\eta_{M} t^{\nu/n}\Bigr{)}\] \[=\sum_{l=0}^{\left\lceil\frac{\nu N}{n}\right\rceil-1}f_{l}(x) \sum_{k=k_{l}}^{N}\lambda_{k}\,t^{\frac{\nu}{n}(N-k)+l}\,\frac{n^{\nu(N-k)+ nl+1/2}}{(2\pi)^{\frac{n-1}{2}}}\int_{0}^{\infty}\cdots\int_{0}^{\infty} \Biggl{(}\prod_{j=1}^{n-1}\mathrm{d}w_{j}\Biggr{)}\] \[\quad\times\Biggl{(}\prod_{j=1}^{n-1}e^{-w_{j}}\Biggr{)}\Biggl{(} \prod_{j=1}^{n-1}w_{j}^{\frac{1}{n}\bigl{(}\nu(N-k)+nl-n+j\bigr{)}-1}\Biggr{)}E _{\nu,\,\nu(N-k)+nl+1}^{(m_{1},\dots,m_{M})}\Biggl{(}\eta\Bigl{(}nt^{1/n}\prod_ {j=1}^{n-1}w_{j}^{1/n}\Bigr{)}^{\nu}\Biggr{)}\] \[=\Bigl{(}\frac{n}{2\pi}\Bigr{)}^{\frac{n-1}{2}}\frac{1}{\sqrt{t}} \int_{0}^{\infty}\cdots\int_{0}^{\infty}\Biggl{(}\prod_{j=1}^{n-1}\mathrm{d}y_{ j}\Biggr{)}\Biggl{(}\prod_{j=1}^{n-1}y_{j}^{j-1}\Biggr{)}\Biggl{(}\prod_{j=1}^{n-1}e^{- \frac{y_{j}^{n}}{(n^{n}t)^{\frac{1}{n-1}}}}\Biggr{)}\] \[\quad\times\sum_{l=0}^{\left\lceil\frac{\nu N}{n}\right\rceil-1}f _{l}(x)\sum_{k=k_{l}}^{N}\lambda_{k}\Biggl{(}\prod_{j=1}^{n-1}y_{j}\Biggr{)}^{ \nu(N-k)+nl}E_{\nu,\,\nu(N-k)+nl+1}^{(m_{1},\dots,m_{M})}\Biggl{(}\eta\prod_{j=1 }^{n-1}y_{j}^{\nu}\Biggr{)} \tag{3.16}\]
where in the last step we used the change of variables
\[nt^{1/n}\prod_{j=1}^{n-1}w_{j}^{1/n}=\prod_{j=1}^{n-1}y_{j}\iff w_{j}=\frac{y_{ j}^{n}}{\bigl{(}n^{n}t\bigr{)}^{\frac{1}{n-1}}},\,\,\forall\,\,j\,\Longrightarrow \,\prod_{j=1}^{n-1}\mathrm{d}w_{j}=\frac{\prod_{j=1}^{n-1}\mathrm{d}y_{j}\,y_{j}^ {n-1}}{nt}\]
and we performed some simplifications.
At last, we show that the second line of (3.16) coincides with the time-changed solution \(F_{\nu}\Big{(}\prod_{j=1}^{n-1}G_{j}^{(n)}(t),\,x\Big{)}\) of the associated problem (3.12). Let us denote with \(\tilde{f}_{l}\) the function appearing in the \(l\)-th condition of problem (3.12) and with \(\tilde{k}_{l}=\min\{k=1,\dots,N\,:\,\nu k>l\}\) for \(l=0,\dots,\lceil\nu N\rceil-1\). Then, the solution of the related Cauchy problem reads
\[F_{\nu}(s,x)=\sum_{l=0}^{\lceil\nu N\rceil-1}\tilde{f}_{l}(x)\sum_{k=\tilde{k}_{l}}^{N}\lambda_{k}\,s^{\nu(N-k)+l}\,E_{\nu,\,\nu(N-k)+l+1}^{(m_{1},\dots,m_{M})}\Big{(}\eta_{1}s^{\nu},\dots,\eta_{M}s^{\nu}\Big{)},\]
where the functions \(\tilde{f}_{l}\) are identically null for \(l\neq nh\) with \(h=0,\dots,\lceil\nu N/n\rceil-1\), therefore we can write (removing the indexes of the null terms and performing the change of variable \(l=nh\))
\[F_{\nu}(s,x)=\sum_{h=0}^{\lceil\frac{\nu N}{n}\rceil-1}\tilde{f}_{nh}(x)\sum_{k=\tilde{k}_{nh}}^{N}\lambda_{k}\,s^{\nu(N-k)+nh}\,E_{\nu,\,\nu(N-k)+nh+1}^{(m_{1},\dots,m_{M})}\Big{(}\eta_{1}s^{\nu},\dots,\eta_{M}s^{\nu}\Big{)}.\]
By observing that \(\tilde{k}_{nh}=\min\{k=1,\dots,N:\nu k>nh\}=\min\{k=1,\dots,N:\nu k/n>h\}=k_{h} \ \forall\ h\), we obtain the last line of (3.16) by setting \(s=\prod_{j=1}^{n-1}y_{j}\).
**Remark 3.3** (Brownian subordination).: If \(n=2\), formula (3.11) becomes
\[F_{\nu/2}(t,x)=\mathbb{E}\,F_{\nu}\Big{(}\,|B(2t)|,\,x\Big{)}, \tag{3.17}\]
with \(B\) standard Brownian motion (see Section 2.2).
Furthermore, by keeping in mind (3.17) and iterating the same argument, we obtain that
\[F_{\nu/2^{n}}(t,x)=\mathbb{E}\,F_{\nu}\Big{(}\,|B_{n}(2|B_{n-1}(2|\cdots 2|B_{1}(2t)| \cdots|\,)\,|\,)\,|,\,x\Big{)}, \tag{3.18}\]
where \(B_{1},\dots,B_{n}\) are independent standard Brownian motions and \(F_{\nu}\) is the solution of the associated problem of the form (3.12) with \(2^{n}\) replacing \(n\).
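The relation (3.17) can be illustrated numerically in the simplest setting \(N=1\), \(\nu=1\): for the relaxation problem \(\frac{\mathrm{d}^{\nu}}{\mathrm{d}t^{\nu}}F=\eta F\), \(F(0)=1\), one has \(F_{\nu}(t)=E_{\nu}(\eta t^{\nu})\), and (3.17) states \(E_{1/2}(\eta\sqrt{t})=\mathbb{E}\,e^{\eta|B(2t)|}\). The following Monte Carlo sketch, with arbitrary parameter values, checks this identity.

```python
import numpy as np
from scipy.special import gamma

def ml(x, nu, delta=1.0, terms=200):
    """Two-parameter Mittag-Leffler function, truncated series (moderate |x| only)."""
    return sum(x ** k / gamma(nu * k + delta) for k in range(terms))

# F_nu(t) = E_nu(eta t^nu) solves d^nu F/dt^nu = eta F, F(0) = 1.
# Remark 3.3 with nu = 1: E_{1/2}(eta sqrt(t)) = E[ exp(eta |B(2t)|) ].
rng = np.random.default_rng(0)
eta, t = -1.3, 0.9
half_order = ml(eta * np.sqrt(t), 0.5)
subordinated = np.mean(np.exp(eta * np.abs(rng.normal(0.0, np.sqrt(2.0 * t), size=500_000))))
print(half_order, subordinated)   # the two values agree up to Monte Carlo error
```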
### Algebraic multiplicities equal to 1
In this section we restrict ourselves to the case where the characteristic polynomial in (3.1) has all distinct roots. This hypothesis permits us to present a more elegant result than that of Theorem 3.1.
**Theorem 3.3**.: _Let \(d,N\in\mathbb{N},\ \nu>0\) and \(\lambda_{0},\dots,\lambda_{N}\in\mathbb{R}\). If_
\[\sum_{k=0}^{N}\lambda_{k}x^{k}=\prod_{j=1}^{N}(x-\eta_{j})\ \ \text{with}\ \ \eta_{1},\dots,\eta_{N}\in\mathbb{C}\setminus\{0\}, \tag{3.19}\]
_then, the solution to the fractional Cauchy problem_
\[\begin{cases}\sum_{k=0}^{N}\lambda_{k}\frac{\partial^{\nu k}}{ \partial t^{\nu k}}F(t,x)=0,\ \ t\geq 0,\ x\in\mathbb{R}\\ \frac{\partial^{l}F}{\partial t^{l}}\Big{|}_{t=0}=f_{l}(x),\ \ x\in \mathbb{R}^{d},\ l=0,\dots,\lceil\nu N\rceil-1,\end{cases} \tag{3.20}\]
_is the function \(F:[0,\infty)\times\mathbb{R}^{d}\longrightarrow\mathbb{R}\) given by_
\[F(t,x)=\sum_{h=1}^{N}\sum_{l=0}^{\lceil\nu N\rceil-1}E_{\nu,l+1}\big{(}\eta_{h}t^{\nu}\big{)}f_{l}(x)\,t^{l}\sum_{k=k_{l}}^{N}\frac{\lambda_{k}\,\eta_{h}^{k-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}, \tag{3.21}\]
_with \(k_{l}=\min\{k=1,\ldots,N\,:\,\nu k>l\},\ l=0,\ldots,\lceil\nu N\rceil-1\)._
Note that in result (3.21) the fractional order \(\nu\) influences only the fractional order of the Mittag-Leffler function (and the number of initial conditions), so the coefficients of the linear combination are constant (with respect to \(\nu\)). We point out that the series in (3.21) can be inverted becoming \(\sum_{k=1}^{N}\sum_{l=0}^{\lceil\nu k\rceil-1}\).
Proof.: First we note that, for \(n\in\mathbb{N}_{0}\) and \(l\in\mathbb{C}\),
\[E_{\nu,\,n\nu+l}(x)=\frac{E_{\nu,l}(x)}{x^{n}}-\sum_{j=1}^{n}\frac{x^{-j}}{ \Gamma\big{(}(n-j)\nu+l\big{)}}. \tag{3.22}\]
Now, we proceed as in the proof of Theorem 3.1 and we perform the \(t\)-Laplace transform of the equation in problem (3.20). In this case, formula (3.4) reads
\[G(\mu,x)=\frac{\sum_{l=1}^{\lceil\nu N\rceil}f_{l-1}(x)\sum_{k=k_{l-1}}^{N} \lambda_{k}\,\mu^{\nu k-l}}{\prod_{h=1}^{N}\big{(}\mu^{\nu}-\eta_{h} \big{)}}. \tag{3.23}\]
We now invert the functions \(\mu^{\nu k-l}/\prod_{h=1}^{N}\big{(}\mu^{\nu}-\eta_{h}\big{)}\) for \(l=1,\ldots,\lceil\nu N\rceil\) and \(k=k_{l-1}+1,\ldots,N\). We note that
\[\nu k-l=\nu-l\frac{N-k+1}{N}+(k-1)\Big{(}\nu-\frac{l}{N}\Big{)}\]
and therefore we write
\[\mathcal{L}^{-1}\Bigg{(}\frac{\mu^{\nu k-l}}{\prod_{h=1}^{N}\big{(}\mu^{\nu}-\eta_{h}\big{)}}\Bigg{)}(t)\] \[=\mathcal{L}^{-1}\Bigg{(}\frac{\mu^{\nu-l(N-k+1)/N}}{\mu^{\nu}-\eta_{1}}\Bigg{)}(t)\,\prod_{h=2}^{k}\mathcal{L}^{-1}\Bigg{(}\frac{\mu^{\nu-l/N}}{\mu^{\nu}-\eta_{h}}\Bigg{)}(t)\,\prod_{h=k+1}^{N}\mathcal{L}^{-1}\Big{(}\frac{1}{\mu^{\nu}-\eta_{h}}\Big{)}(t)\] \[=t^{l(N-k+1)/N-1}E_{\nu,l\frac{N-k+1}{N}}\big{(}\eta_{1}t^{\nu}\big{)}*\mathop{\ast}\limits_{h=2}^{k}t^{l/N-1}E_{\nu,l/N}\big{(}\eta_{h}t^{\nu}\big{)}*\mathop{\ast}\limits_{h=k+1}^{N}t^{\nu-1}E_{\nu,\nu}\big{(}\eta_{h}t^{\nu}\big{)}\] \[=t^{\nu(N-k)+l-1}\sum_{h=1}^{N}\frac{\eta_{h}^{N-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}E_{\nu,\,\nu(N-k)+l}\big{(}\eta_{h}t^{\nu}\big{)} \tag{3.24}\] \[=\sum_{h=1}^{N}\frac{\eta_{h}^{N-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}\Bigg{(}\frac{t^{l-1}}{\eta_{h}^{N-k}}\,E_{\nu,l}\big{(}\eta_{h}t^{\nu}\big{)}-\sum_{i=1}^{N-k}\frac{t^{\nu(N-k-i)+l-1}}{\eta_{h}^{i}\,\Gamma\big{(}\nu(N-k-i)+l\big{)}}\Bigg{)} \tag{3.25}\]
\[=\sum_{h=1}^{N}\frac{t^{l-1}\eta_{h}^{k-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}E_{\nu,l}\big{(}\eta_{h}t^{\nu}\big{)}-\sum_{i=1}^{N-k}\frac{t^{\nu(N-k-i)+l-1}}{\Gamma\big{(}\nu(N-k-i)+l\big{)}}\sum_{h=1}^{N}\frac{\eta_{h}^{N-1-i}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})} \tag{3.26}\]
\[=\sum_{h=1}^{N}\frac{t^{l-1}\eta_{h}^{k-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}E_{\nu,l}\big{(}\eta_{h}t^{\nu} \big{)}, \tag{3.27}\]
where in step (3.24) we used Proposition 2.1, in step (3.25) we used (3.22) and changed the order of the sums in the second term, and in step (3.26) we used formula (2.6) (note that \(N-i-1\leq N-2\) for each \(i=1,\ldots,N-k\)). Finally, with formula (3.27) at hand, the inversion of (3.23) yields the claimed result (3.21) (after the change of variable \(l^{\prime}=l-1\)).
We observe that in the case where all the initial conditions are equal to null functions, except the first one, result (3.21) simplifies into
\[F(t,x)=\sum_{h=1}^{N}E_{\nu,1}\big{(}\eta_{h}t^{\nu}\big{)}f_{0}(x)\sum_{k=1}^ {N}\frac{\lambda_{k}\,\eta_{h}^{k-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}. \tag{3.28}\]
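As an illustration, formula (3.28) can be evaluated numerically for a telegraph-type equation \(\frac{\partial^{2\nu}F}{\partial t^{2\nu}}+2\lambda\frac{\partial^{\nu}F}{\partial t^{\nu}}+c^{2}\alpha^{2}F=0\) with \(F(0)=1\) and null first derivative. The sketch below uses arbitrary parameters and, for \(\nu=1\), checks the formula against a standard ODE solver; it is only a minimal numerical illustration.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import solve_ivp

def ml(x, nu, delta=1.0, terms=200):
    """Two-parameter Mittag-Leffler function, truncated series (moderate |x| only)."""
    return sum(x ** k / gamma(nu * k + delta) for k in range(terms))

def solution_328(t, nu, lambdas, roots):
    """Formula (3.28): solution with f_0 = 1 and all other initial conditions zero.
    lambdas[k] is the coefficient of the k-th derivative; roots are the distinct eta_h."""
    total = 0.0
    for h, eta_h in enumerate(roots):
        denom = np.prod([eta_h - eta_j for j, eta_j in enumerate(roots) if j != h])
        coef = sum(lambdas[k] * eta_h ** (k - 1) for k in range(1, len(lambdas)))
        total += coef / denom * ml(eta_h * t ** nu, nu)
    return total

# d^{2 nu}F + 2 lam d^{nu}F + c2a2 F = 0 with F(0) = 1, F'(0) = 0.
lam, c2a2, t = 1.5, 1.0, 2.0
roots = np.roots([1.0, 2.0 * lam, c2a2])          # eta_1, eta_2 (real and distinct here)
lambdas = [c2a2, 2.0 * lam, 1.0]                  # lambda_0, lambda_1, lambda_2
print(solution_328(t, 0.8, lambdas, roots))       # fractional case, nu = 0.8

# For nu = 1 the formula must agree with a standard ODE solver.
ode = solve_ivp(lambda s, y: [y[1], -2.0 * lam * y[1] - c2a2 * y[0]],
                (0.0, t), [1.0, 0.0], rtol=1e-9, atol=1e-12)
print(solution_328(t, 1.0, lambdas, roots), ode.y[0, -1])   # ~ equal
```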
**Remark 3.4** (Integer derivatives).: From Theorem 3.3, by setting \(\nu=1\) in (3.20), we obtain the general solution to the integer order differential Cauchy problem. In particular, under the condition (3.19), we can write
\[F(t,x)=\sum_{h=1}^{N}e^{\eta_{h}t}\sum_{l=0}^{N-1}f_{l}(x)t^{l}\sum_{k=l+1}^{ N}\frac{\lambda_{k}\,\eta_{h}^{k-1-l}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}. \tag{3.29}\]
Note that in this case \(k_{l}=l+1\)\(\forall\)\(l\). Furthermore, for \(l\geq 1\), we can write
\[E_{1,l+1}(x)=\frac{1}{x^{l}}\Bigg{(}e^{x}-\sum_{i=0}^{l-1}\frac{x^{i}}{i!} \Bigg{)}. \tag{3.30}\]
In light of (3.30), formula (3.21) can be written as
\[F(t,x)=\sum_{h=1}^{N}\sum_{l=0}^{N-1}\Bigg{(}e^{\eta_{h}t}-\sum_{i=0}^{l-1}\frac{(\eta_{h}t)^{i}}{i!}\Bigg{)}\frac{f_{l}(x)t^{l}}{\eta_{h}^{l}}\sum_{k=k_{l}}^{N}\frac{\lambda_{k}\,\eta_{h}^{k-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}\] \[=\sum_{h=1}^{N}e^{\eta_{h}t}\sum_{l=0}^{N-1}f_{l}(x)t^{l}\sum_{k=l+1}^{N}\frac{\lambda_{k}\,\eta_{h}^{k-l-1}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}-\sum_{l=0}^{N-1}f_{l}(x)\sum_{i=0}^{l-1}\frac{t^{i+l}}{i!}\sum_{k=l+1}^{N}\lambda_{k}\sum_{h=1}^{N}\frac{\eta_{h}^{k-l-1+i}}{\prod_{\begin{subarray}{c}j=1\\ j\neq h\end{subarray}}^{N}(\eta_{h}-\eta_{j})}\]
and the last term is equal to \(0\) because the last sum is always null thanks to formula (2.6) (in fact, \(k-l-1+i\leq k-l-1+(l-1)\leq k-2\leq N-2\) ).
Finally, we observe that in the case of null initial conditions, except the first one, formula (3.29) coincides with the solution (3.28) (where \(\nu>0\)) with the exponential function replacing the Mittag-Leffler function.
**Remark 3.5**.: We point out that the result in Theorem 3.2 can be directly proved also from formula (3.21). In particular, the case with \(\nu/n=1/n\) follows by suitably applying the following representation of the Mittag-Leffler function, with \(h\in\mathbb{N}\),
\[E_{1/n,h}(x) =\sqrt{\frac{n}{(2\pi)^{n-1}}}\frac{1}{x^{n(h-1)}}\int_{0}^{ \infty}\cdots\int_{0}^{\infty}\Biggl{(}\prod_{j=1}^{n-1}e^{-y_{j}}y_{j}^{j/n-1 }\,\mathrm{d}y_{j}\Biggr{)}\Biggl{(}e^{nx\bigl{(}\prod_{j=1}^{n-1}y_{j}\bigr{)} ^{1/n}}\] \[\quad-\sum_{i=0}^{n(h-1)-1}\Bigl{(}nx\prod_{j=1}^{n-1}y_{j}^{1/n} \Bigr{)}^{i}\frac{1}{i!}\Biggr{)},\]
which in the case of \(n=2\), after the change of variable \(y_{1}=y^{2}\), can be written as
\[E_{1/2,h}(x)=\frac{2x^{2(1-h)}}{\sqrt{\pi}}\int_{0}^{\infty}e^{-y^{2}}\Biggl{(} e^{2xy}-\sum_{i=0}^{2h-3}\frac{(2xy)^{i}}{i!}\Biggr{)}\,\mathrm{d}y.\]
The above formulas can be derived in the same way as formula (2.9) of [1].
## 4 Application to random motions with finite velocity
Let \(\bigl{(}\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq 0},P\bigr{)}\) be a filtered probability space and \(d\in\mathbb{N}\). In the following we assume that every random object is suitably defined on the above probability space (i.e. if we introduce a stochastic process, this is adapted to the given filtration).
Let \(N\) be a homogeneous Poisson process with rate \(\lambda>0\) and consider the vectors \(v_{0},\ldots,v_{M}\in\mathbb{R}^{d}\). Let \(V\) be a stochastic process taking values in \(\{v_{0},\ldots,v_{M}\}\)\(a.s.\) and such that, for \(t\geq 0\),
\[p_{k}=P\{V(0)=v_{k}\},\ \ P\{V(t+\mathrm{d}t)=v_{k}\,|\,V(t)=v_{h},\,N(t,t+ \mathrm{d}t]=1\}=p_{hk},\ \ \ h,k=0,\ldots,M.\]
We say that \(V\) is the process describing the velocity of the associated random motion \(X\) defined as
\[X(t)=\int_{0}^{t}V(s)\,\mathrm{d}s=\sum_{i=0}^{N(t)-1}\bigl{(}T_{i+1}-T_{i} \bigr{)}V(T_{i})+\bigl{(}t-T_{N(t)}\bigr{)}V(T_{N(t)}),\ \ t\geq 0, \tag{4.1}\]
where \(T_{i}\) denotes the \(i\)-th arrival time of \(N\) and \(V(T_{i})\) denotes the random speed after the \(i\)-th event recorded by \(N\), therefore after the potential switch occurring at time \(T_{i}\). The stochastic process \(X\) describes the position of a particle moving in a \(d\)-dimensional (real) space with velocities \(v_{0},\ldots,v_{M}\) and which can change its current velocity only when the process \(N\) records a new event.
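For intuition, a sample path of (4.1) can be simulated directly by drawing exponential waiting times and switching velocities according to the transition probabilities \(p_{hk}\). The sketch below uses an arbitrary planar motion with four axial velocities and uniform switching rules; it is illustrative only.

```python
import numpy as np

def simulate_motion(t, velocities, p0, P, lam, rng):
    """One sample of X(t) in (4.1): the velocity switches at Poisson epochs
    according to the transition matrix P, starting from the initial law p0."""
    velocities = np.asarray(velocities, dtype=float)
    state = rng.choice(len(velocities), p=p0)
    pos, clock = np.zeros(velocities.shape[1]), 0.0
    while True:
        wait = rng.exponential(1.0 / lam)          # exponential inter-arrival time
        if clock + wait >= t:
            return pos + (t - clock) * velocities[state]
        pos += wait * velocities[state]
        clock += wait
        state = rng.choice(len(velocities), p=P[state])

# Planar motion with the four axial directions, uniform start, uniform switches.
rng = np.random.default_rng(0)
c = 1.0
vels = [[c, 0.0], [0.0, c], [-c, 0.0], [0.0, -c]]
p0 = [0.25] * 4
P = np.full((4, 4), 0.25)
samples = np.array([simulate_motion(1.0, vels, p0, P, lam=2.0, rng=rng) for _ in range(2000)])
print(samples.mean(axis=0))   # ~ (0, 0) by symmetry; every sample satisfies ||X(t)|| <= c*t
```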
### Initial conditions for the characteristic function
Denote with \(\gamma^{(t)}_{hk}:[0,1]\longrightarrow\mathbb{R}^{d}\) the segment between \(v_{h}t\) and \(v_{k}t\), that is \(\gamma^{(t)}_{hk}(\delta)=v_{h}t\delta+v_{k}t(1-\delta),\ \delta\in[0,1]\). Now, it is easy to see that the position \(X(t)\), conditionally on the occurrence of one Poisson event in \([0,t]\) and on the two different velocities taken at time \(0\) (say \(v_{h}\)) and at time \(t\) (say \(v_{k}\)), is uniformly distributed on the segment between the two velocities at time \(t\) (that is, on \(\gamma^{(t)}_{hk}\)). In formulas, for \(h\neq k=0,\ldots,M\),
\[P\{X(t)\in\mathrm{d}x\,|\,V(0)=v_{h},V(t)=v_{k},N[0,t]=1\}=\frac{\mathrm{d}x} {||v_{h}-v_{k}||t},\ \ \text{with}\ x\in\gamma^{(t)}_{hk}. \tag{4.2}\]
Then, for \(t\geq 0\) in a neighborhood of \(0\), at most one Poisson event can occur and therefore the motion \(X\) changes velocity at most once. Thus, we can write the Fourier transform of the distribution of \(X(t)\), for \(\alpha\in\mathbb{R}^{d}\) (with \(<\cdot,\cdot>\) denoting the dot product in \(\mathbb{R}^{d}\)),
\[\mathbb{E}e^{i<\alpha,X(t)>} =\big{(}1-\lambda t\big{)}\sum_{k=0}^{M}p_{k}\,e^{i<\alpha,v_{k} t>}+\lambda t\sum_{k=0}^{M}p_{k}\,p_{kk}e^{i<\alpha,v_{k}t>}\] \[\quad+\lambda t\sum_{\begin{subarray}{c}h,k=0\\ h\neq k\end{subarray}}^{M}p_{h}\,p_{hk}\int_{\gamma^{(t)}_{hk}}\frac{e^{i< \alpha,x>}}{||v_{h}-v_{k}||t}\,\mathrm{d}x\] \[=\big{(}1-\lambda t\big{)}\sum_{k=0}^{M}p_{k}e^{it<\alpha,v_{k}> }+\lambda t\sum_{h,k=0}^{M}p_{h}\,p_{hk}\int_{0}^{1}e^{it<\alpha,\,v_{h} \delta+v_{k}(1-\delta)>}\,\mathrm{d}\delta. \tag{4.3}\]
By means of (4.3) we easily derive the values of the derivatives of the Fourier transform of the distribution of the position \(X(t)\) in a neighborhood of \(0\), and therefore also at \(t=0\), which will be used as initial conditions for the Cauchy problem. We point out that function (4.3) is based on the first order approximation of the probability mass of the Poisson process in the neighborhood of \(t=0\). However, this approximation is sufficient to provide the characteristic function in the neighborhood of \(0\); in fact, the probability law of random motions with finite velocities is derived by requiring only this first-order knowledge, so no further expansion is needed to obtain the higher-order derivatives (at \(t=0\)).
In detail we obtain, with \(n\in\mathbb{N}_{0},\ \alpha\in\mathbb{R}^{d}\), for \(t\) sufficiently close to \(0\),
\[\frac{\partial^{n}}{\partial t^{n}}\mathbb{E}e^{i<\alpha,X(t)>}\] \[=\sum_{k=0}^{M}p_{k}e^{it<\alpha,v_{k}>}\Big{[}-n\lambda+(1- \lambda t)i<\alpha,v_{k}>\Big{]}\big{(}i<\alpha,v_{k}>\big{)}^{n-1}\] \[\quad+\lambda\sum_{h,k=0}^{M}p_{h}\,p_{hk}\int_{0}^{1}e^{it< \alpha,\,v_{h}\delta+v_{k}(1-\delta)>}\Big{[}n+it<\alpha,v_{h}\delta+v_{k}(1- \delta)>\Big{]}\big{(}i<\alpha,\,v_{h}\delta+v_{k}(1-\delta)>\big{)}^{n-1}\, \mathrm{d}\delta, \tag{4.4}\]
which at \(t=0\) simplifies to
\[\frac{\partial^{n}}{\partial t^{n}}\mathbb{E}e^{i<\alpha,X(t)>}\Big{|}_{t=0}= \sum_{k=0}^{M}p_{k}\Big{[}-n\lambda+i<\alpha,v_{k}>\Big{]}\big{(}i<\alpha,v_{k}> \big{)}^{n-1}\]
\[+n\lambda\sum_{h,k=0}^{M}p_{h}\,p_{hk}\int_{0}^{1}\bigl{(}i<\alpha,\,v_{h} \delta+v_{k}(1-\delta)>\bigr{)}^{n-1}\,\mathrm{d}\delta. \tag{4.5}\]
For derivatives of order \(0,1,2\) we can write, with \(\alpha\in\mathbb{R}^{d}\),
\[\mathbb{E}e^{i\,<\alpha,X(0)>}=1, \tag{4.6}\] \[\frac{\partial}{\partial t}\mathbb{E}e^{i\,<\alpha,X(t)>}\Big{|}_ {t=0}\ =i<\alpha,\sum_{k=0}^{M}p_{k}v_{k}>,\] (4.7) \[\frac{\partial^{2}}{\partial t^{2}}\mathbb{E}e^{i\,<\alpha,X(t)> }\Big{|}_{t=0}\] \[\qquad\qquad\qquad=-2\lambda i<\alpha,\sum_{k=0}^{M}p_{k}v_{k}>- \sum_{k=0}^{M}p_{k}<\alpha,v_{k}>^{2}+\lambda i<\alpha,\sum_{h,k=0}^{M}p_{h}p_ {hk}(v_{h}+v_{k})>. \tag{4.8}\]
Formula (4.6) is due to the fact that the particle performing the random motion is always assumed to be in the origin of \(\mathbb{R}^{d}\) at time \(t=0\). It is interesting to observe that the first derivative, given in (4.7), is equal to \(0\) for all \(\alpha\in\mathbb{R}^{d}\) if and only if \(\sum_{k=0}^{M}p_{k}v_{k}=0\).
**Example 4.1** (Orthogonal planar random motion).: We consider a random motion \((X,Y)\) governed by a homogeneous Poisson process \(N\) with rate \(\lambda>0\), moving in the plane with the following orthogonal velocities,
\[v_{k}=\biggl{(}c\cos\Bigl{(}\frac{k\pi}{2}\Bigr{)},c\sin\Bigl{(}\frac{k\pi}{2 }\Bigr{)}\biggr{)},\ \ c>0\ \text{with}\ k=0,1,2,3, \tag{4.9}\]
and such that from velocity \(v_{k}\) the particle can uniformly switch either to \(v_{k-1}\) or \(v_{k+1}\) (with the indices taken modulo \(4\)), that is \(P\{V(T_{n+1})=v_{k+1}\,|\,V(T_{n})=v_{k}\}=P\{V(T_{n+1})=v_{k-1}\,|\,V(T_{n})=v_{k}\}=1/2,\ k=0,1,2,3\). Therefore, the particle whose motion is described by \((X,Y)\) lies in the square \(S_{ct}=\{(x,y)\in\mathbb{R}^{2}\,:\,|x|+|y|\leq ct\}\) at time \(t>0\) and at each Poisson event takes a direction orthogonal to the current one (see Figure 1). We refer to [4] (and references therein) for further details on planar orthogonal random motions and to [5] for its three-dimensional version.
The probability distribution \(p(x,y)\,\mathrm{d}x\,\mathrm{d}y=P\{X(t)\in\mathrm{d}x,Y(t)\in\mathrm{d}y\},\ t\geq 0,\ (x,y)\in S_{ct}\), of the position of the motion \((X,Y)\) satisfies the fourth-order differential equation
\[\Bigl{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{\partial}{\partial t }+\lambda^{2}\Bigr{)}\biggl{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda \frac{\partial}{\partial t}-c^{2}\Bigl{(}\frac{\partial^{2}}{\partial x^{2}} +\frac{\partial^{2}}{\partial y^{2}}\Bigr{)}\biggr{)}p+c^{4}\frac{\partial^{4} p}{\partial x^{2}\partial y^{2}}=0, \tag{4.10}\]
and it is known that the current position \(\bigl{(}X(t),Y(t)\bigr{)}\) can be represented as a linear combination of two independent telegraph processes. In detail, for \(t\geq 0\),
\[\begin{cases}X(t)=U(t)+V(t),\\ Y(t)=U(t)-V(t),\end{cases} \tag{4.11}\]
where \(U=\{U(t)\}_{t\geq 0}\) and \(V=\{V(t)\}_{t\geq 0}\) are independent one-dimensional telegraph processes moving with velocities \(\pm c/2\) and with rate \(\lambda/2\) (note that a similar result holds in the case of a non-homogeneous Poisson process as well, see [4]).
The Fourier transform of equation (4.10) has the form
\[\Big{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{\partial}{\partial t}+ \lambda^{2}\Big{)}\Big{(}\frac{\partial^{2}}{\partial t^{2}}+2\lambda\frac{ \partial}{\partial t}+c^{2}(\alpha^{2}+\beta^{2})\Big{)}F+c^{4}\alpha^{2}\beta ^{2}F=0, \tag{4.12}\]
and by means of formulas (4.6), (4.7), (4.8) and (4.5) the initial conditions are
\[F(0,\alpha,\beta)=1,\ \ F_{t}(0,\alpha,\beta)=0,\ \ F_{tt}(0,\alpha,\beta)=- \frac{c^{2}}{2}\big{(}\alpha^{2}+\beta^{2}\big{)},\ \ F_{ttt}(0,\alpha,\beta)=\frac{\lambda c^{2}}{2}\big{(} \alpha^{2}+\beta^{2}\big{)}. \tag{4.13}\]
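The conditions in (4.13) can be double-checked numerically by evaluating the right-hand side of the general formula (4.5) for the orthogonal motion; the following sketch (with arbitrary illustrative values of \(\lambda\), \(c\), \(\alpha\), \(\beta\)) is given for illustration only.

```python
import numpy as np
from scipy.integrate import quad

lam, c = 2.0, 1.0
ab = np.array([0.7, -0.4])                         # (alpha, beta), arbitrary
vel = np.array([[c, 0.0], [0.0, c], [-c, 0.0], [0.0, -c]])
p = np.full(4, 0.25)
P = np.array([[0, .5, 0, .5], [.5, 0, .5, 0], [0, .5, 0, .5], [.5, 0, .5, 0]])

def dF(n):
    # right-hand side of (4.5): n-th time derivative of the characteristic function at t = 0
    a = vel @ ab                                   # <alpha, v_k>
    first = np.sum(p * (-n * lam + 1j * a) * (1j * a)**(n - 1))
    second = 0.0
    for h in range(4):
        for k in range(4):
            if P[h, k] == 0.0:
                continue
            f = lambda d: (1j * np.dot(ab, vel[h] * d + vel[k] * (1 - d)))**(n - 1)
            re, _ = quad(lambda d: f(d).real, 0.0, 1.0)
            im, _ = quad(lambda d: f(d).imag, 0.0, 1.0)
            second += p[h] * P[h, k] * (re + 1j * im)
    return first + n * lam * second

q = ab @ ab
print(dF(1), 0.0)                   # matches F_t(0, alpha, beta) = 0
print(dF(2), -c**2 * q / 2)         # matches F_tt(0, alpha, beta) in (4.13)
print(dF(3), lam * c**2 * q / 2)    # matches F_ttt(0, alpha, beta) in (4.13)
```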
Now, the fractional version of equation (4.12), written in the form (3.20), with \(\nu>0\), is
\[\frac{\partial^{4\nu}F}{\partial t^{4\nu}}+4\lambda\frac{\partial ^{3\nu}F}{\partial t^{3\nu}}+\Big{(}5\lambda^{2}+c^{2}(\alpha^{2}+\beta^{2}) \Big{)}\frac{\partial^{2\nu}F}{\partial t^{2\nu}} +2\lambda\Big{(}\lambda^{2}+c^{2}(\alpha^{2}+\beta^{2})\Big{)} \frac{\partial^{\nu}F}{\partial t^{\nu}}\] \[+c^{2}\Big{(}\lambda^{2}(\alpha^{2}+\beta^{2})+c^{2}\alpha^{2} \beta^{2}\Big{)}F =0. \tag{4.14}\]
Let \(A=\sqrt{\lambda^{2}-c^{2}(\alpha-\beta)^{2}}\) and \(B=\sqrt{\lambda^{2}-c^{2}(\alpha+\beta)^{2}}\). Note that \(c^{2}(\alpha^{2}+\beta^{2})=\lambda^{2}-\big{(}A^{2}+B^{2}\big{)}/2\). Then, the following equality holds
\[x^{4} +4\lambda x^{3}+\Big{(}5\lambda^{2}+c^{2}(\alpha^{2}+\beta^{2})\Big{)}x^{2}+2\lambda\Big{(}\lambda^{2}+c^{2}(\alpha^{2}+\beta^{2})\Big{)}x+c^{2}\Big{(}\lambda^{2}(\alpha^{2}+\beta^{2})+c^{2}\alpha^{2}\beta^{2}\Big{)}\] \[=\prod_{k=1}^{4}(x-\eta_{k}),\]
with
\[\eta_{1}=-\lambda-\frac{A+B}{2},\ \eta_{2}=-\lambda+\frac{A-B}{2},\ \eta_{3}=-\lambda-\frac{A-B}{2},\ \eta_{4}=-\lambda+\frac{A+B}{2}. \tag{4.15}\]
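The factorisation above can be verified numerically, as in the following sketch (arbitrary test values, given for illustration only).

```python
import numpy as np

lam, c, alpha, beta = 1.3, 0.7, 0.4, 0.9           # arbitrary test values
q = alpha**2 + beta**2
A = np.sqrt(complex(lam**2 - c**2 * (alpha - beta)**2))
B = np.sqrt(complex(lam**2 - c**2 * (alpha + beta)**2))
etas = [-lam - (A + B) / 2, -lam + (A - B) / 2, -lam - (A - B) / 2, -lam + (A + B) / 2]

def quartic(x):
    # left-hand side of the equality above
    return (x**4 + 4 * lam * x**3 + (5 * lam**2 + c**2 * q) * x**2
            + 2 * lam * (lam**2 + c**2 * q) * x
            + c**2 * (lam**2 * q + c**2 * alpha**2 * beta**2))

for x in (-2.0, 0.37, 1.5):
    print(x, np.prod([x - e for e in etas]), quartic(x))   # the two values coincide
```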
With this at hand, by means of Theorem 3.3 it is easy to calculate the solution to a fractional Cauchy problem associated with equation (4.14).
For instance, in the case of initial conditions \(F(0,\alpha,\beta)=1\) and \(\frac{\partial^{l}F}{\partial t^{l}}\big{|}_{t=0}=0\) for all the other required orders \(l\) (whose number depends on \(\nu\)), the solution reads
\[F_{\nu}(t,\alpha,\beta)\] \[\quad=\frac{1}{AB}\Bigg{[}-\Bigg{(}\lambda^{2}-\Big{(}\frac{A-B}{2}\Big{)}^{2}\Bigg{)}\frac{\lambda-\frac{A+B}{2}}{A+B}\,E_{\nu,1}(\eta_{1}t^{\nu})-\Bigg{(}\lambda^{2}-\Big{(}\frac{A+B}{2}\Big{)}^{2}\Bigg{)}\frac{\lambda+\frac{A-B}{2}}{A-B}\,E_{\nu,1}(\eta_{2}t^{\nu})\] \[\quad\quad+\Bigg{(}\lambda^{2}-\Big{(}\frac{A+B}{2}\Big{)}^{2}\Bigg{)}\frac{\lambda-\frac{A-B}{2}}{A-B}\,E_{\nu,1}(\eta_{3}t^{\nu})+\Bigg{(}\lambda^{2}-\Big{(}\frac{A-B}{2}\Big{)}^{2}\Bigg{)}\frac{\lambda+\frac{A+B}{2}}{A+B}\,E_{\nu,1}(\eta_{4}t^{\nu})\Bigg{]}\]
with \(\eta_{i}\) given in (4.15).
In the case of initial conditions given by (4.13) and \(3/4<\nu\leq 1\) (so all the conditions are required), the solution reads
\[F_{\nu}(t,\alpha,\beta) =\frac{1}{4}\Bigg{[}\Big{(}1-\frac{\lambda}{A}\Big{)}\Big{(}1- \frac{\lambda}{B}\Big{)}E_{\nu,1}(\eta_{1}t^{\nu})+\Big{(}1+\frac{\lambda}{A} \Big{)}\Big{(}1-\frac{\lambda}{B}\Big{)}E_{\nu,1}(\eta_{2}t^{\nu})\] \[\quad\quad+\Big{(}1-\frac{\lambda}{A}\Big{)}\Big{(}1+\frac{ \lambda}{B}\Big{)}E_{\nu,1}(\eta_{3}t^{\nu})+\Big{(}1+\frac{\lambda}{A}\Big{)} \Big{(}1+\frac{\lambda}{B}\Big{)}E_{\nu,1}(\eta_{4}t^{\nu})\Bigg{]}.\]
Note that for \(\nu=1\) this is the Fourier transform of the probability law of the orthogonal planar motion \((X,Y)\). This particular case can also be shown by considering the representation (4.11) in terms of independent one-dimensional telegraph processes and their well-known Fourier transform (see for instance [17] formula (2.16)).
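As an additional illustration of this last observation, the expression above at \(\nu=1\) can be compared numerically with the product of the characteristic functions of the two independent telegraph processes in (4.11); the sketch below assumes the standard form of the telegraph characteristic function and uses arbitrary values.

```python
import numpy as np

lam, c, t = 2.0, 1.0, 0.8
alpha, beta = 0.6, -0.3                             # arbitrary values

A = np.sqrt(complex(lam**2 - c**2 * (alpha - beta)**2))
B = np.sqrt(complex(lam**2 - c**2 * (alpha + beta)**2))
etas = [-lam - (A + B) / 2, -lam + (A - B) / 2, -lam - (A - B) / 2, -lam + (A + B) / 2]
coefs = [(1 - lam / A) * (1 - lam / B), (1 + lam / A) * (1 - lam / B),
         (1 - lam / A) * (1 + lam / B), (1 + lam / A) * (1 + lam / B)]
F1 = 0.25 * sum(co * np.exp(e * t) for co, e in zip(coefs, etas))   # formula above at nu = 1

def telegraph_cf(gamma, rate, speed, time):
    # standard characteristic function of a telegraph process (switching rate `rate`, speed `speed`)
    w = np.sqrt(complex(rate**2 - speed**2 * gamma**2))
    return np.exp(-rate * time) * (np.cosh(w * time) + rate * np.sinh(w * time) / w)

F2 = telegraph_cf(alpha + beta, lam / 2, c / 2, t) * telegraph_cf(alpha - beta, lam / 2, c / 2, t)
print(F1, F2)                                       # the two values coincide
```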
**Example 4.2** (Planar motion with three directions).: Let us consider a planar random motion \((X,Y)\) governed by a homogeneous Poisson process with rate \(\lambda>0\) and moving with velocities
\[v_{0}=(c,0),v_{1}=(-c/2,\sqrt{3}c/2),v_{2}=(-c/2,-\sqrt{3}c/2),\quad\text{ with }c>0. \tag{4.16}\]
Let us assume that the particle starts moving with a uniformly chosen velocity among the three possible choices in (4.16) and at each Poisson event it uniformly selects the next one (including also the current one). This kind of motion is sometimes called a _complete minimal planar random motion_; see [6, 13] for further details.
The support of the position of the stochastic dynamics at time \(t\geq 0\) is the triangle \(T_{ct}=\{(x,y)\in\mathbb{R}^{2}\,:\,-ct/2\leq x\leq ct,(x-ct)/\sqrt{3}\leq y \leq(ct-x)/\sqrt{3}\}\) (see Figure 2). It is known that the probability distribution \(p(x,y)\,\mathrm{d}x\,\mathrm{d}y=P\{X(t)\in\mathrm{d}x,Y(t)\in\mathrm{d}y\}\) of the position of the motion \((X,Y)\) satisfies the third-order differential equation
\[\Big{(}\frac{\partial}{\partial t}+\frac{3\lambda}{2}\Big{)}^{3}p-\frac{27\lambda^{3}}{8}p-\frac{27\lambda^{2}}{16}\frac{\partial p}{\partial t}-\frac{3}{4}c^{2}\Delta\Big{(}\frac{\partial}{\partial t}+\frac{3\lambda}{2}\Big{)}p-\frac{3}{4}c^{3}\frac{\partial^{3}p}{\partial x\partial y^{2}}+\frac{c^{3}}{4}\frac{\partial^{3}p}{\partial x^{3}}=0, \tag{4.17}\]
where \(\Delta\) denotes the Laplacian operator. Note that equation (4.17) can be derived by considering formula (1.9) of [13] and putting \(3/2\lambda\) instead of \(\lambda\) (this is sufficient by virtue of Remark 3.4 of [4]).
The initial conditions of the Cauchy problem related to equation (4.17) follow by suitably applying formulas (4.6), (4.7) and (4.8). In particular, for the Fourier transform of \(p\), \(F(t,\alpha,\beta)=\int_{\mathbb{R}^{2}}e^{i(\alpha x+\beta y)}p(t,x,y)\, \mathrm{d}x\,\mathrm{d}y\), we derive
\[F(0,\alpha,\beta)=1,\ \ F_{t}(0,\alpha,\beta)=0,\ \ F_{tt}(0,\alpha,\beta)=-\frac{c^{2}}{2} \big{(}\alpha^{2}+\beta^{2}\big{)}, \tag{4.18}\]
Obviously, the first two conditions imply that \(p(0,x,y)=\delta(x,y)\), with \(\delta\) denoting the Dirac delta function centered at the origin, and \(p_{t}(0,x,y)=0\ \forall\ x,y\). We refer to [13] for further details about this motion, such as the explicit form of \(p\) (see also [5, 7]).
Now, thanks to Theorem 3.2 we can easily give a probabilistic interpretation of the time-fractional version of order \(\nu=1/n,\ n\in\mathbb{N}\), of equation (4.17), subject to the same initial conditions. Note that for \(0<\nu\leq 1/3\) only the first condition is needed, for \(1/3<\nu\leq 2/3\) the first two conditions are required and for \(2/3<\nu\leq 1\) all three conditions are necessary. In detail, the fractional Cauchy problem after the Fourier transformation is given by
\[\frac{\partial^{3\nu}F_{\nu}}{\partial t^{3\nu}}+\frac{9\lambda}{2}\frac{\partial^{2\nu}F_{\nu}}{\partial t^{2\nu}}+\Bigg{(}\Big{(}\frac{3}{2}\Big{)}^{4}\lambda^{2}+\frac{3c^{2}(\alpha^{2}+\beta^{2})}{4}\Bigg{)}\frac{\partial^{\nu}F_{\nu}}{\partial t^{\nu}}+\Big{(}\frac{9\lambda c^{2}(\alpha^{2}+\beta^{2})}{8}+\frac{3ic^{3}\alpha\beta^{2}}{4}-\frac{ic^{3}\alpha^{3}}{4}\Big{)}F_{\nu}=0 \tag{4.19}\]
subject to the initial conditions in (4.18). By means of Theorem 3.2 we have that the Fourier transform \(F_{1/n}\) satisfying (4.19) can be expressed as \(F_{1/n}(t,x)=\mathbb{E}\,F\Bigg{(}\prod_{j=1}^{n-1}G_{j}^{(n)}(t),\,x\Bigg{)}\), where \(F\) denotes the solution to the Fourier-transformed problem of equation (4.17). Thus, the time-fractional version of (4.17), with \(\nu=1/n\) for natural \(n\), describes the probability law of a stochastic process \((X_{\nu},Y_{\nu})\) that is a time-changed planar motion with velocities (4.16),
\[\big{(}X_{\nu}(t),Y_{\nu}(t)\big{)}\stackrel{{ d}}{{=}}\Bigg{(}X \Big{(}\prod_{j=1}^{n-1}G_{j}^{(n)}(t)\Big{)},Y\Big{(}\prod_{j=1}^{n-1}G_{j}^ {(n)}(t)\Big{)}\Bigg{)},\quad t\geq 0.\]
**Declarations**
**Ethical Approval.** This declaration is not applicable.
**Competing interests.** The authors have no competing interests to declare.
**Authors' contributions.** Both authors equally contributed in the preparation and the writing of the paper.
**Funding.** The authors received no funding.
**Availability of data and materials.** This declaration is not applicable.
|
2306.17817 | Act3D: 3D Feature Field Transformers for Multi-Task Robotic Manipulation | 3D perceptual representations are well suited for robot manipulation as they
easily encode occlusions and simplify spatial reasoning. Many manipulation
tasks require high spatial precision in end-effector pose prediction, which
typically demands high-resolution 3D feature grids that are computationally
expensive to process. As a result, most manipulation policies operate directly
in 2D, foregoing 3D inductive biases. In this paper, we introduce Act3D, a
manipulation policy transformer that represents the robot's workspace using a
3D feature field with adaptive resolutions dependent on the task at hand. The
model lifts 2D pre-trained features to 3D using sensed depth, and attends to
them to compute features for sampled 3D points. It samples 3D point grids in a
coarse to fine manner, featurizes them using relative-position attention, and
selects where to focus the next round of point sampling. In this way, it
efficiently computes 3D action maps of high spatial resolution. Act3D sets a
new state-of-the-art in RL-Bench, an established manipulation benchmark, where
it achieves 10% absolute improvement over the previous SOTA 2D multi-view
policy on 74 RLBench tasks and 22% absolute improvement with 3x less compute
over the previous SOTA 3D policy. We quantify the importance of relative
spatial attention, large-scale vision-language pre-trained 2D backbones, and
weight tying across coarse-to-fine attentions in ablative experiments. Code and
videos are available on our project website: https://act3d.github.io/. | Theophile Gervet, Zhou Xian, Nikolaos Gkanatsios, Katerina Fragkiadaki | 2023-06-30T17:34:06Z | http://arxiv.org/abs/2306.17817v2 |
# Act3D: Infinite Resolution Action Detection Transformer for Robotic Manipulation
###### Abstract
3D perceptual representations are well suited for robot manipulation as they easily encode occlusions and simplify spatial reasoning. Many manipulation tasks require high spatial precision in end-effector pose prediction, typically demanding high-resolution 3D perceptual grids that are computationally expensive to process. As a result, most manipulation policies operate directly in 2D, foregoing 3D inductive biases. In this paper, we propose Act3D, a manipulation policy Transformer that casts 6-DoF keypose prediction as 3D detection with adaptive spatial computation. It takes as input 3D feature clouds unprojected from one or more camera views, iteratively samples 3D point grids in free space in a coarse-to-fine manner, featurizes them using relative spatial attention to the physical feature cloud, and selects the best feature point for end-effector pose prediction. Act3D sets a new state-of-the-art in RLbench, an established manipulation benchmark. Our model achieves 10% absolute improvement over the previous SOTA 2D multi-view policy on 74 RLbench tasks and 22% absolute improvement with 3x less compute over the previous SOTA 3D policy. In thorough ablations, we show the importance of relative spatial attention, large-scale vision-language pre-trained 2D backbones, and weight tying across coarse-to-fine attentions. Code and videos are available at our project site: [https://act3d.github.io/](https://act3d.github.io/).
Keywords: Learning from Demonstrations, Manipulation, Transformers
## 1 Introduction
Solutions to many robotic manipulation tasks can be represented as a sequence of 6-DoF robot end-effector poses and gripper actions. Many recent methods train manipulation policies to predict such poses directly from 2D images using supervision from demonstrations [1, 2, 3, 4, 5, 6]. However, these methods are typically sample inefficient, often requiring thousands of trajectories, and cannot easily generalize across viewpoints and environments. Transporter networks [7] recently reformulated 4-DoF keypose prediction as pixel classification in a top-down scene image, inspired by object detection in computer vision [8, 9, 10]. This design choice of detecting end-effector poses in the scene using local features instead of regressing them from aggregated scene features, which we will call _action detection_, dramatically increased sample efficiency. However, Transporter Networks are limited to top-down 2D worlds and 4-DoF end-effector poses.
A core challenge in detecting actions for general 6-DoF manipulation in a 3D space is that end-effector 3D positions not only reside on points attached to the physical scene, but could also be in the free space. For example, end-effector 6 DoF poses relevant for a task, which we will call keyposes [11, 12], can be pre-grasp poses, back-off poses for articulated object interactions, or
transition poses between different parts of a task. While it is straightforward to featurize 2D pixels or 3D physical points -- we can featurize pixels with 2D backbones and back-project to 3D or use a 3D point cloud transformer [13] -- it is less clear how to efficiently featurize points in free space to detect one as the end-effector position. 3D voxelization at high resolution is computationally demanding [14] since we cannot use sparse 3D convolutions [15; 16]: we do not know ahead of time which voxels will remain empty or would need to be featurized because they would contain the next end-effector pose. Recent work of PerAct [1] featurizes _all_ 3D voxels (occupied or not) using the latent set bottlenecked self-attention operation of Perceiver [17], which is computationally expensive. Other methods work around this issue by avoiding featurizing points in free space and instead detecting a contact point and then regressing an offset from this contact point towards the end-effector predicted position [2; 18; 19]. This is a reasonable design choice but it does not fully exploit the action detection inductive bias.
In this paper, we propose Act3D, a Transformer policy architecture for language-conditioned multi-task robot manipulation that casts 6-DoF end-effector keypose prediction as 3D detection with adaptive spatial computation. Act3D learns 3D perceptual representations of arbitrary spatial resolution via recurrent coarse-to-fine 3D point grid sampling and featurization. It first computes a scene-level 3D feature cloud by lifting 2D pre-trained features from one or more views using sensed depth. At each iteration, the model then samples 3D point grids in the whole workspace and featurizes them using relative spatial cross-attention [20] to the 3D feature cloud. The featurized 3D points are classified with a detection Transformer head [10; 21] to predict the grid center for the next iteration. All iterations share attention weights. Act3D detects the 3D point that corresponds to the end-effector's 3D position, and then regresses the 3D rotation and opening of the end-effector from the contextualized parametric query. At inference time, we can trade-off compute for higher spatial precision and task performance by sampling more points in free space than the model ever saw at training time.
We test Act3D in RLBench [22], an established benchmark for learning diverse robot manipulation policies from demonstrations. We set a new state-of-the-art in the benchmark in both single-task and multi-task settings. Specifically, we achieve a 10% absolute improvement over prior SOTA on the single-task setting introduced by HiveFormer [2] with 74 tasks and a 22% absolute improvement over prior SOTA in the multi-task setting introduced by PerAct [1] with 18 tasks and 249 variations. We also validate our approach on a Franka Panda with a multi-task agent trained from scratch on 8 real-world tasks with a total of just 100 demonstrations (see Figure 2). In thorough ablations,
Figure 1: **Act3D architecture.** Act3D is a language-conditioned end-effector 6-DoF keypose predictor that learns 3D perceptual representations of arbitrary spatial resolution via recurrent coarse-to-fine 3D point sampling and featurization. Act3D featurizes multi-view RGB images with a pre-trained 2D backbone and lifts them in 3D using depth to obtain a multi-scale 3D scene feature cloud. It then iteratively predicts 3D foci of attention in the free space, samples 3D point grids in their vicinity, and featurizes the sampled 3D points using relative cross-attention to the physical scene feature cloud, language tokens, and proprioception. Act3D detects the 3D point that corresponds to the next best end-effector position using a detection Transformer head, and regresses the rotation, end-effector opening, and planner collision avoidance from the decoder’s parametric query.
we show the importance of relative spatial attention, large-scale vision-language pre-trained 2D backbones, and weight tying across coarse-to-fine attentions.
## 2 Related Work
**Learning robot manipulation from demonstrations.** Many recent works train multi-task manipulation policies that leverage Transformer architectures [1; 2; 3; 5; 23; 24] to predict robot actions from video input and language instructions. End-to-end image-to-action policy models, such as RT-1 [5], GATO [24], BC-Z [25], and InstructRL [3], directly predict 6-DoF end-effector poses from 2D video and language inputs. They require many thousands of demonstrations to learn spatial reasoning and generalize to new scene arrangements and environments. Transporter networks [7] and their subsequent variants [26; 27; 28] formulate 4-DoF end-effector pose prediction as pixel classification in 2D overhead images. Their _action detection_ inductive bias -- parametrizing the action implicitly [29] by detecting end-effector poses in the scene using local features with translation and rotation equivariances [30] -- dramatically increased sample efficiency over previous methods that regress end-effector poses by aggregating global scene features. However, they are limited to top-down 2D planar worlds with simple pick-and-place primitives. 3D policy models of C2F-ARM [4] and PerAct [1] voxelize the robot's workspace and are trained to detect the 3D voxel that contains the next end-effector keypose. Spatially precise 3D pose prediction requires the 3D voxel grid to be high resolution, which comes at a high computational cost. C2F-ARM [4] uses a coarse-to-fine voxelization in convolutional grids to handle computational complexity, while PerAct [1] uses Perceiver's latent bottleneck [17] to avoid voxel-to-voxel self-attention operations. Act3D avoids 3D voxelization altogether and instead represents the scene as a 3D feature cloud. It samples 3D points in the empty workspace and featurizes them using cross-attentions to the physical 3D point features.
**Feature pre-training for robot manipulation.** Many 2D policy architectures bootstrap learning from demonstrations from frozen or finetuned 2D image backbones [31; 32; 25; 33] to increase experience data sample efficiency. Pretrained vision-language backbones can enable generalization to new instructions, objects, and scenes [34; 27]. In contrast, SOTA 3D policy models are typically trained from scratch from colored point cloud input [1; 4; 35]. Act3D uses CLIP pre-trained 2D backbones [36] to featurize 2D image views and lifts the 2D features in 3D using depth information [37; 38]. We show that 2D feature pretraining gives a considerable performance boost over training from scratch.
**Relative attention layers.** Relative attentions have shown improved performance in many 2D visual understanding tasks and language tasks [39; 40]. Rotary embeddings [41] implement relative attention efficiently by casting it as an inner-product in an extended position feature space. In 3D, relative attention is imperative as the coordinate system is arbitrary. 3D relative attentions have been used before in 3D Transformer architectures for object detection and point labelling [42; 43]. We show in Section 4 that relative attentions significantly boost performance of our model.
## 3 Act3D
The architecture of Act3D is shown in Figure 1. It is a Transformer policy that, at each timestep \(t\), predicts a 6-DoF end-effector pose from one or more RGB-D images, a language instruction, and proprioception information regarding the robot's current end-effector pose. The key idea is to _detect_ 6 DoF end-effector poses in the robot's workspace by learning 3D perceptual representations of free space with arbitrary spatial resolution, via recurrent coarse-to-fine 3D point grid sampling and featurization. 3D point candidates (which we will call ghost points) are sampled, featurized and scored iteratively through relative cross-attention [20] to the physical 3D scene feature cloud, lifted from 2D feature maps of the input image views.
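As an illustration of the 2D-to-3D lifting step mentioned above, a minimal sketch under a standard pinhole camera model is given below (function names and toy inputs are assumptions of the sketch, not taken from the released implementation):

```python
import numpy as np

def lift_features_to_3d(feat, depth, K, cam_to_world):
    """Lift a 2D feature map to a 3D feature cloud using sensed depth.

    feat         : (H, W, C) feature map produced by the 2D backbone
    depth        : (H, W) depth image aligned with the feature map
    K            : (3, 3) camera intrinsics
    cam_to_world : (4, 4) camera-to-world extrinsics
    Returns (H*W, 3) point positions and (H*W, C) point features.
    """
    H, W, C = feat.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T                  # camera-frame rays
    pts_cam = rays * depth.reshape(-1, 1)            # scale by sensed depth
    pts_hom = np.concatenate([pts_cam, np.ones((H * W, 1))], axis=1)
    pts_world = (pts_hom @ cam_to_world.T)[:, :3]    # express in the world frame
    return pts_world, feat.reshape(-1, C)

# toy usage with random inputs
K = np.array([[128.0, 0.0, 64.0], [0.0, 128.0, 64.0], [0.0, 0.0, 1.0]])
feat = np.random.rand(128, 128, 60)
depth = 0.5 + np.random.rand(128, 128)
points, tokens = lift_features_to_3d(feat, depth, K, np.eye(4))
print(points.shape, tokens.shape)                    # (16384, 3) (16384, 60)
```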
Following prior work [12; 1; 2; 3], instead of predicting an end-effector pose at each timestep, we extract a set of _keyposes_ that capture bottleneck end-effector poses in a demonstration. A pose is a keypose if (1) the end-effector changes state (something is grasped or released) or (2) velocities
approach near zero (a common occurrence when entering pre-grasp poses or entering a new phase of a task). The prediction problem then boils down to predicting the next (best) keypose action given the current observation. At inference time, Act3D iteratively predicts the next best keypose and reaches it with a motion planner, following previous works [1, 2]. We assume access to a dataset of \(n\) demonstration trajectories. Each demonstration is a sequence of observations \(O=\{o_{1},o_{2},..,o_{t}\}\) paired with continuous actions \(A=\{a_{1},a_{2},..,a_{t}\}\) and, optionally, a language instruction \(l\) that describes the task. Each observation \(o_{t}\) consists of RGB-D images from one or more camera views; more details are in Appendix 6.2. An action \(a_{t}\) consists of the 3D position and 3D orientation (represented as a quaternion) of the robot's end-effector, its binary open or closed state, and whether the motion planner needs to avoid collisions to reach the pose: \[a=\{a_{\mathrm{pos}}\in\mathbb{R}^{3},a_{\mathrm{rot}}\in\mathbb{H},a_{\mathrm{open}}\in\{0,1\},a_{\mathrm{col}}\in\{0,1\}\}\] Next, we go into details on the modules of Act3D. **Visual and language encoder.** Our visual encoder maps multi-view RGB-D images into a multi-scale 3D scene feature cloud. We use a large-scale pre-trained 2D feature extractor followed by a feature pyramid network [44] to extract multi-scale visual tokens for each camera view. We then lift these 2D visual tokens to 3D by interpolating their depth values. The language encoder featurizes instructions with a large-scale pre-trained language feature extractor. We use the CLIP ResNet50 [36] visual encoder and the corresponding language encoder to exploit their common vision-language feature space for interpreting instructions and referential grounding. Our pre-trained visual and language encoders are frozen, not finetuned, during training of Act3D. **Iterative ghost point sampling and featurization.** To enable precise and computationally tractable keypose detection, we sample, featurize and select ghost points iteratively, first coarsely across the entire workspace, then finely in the vicinity of the ghost point selected as the focus of attention in the previous iteration. The coarsest ghost points attend to a global coarse scene feature cloud, whereas finer ghost points attend to a local fine scene feature cloud. **Relative 3D cross-attentions.** We featurize each of the 3D ghost points and a parametric query (used to select via inner-product one of the ghost points as the next best end-effector position in the decoder) independently through cross-attentions to the multi-scale 3D scene feature cloud, language tokens, and proprioception. Featurizing ghost points independently, without self-attentions to one another, enables sampling more ghost points at inference time to improve performance, as we show in Section 4. Our cross-attentions use relative 3D position information and are implemented efficiently with rotary positional embeddings [20].
Given a point \(\mathbf{p}=(x,y,z)\in\mathbb{R}^{3}\) and its feature \(\mathbf{x}\in\mathbb{R}^{d}\), the rotary position encoding function \(\mathbf{PE}\) is defined as: \[\mathbf{PE}(\mathbf{p},\mathbf{x})=\mathbf{M}(\mathbf{p})\mathbf{x}=\begin{bmatrix}\begin{smallmatrix}\cos x\theta_{k}&-\sin x\theta_{k}&0&0&0&0\\ \sin x\theta_{k}&\cos x\theta_{k}&0&0&0&0\\ 0&0&\cos y\theta_{k}&-\sin y\theta_{k}&0&0\\ 0&0&\sin y\theta_{k}&\cos y\theta_{k}&0&0\\ 0&0&0&0&\cos z\theta_{k}&-\sin z\theta_{k}\\ 0&0&0&0&\sin z\theta_{k}&\cos z\theta_{k}\\ \end{smallmatrix}\end{bmatrix}\mathbf{x}\] where \(\theta_{k}=\frac{1}{100000^{(k-1)/d}}\). The dot product of two positionally encoded features is then \[\mathbf{PE}(\mathbf{p}_{i},\mathbf{x}_{i})^{T}\mathbf{PE}(\mathbf{p}_{j},\mathbf{x}_{j})=\mathbf{x}_{i}^{T}\mathbf{M}(\mathbf{p}_{i})^{T}\mathbf{M}(\mathbf{p}_{j})\mathbf{x}_{j}=\mathbf{x}_{i}^{T}\mathbf{M}(\mathbf{p}_{j}-\mathbf{p}_{i})\mathbf{x}_{j}\] which depends only on the relative positions of points \(\mathbf{p}_{i}\) and \(\mathbf{p}_{j}\) (a small numerical sketch of this property is included after the training paragraph below). **Detection Transformer decoder.** Once ghost points and the parametric query are featurized, the detection transformer head scores ghost point tokens via inner product with the parametric query to select one as the next best end-effector position \(a_{\mathrm{pos}}\). We then regress the end-effector orientation \(a_{\mathrm{rot}}\) and opening \(a_{\mathrm{open}}\), as well as whether the motion planner needs to avoid collisions to reach the pose \(a_{\mathrm{col}}\), from the parametric query with a simple multi-layer perceptron (MLP). **Training.** Act3D is trained supervised from input-action tuples from a dataset of manipulation demonstrations. These tuples are composed of RGB-D observations, language goals, and keypose
actions \(\{(o_{1},l_{1},k_{1}),(o_{2},l_{2},k_{2}),...\}\). During training, we randomly sample a tuple and supervise Act3D to predict the keypose action \(k\) given the observation and goal \((o,l)\). We supervise position prediction \(a_{\text{pos}}\) at every round of coarse-to-fine with a softmax cross-entropy loss over ghost points, rotation prediction \(a_{\text{rot}}\) with a MSE loss on the quaternion prediction, and binary end-effector opening \(a_{\text{open}}\) and whether the planner needs to avoid collisions \(a_{\text{col}}\) with binary cross-entropy losses.
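A small numerical sketch of the 3D rotary position encoding defined above is given here for illustration; the feature dimension and the exact indexing of the frequency schedule are assumptions of the sketch, and any schedule preserves the relative-position property.

```python
import numpy as np

def rotary_matrix(p, d):
    """Block-diagonal M(p) for a 3D point p and feature dimension d (a multiple of 6)."""
    M = np.zeros((d, d))
    for k in range(d // 6):
        theta = 1.0 / 100000 ** (k / d)              # assumed indexing of the frequency schedule
        for axis, coord in enumerate(p):             # one 2x2 rotation block per coordinate x, y, z
            a = coord * theta
            i = 6 * k + 2 * axis
            M[i:i + 2, i:i + 2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    return M

d = 12
rng = np.random.default_rng(0)
xi, xj = rng.standard_normal(d), rng.standard_normal(d)
pi, pj = np.array([0.1, -0.3, 0.7]), np.array([0.4, 0.2, -0.1])
shift = np.array([1.0, -2.0, 0.5])                   # arbitrary common translation

lhs = (rotary_matrix(pi, d) @ xi) @ (rotary_matrix(pj, d) @ xj)
rhs = (rotary_matrix(pi + shift, d) @ xi) @ (rotary_matrix(pj + shift, d) @ xj)
print(lhs, rhs)       # equal: the attention score depends only on p_j - p_i
```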
**Implementation details.** We extract two feature maps per \(256\)x\(256\) input image view: \(32\)x\(32\) coarse visual tokens and \(64\)x\(64\) fine visual tokens. We use three ghost point sampling stages: first across the entire workspace (roughly \(1\) meter cube), then in a \(16\) centimeter diameter ball, and finally in a \(4\) centimeter diameter ball. The coarsest ghost points attend to a global coarse scene feature cloud (\(32\)x\(32\)x\(n_{\text{cam}}\) coarse visual tokens) whereas finer ghost points attend to a local fine scene feature cloud (the closest \(32\)x\(32\)x\(n_{\text{cam}}\) out of the total \(64\)x\(64\)x\(n_{\text{cam}}\) fine visual tokens). During training, we sample \(1000\) ghost points in total split equally across the three stages. At inference time, we can trade off extra prediction precision and task performance for additional compute by sampling more ghost points than the model ever saw at training time (\(10,000\) in our experiments). We'll show in ablations in Section 4 that our framework is robust to these hyper-parameters but tying weights across sampling stages and relative 3D cross-attention are both crucial for generalization. We use \(2\) layers of cross-attention and an embedding size \(60\) for single-task experiments and \(120\) for multi-task experiments. Training samples are augmented with random crops of RGB-D images and \(\pm 45.0\) yaw rotation perturbations (only in the real world as this degrades performance in simulation as we'll show in Section 4). We use a batch size 16 on a Nvidia 32GB V100 GPU for 200k steps (one day) for single-task experiments, and a batch size 48 on 8 Nvidia 32GB V100 GPUs for 600K steps (5 days) for language-conditioned multi-task experiments.
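The coarse-to-fine sampling procedure can be outlined as follows (a schematic sketch, not the released implementation; `featurize_and_score` is a placeholder for the relative cross-attention and detection head, and the first stage is simplified to a ball covering the workspace):

```python
import numpy as np

def sample_ball(center, diameter, n, rng):
    # uniform samples inside a ball of the given diameter around `center`
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = (diameter / 2.0) * rng.uniform(size=(n, 1)) ** (1.0 / 3.0)
    return center + dirs * radii

def coarse_to_fine_position(featurize_and_score, workspace_center, rng,
                            diameters=(1.0, 0.16, 0.04), n_points=(333, 333, 334)):
    """Iteratively refine the predicted end-effector position over three sampling stages."""
    center = np.asarray(workspace_center, dtype=float)
    for diameter, n in zip(diameters, n_points):
        ghost = sample_ball(center, diameter, n, rng)
        scores = featurize_and_score(ghost)          # placeholder for cross-attention + decoder
        center = ghost[np.argmax(scores)]            # focus of attention for the next, finer stage
    return center

# toy usage: score ghost points by proximity to a hidden target position
rng = np.random.default_rng(0)
target = np.array([0.21, -0.05, 0.33])
score_fn = lambda pts: -np.linalg.norm(pts - target, axis=1)
print(coarse_to_fine_position(score_fn, np.zeros(3), rng))   # close to `target`
```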
## 4 Experiments
We test Act3D in learning from demonstrations single-task and multi-task manipulation policies in simulation and the real world. In the multi-task setting, task and goal conditioning are given as input through language instructions. We conduct our simulated experiments in RLBench [22], an established simulation benchmark for learning manipulation policies, for the sake of reproducibility and benchmarking. Our experiments aim to answer the following questions: **1.** How does Act3D compare against SOTA 2D multiview and 3D manipulation policies in single-task and multi-task settings? **2.** How does the test performance change with varying number of training demonstrations?
Figure 2: **Tasks.** We conduct experiments on 92 simulated tasks in RLBench [22] (only 10 shown), and 8 real-world tasks (only 5 shown). Please see the supplementary video for video results of our model in simulation and in the real world.
**3.** How does Act3D generalize across camera viewpoints in comparison to existing 2D multiview policies? **4.** How do design choices such as relative 3D attention, pre-trained 2D backbones, weighted attention layers, and the number of coarse-to-fine sampling stages impact performance?
### Evaluation in simulation
**Datasets.** We test Act3D in RLbench in two settings to ensure a clear comparison with prior work: a single-task setting with 74 tasks proposed by HiveFormer [2] and a multi-task multi-variation setting with 18 tasks and 249 variations proposed by PerAct [1]; more details are in Appendix 6.3.
**Baselines.** We compare Act3D with the following state-of-the-art manipulation policy learning methods: **1.** InstructRL [3], a 2D policy that directly predicts 6 DoF poses from image and language conditioning with a pre-trained vision-and-language backbone. **2.** PerAct [1], a 3D policy that voxelizes the workspace and detects the next best voxel action through global self-attention. **3.** HiveFormer [2] and Auto-\(\lambda\)[18], hybrid methods that detect a contact point within an image input, then regress an offset from this contact point. We report numbers from the papers when available.
**Evaluation metric.** We evaluate policies by task completion success rate, the proportion of execution trajectories that lead to goal conditions specified in language instructions.
**Single-task manipulation results.** We consider 74 tasks grouped into 9 categories, as proposed by HiveFormer [2]. Each method is trained with 100 demonstrations and evaluated on 500 unseen episodes. We show single-task quantitative results of our model and baselines in Figure 3. Act3D **reaches 83% success rate, an absolute improvement of 10% over InstructRL [3], prior SOTA in this setting**, and consistently outperforms it across all 9 categories of tasks. With only 10 demonstrations per task, Act3D is competitive with prior SOTA using 100 demonstrations per task.
**Multi-task manipulation results.** We consider 18 tasks with 249 variations, as proposed by PerAct [1]. Each task includes 2-60 variations, which test generalization to goal configurations that involve novel object colors, shapes, sizes, and categories. This is a more challenging setup than
Figure 4: **Multi-task performance.** On 18 RLBench tasks with 249 variations, Act3D reaches 65% success rate, an absolute improvement of 22% over PerAct [1], prior SOTA in this setting.
Figure 3: **Single-task performance.** On 74 RLBench tasks across 9 categories, Act3D reaches 83% success rate, an absolute improvement of 10% over InstructRL [3], prior SOTA in this setting.
before, since the previous setting only tested generalization to novel arrangements of the same objects. Each method is trained with 100 demonstrations per task split across variations, and evaluated on 500 unseen episodes per task. We show multi-task quantitative results of our model and PerAct in Figure 4. Act3D reaches 65% success rate, an absolute improvement of 22% over PerAct, prior SOTA in this setting, consistently outperforming it across most tasks. **With only 10 demonstrations per task, Act3D outperforms PerAct using 100 demonstrations per task.** Note that Act3D also uses less than a third of PerAct's training computation budget: PerAct was trained for 16 days on 8 Nvidia V100 GPUs while we train for 5 days on the same hardware.
### Evaluation in real-world
In our real-world setup, we conduct experiments with a Franka Emika Panda robot and a single Azure Kinect RGB-D sensor; more details are in Appendix 6.1. We designed 8 tasks (Figure 2) involving interactions with multiple types of objects, spanning liquid, articulated objects, and deformable objects. For each task, we collected 10 to 15 human demonstrations and trained a language-conditioned multi-task model on all data. We report the success rate on 10 episodes per task in Table 1. Act3D can capture semantic knowledge in demonstration well and performs reasonably well on all tasks, even with a single camera input. One major failure case comes from noisy depth sensing: when the depth image is not accurate, the selected point results in imprecise action prediction. Leveraging multi-view input for error correction could improve this, and we leave this for future work. For more qualitative videos of the robot executing the tasks, see our project site.
### Ablations
We ablate the impact of our design choices in Table 2. We perform most ablations in the single-task setting on 5 tasks: pick cup, put knife on chopping board, put money in safe, slide block to target, take umbrella out of stand. We ablate the choice of pre-trained 2D backbone in the multi-task setting with all 18 tasks.
**Generalization across camera viewpoints:** We vary camera viewpoints at test time for both Act3D and HiveFormer [2]. The success rate drops to 20.4% for HiveFormer, a relative 77% drop, while Act3D achieves 74.2% success rate, a 24% relative drop. This shows detecting actions in 3D makes Act3D more robust to camera viewpoint changes than multiview 2D methods that regress offsets.
**Weight-tying and coarse-to-fine sampling:** All 3 stages of coarse-to-fine sampling are necessary: a model with only 2 stages of sampling and regressing an offset from the position selected at the second stage suffers a 4.5% performance drop. Tying weights across stages and relative 3D positional embeddings are both crucial; we observed severe overfitting without, reflected in respective 17.5% and 42.7% performance drops. Fine ghost point sampling stages should attend to local fine visual features with precise positions: all stages attending to global coarse features leads to a 8.3% performance drop. Act3D can effectively trade off inference computation for performance: sampling 10,000 ghost points, instead of the 1,000 the model was trained with, boosts performance by 4.9%.
**Pre-training 2D features:** We investigate the effect of the pre-trained 2D backbone in the multi-task setting where language instructions are most needed. A ResNet50 [36] backbone pre-trained with CLIP improves success rate by 8.7% over a ResNet50 backbone pre-trained on ImageNet, and by 16.9% over using raw RGB as the visual token features.
**Augmentations:** Random crops of RGB-D images boost success rate by 6.5%, but yaw rotation perturbations drop it by 11.9%. This is in line with PerAct [1] results in RLBench.
**Hyperparameter sensitivity:** Act3D is robust to hyperparameters. Doubling the diameter of ghost point sampling balls from (16 cm, 4 cm) to (32 cm, 8 cm) drops success rate by 1.5% and halving it
\begin{table}
\begin{tabular}{l c c} \hline \hline Task & \# Train & Success \\ \hline reach target & 10 & 10/10 \\ duck in oven & 15 & 6/10 \\ wipe coffee & 15 & 7/10 \\ fruits in bowl & 10 & 8/10 \\ stack cups & 15 & 6/10 \\ transfer beans & 15 & 5/10 \\ press handsan & 10 & 10/10 \\ unscrew cap & 10 & 8/10 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Real-world tasks.
to (8 cm, 2 cm) by 6.9%. Halving the total number of ghost points sampled from 1,000 to 500 drops success rate by 2.3% whereas doubling it to 2,000 increases success rate by 0.3%. We use 1,000 ghost points in our experiments to allow training with a single GPU per task.
### Limitations and future work
Our framework currently has the following limitations: **1.** Act3D sometimes fails in very high-precision tasks, like screwing and insertions, requiring temporally fine-grain closed-loop control. **2.** Act3D doesn't handle manipulation of articulated object well, such as opening/closing doors, fridges, and ovens, which require a more precise trajectory than the one supplied by a motion planner that connects keyposes with collision-free straight lines. Learning-based trajectory prediction [45; 46] would help. **3.** Currently, for long horizon tasks our policy would need to predict all keyposes one by one. A hierarchical framework that would predict language subgoals for subtasks [47; 48; 49] and feed those to our action predictor would allow better re-usability of skills across tasks. All keypose prediction methods share the listed limitations. We leave these for future work.
## 5 Conclusion
We presented Act3D, a language-conditioned Transformer architecture that learns manipulation policies from demonstrations. From one or more posed RGB-D images and language instructions, it predicts 6-DoF robot end-effector keyposes by iteratively selecting and featurizing 3D point grids in the robot's workspace. Act3D sets a new state-of-the-art in RLBench, an established robot manipulation benchmark, and solves diverse manipulation tasks in the real world from a single RGB-D camera view and a handful of demonstrations. In thorough ablations, we showed the importance of relative 3D attentions, 2D feature pre-training, and weight tying during coarse-to-fine iterations. Extending Act3D to handle more precise tasks, and tasks involving trajectory-level action prediction remain as our future work.
\begin{table}
\begin{tabular}{l l c} \hline \hline & & Average success rate in \\ \multicolumn{2}{c}{Model} & single-task setting (5 tasks) \\ \hline \multirow{4}{*}{Core design choices} & Best Act3D model (evaluated in Fig. 3) & **98.1** \\ & Only 2 stages of coarse-to-fine sampling: & 93.6 \\ & full workspace, 16 cm ball, regress an offset & 80.6 \\ & No weight tying across stages & 55.4 \\ & Absolute 3D positional embeddings & 89.8 \\ & Attention to only global coarse visual features & 93.2 \\ \hline \multirow{2}{*}{Viewpoint changes} & Best Act3D model (evaluated in Fig. 3) & **74.2** \\ & HiveFormer & 20.4 \\ \hline \multirow{2}{*}{Augmentations} & No image augmentations & **91.6** \\ & With rotation augmentations & 86.2 \\ \hline \multirow{4}{*}{Hyperparameter sensitivity} & Double sampling ball diameters: 32 cm and 8 cm & 96.6 \\ & Halve sampling ball diameters: 8 cm and 2 cm & 91.2 \\ & 500 ghost points at training time & 95.8 \\ & 2000 ghost points at training time (need 2 GPUs) & **98.4** \\ \hline \hline \multicolumn{2}{c}{Multi-task setting (18 tasks)} \\ \hline \multirow{3}{*}{Backbone} & CLIP ResNet50 backbone & **65.1** \\ & ImageNet ResNet50 backbone & 53.4 \\ \cline{1-1} & No backbone (raw RGB) & 45.2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Ablations.** |
2307.16820 | Shear localisation controls the dynamics of earthquakes | Earthquakes are produced by the propagation of rapid slip along tectonic
faults. The propagation dynamics is governed by a balance between elastic
stored energy in the surrounding rock, and dissipated energy at the propagating
tip of the slipping patch. Energy dissipation is dictated by the mechanical
behaviour of the fault, which is itself the result of feedbacks between
thermo-hydro-mechanical processes acting at the mm to sub-mm scale. Here, we
numerically simulate shear ruptures using a dual scale approach, allowing us to
couple a sub-mm description of inner fault processes and km-scale
elastodynamics, and show that the sudden localisation of shear strain within a
shear zone leads to the emergence of classical cracks driven by a constant
fracture energy. The fracture energy associated to strain localisation is
substantially smaller than that predicted assuming uniform shearing. We show
the existence of a unique scaling law between the localised shearing width and
the rupture speed. Our results indicate that earthquakes are likely to be
systematically associated to extreme strain localisation. | Fabian Barras, Nicolas Brantut | 2023-07-31T16:34:26Z | http://arxiv.org/abs/2307.16820v1 |
# Shear localisation controls the dynamics of earthquakes
###### Abstract
Earthquakes are produced by the propagation of rapid slip along tectonic faults. The propagation dynamics is governed by a balance between elastic stored energy in the surrounding rock, and dissipated energy at the propagating tip of the slipping patch. Energy dissipation is dictated by the mechanical behaviour of the fault, which is itself the result of feedbacks between thermo-hydro-mechanical processes acting at the mm to sub-mm scale. Here, we numerically simulate shear ruptures using a dual scale approach, allowing us to couple a sub-mm description of inner fault processes and km-scale elastodynamics, and show that the sudden localisation of shear strain within a shear zone leads to the emergence of classical cracks driven by a constant fracture energy. The fracture energy associated to strain localisation is substantially smaller than that predicted assuming uniform shearing. We show the existence of a unique scaling law between the localised shearing width and the rupture speed. Our results indicate that earthquakes are likely to be systematically associated to extreme strain localisation.
## 1 Introduction
Earthquake sources correspond to slip events dynamically propagating along faults. At crustal scale, faults can be viewed as two-dimensional surfaces, across which the displacement field is discontinuous. However, geological and experimental observations show that "slip" across faults is the result of shear deformation across narrow layers of highly comminuted, transformed or partially melted rocks. In the shallow continental crust, fault core materials are often made of fine-grained silicastic and clay gouges, with a porosity filled with pressurised water (e.g. _Scholz_, 1988; _Rice_, 2006). The dynamics of ruptures in crustal faults is controlled by the rheology of these water-saturated fault gouges.
During earthquakes, faults slide at elevated slip rates of the order of metres per second, which leads to dramatic weakening of fault gouge materials (_Scholz_, 2019, chap. 2). In dry materials, weakening is most likely controlled by the local temperature rise arising from dissipation of frictional work, combined with thermally activated rheology of the rock-forming minerals (e.g _Rice_, 2006; _Beeler et al._, 2008; _Proctor et al._, 2014; _De Paola et al._, 2015; _Yao et al._, 2015; _Pozzi et al._, 2021; _Harbord et al._, 2021). In the presence of fluids, an additional weakening mechanism is expected, due to the differential thermal expansion of the pore fluids and the solid pore space: upon heating, the fluid pressure rises, effective normal stress decreases and the frictional strength
drops. This so-called "thermal pressurisation" mechanism, initially proposed by _Sibson_ (1975) as a temperature-limiting process in wet rocks, has been shown to produce realistic predictions for thermal evolution and energy dissipation during earthquakes (e.g. _Rice_, 2006; _Viesca and Garagash_, 2015), and is a potential candidate to explain some of the complexity observed in natural earthquakes (e.g. _Noda and Lapusta_, 2013) and the operation of plate boundary faults at low ambient stress (e.g. _Noda et al._, 2009; _Lambert et al._, 2021).
The thickness of the actively deforming zone determines the shear heating rate and how easily fluids and heat diffuse away from the fault plane, and thus has a tremendous influence on the resulting rupture dynamics (_Andrews_, 2002; _Noda et al._, 2009; _Noda and Lapusta_, 2010; _Viesca and Garagash_, 2015; _Lambert et al._, 2021). While geological and experimental observations can be used to constrain the thickness of actively deforming fault gouge material, the range of acceptable values spans more than 3 orders of magnitude, from fractions of millimetres to centimetres (_Rice_, 2006), and it is one of the key unknown that limits our ability to determine the efficiency of thermal weakening mechanisms in nature.
The influence of shear zone width on earthquake propagation is further complicated by the fact that this parameter is likely evolving during seismic slip: strain localisation is expected to be part of the fault weakening process. Several mechanisms might be responsible for strain localisation during earthquake slip, including granular rearrangements and grain size reduction (e.g., _Mair and Abe_, 2008; _Hermundstad et al._, 2010), shear heating coupled to thermal weakening (e.g., _Braeck and Podladchikov_, 2007), thermal pressurisation (e.g., _Sulem et al._, 2011; _Rice et al._, 2014; _Platt et al._, 2014) and thermal decomposition (e.g., _Veveakis et al._, 2012; _Platt et al._, 2015). In all cases, the strain localisation process is associated with a rapid reduction in shear strength, and we therefore expect strain localisation to exert a strong control on the overall dynamics of rupture.
Here, we demonstrate and quantify how strain localisation impacts rupture dynamics: we run dynamic rupture simulations and compute the fracture energy associated with the localisation process, and find a relationship between the rupture speed and the degree of strain localisation within the fault gouge. We use the case of thermal pressurisation as a representative thermal weakening process that is compatible with seismological observations (_Rice_, 2006; _Viesca and Garagash_, 2015; _Lambert et al._, 2021), and is known to spontaneously lead to strain localisation (_Rice et al._, 2014; _Platt et al._, 2014). We argue that the interplay between rupture dynamics and strain localisation analysed here applies to most thermal weakening processes in rocks and engineering materials.
## 2 Shear localisation and faulting
As a general starting point for our analysis, let us consider slip on a geological fault as the deformation distributed across a narrow pre-existing shear zone (e.g., a fault gouge made of unconsolidated grains), and examine the conditions under which shear strain can be further localised within this pre-existing zone. Let us assume that the shear stress \(\tau\) across the shear zone is a function of a set of variables that includes the shear strain rate \(\dot{\gamma}\) and another diffusive quantity \(\vartheta\):
\[\tau\sim f(\dot{\gamma},\vartheta), \tag{1}\]
and that the rate of work produced by shearing acts as a source term in the diffusion of \(\vartheta\):
\[\dot{\vartheta}=\beta\tau\dot{\gamma}+\alpha\nabla^{2}\vartheta, \tag{2}\]
where \(\nabla^{2}\) denotes the Laplace operator, \(\alpha\) is a diffusivity and \(\beta\) is analogous to the Taylor–Quinney coefficient (_Taylor and Quinney_, 1934) for the cases when \(\vartheta\) corresponds to temperature. In relation to the first condition, one can define
\[g^{\prime}(\dot{\gamma},\vartheta)=\frac{\partial f}{\partial\dot{\gamma}}, \tag{3}\]
which describes the rate-dependent rheology of the material. Natural examples include viscous creep of rocks at elevated temperature, granular material rheology (e.g. _Jop et al._, 2006) and rate-and-state friction. Similarly, one can define
\[h^{\prime}(\dot{\gamma},\vartheta)=\frac{\partial f}{\partial\vartheta}, \tag{4}\]
to describe the effect of \(\vartheta\) on the material rheology. In practice, this diffusive quantity \(\vartheta\) will often correspond to temperature and \(h^{\prime}\) describes thermal weakening effects. \(\vartheta\) could also correspond to fluid pressure in a porous material whose strength is reduced by an increase in pore fluid pressure (following the concept of effective stress discussed later in Equation 7). It can also account for the combined effect of pressure and temperature as in the case of thermal pressurisation that will be discussed later in this manuscript. If conditions (1) and (2) are met, a linear stability analysis (detailed in Appendix A) demonstrates that uniform shearing at a given time \(t=t_{0}\) becomes unstable if:
\[\frac{h^{\prime}_{0}}{g^{\prime}_{0}}<0, \tag{5}\]
Figure 1: Schematic of the dual scale setup governing the propagation of localised shear bands. The dynamic rupture extends over large (kilometric) scales along the fault (\(x-z\) plane), whereas the frictional strength is determined by solving a coupled diffusion problem, such as thermal pressurisation in this paper, where strain rate spontaneously evolves over submillimetre scales across the fault gouge (in the \(y\) direction).
with \(\{f_{0},g^{\prime}_{0},h^{\prime}_{0}\}=\{f,g^{\prime},h^{\prime}\}|_{t=t_{0}}\). Moreover, the analysis also shows that only perturbation wavelengths \(\lambda\) greater than a critical wavelength are unstable:
\[\lambda>2\pi\sqrt{-\frac{\alpha}{\beta f_{0}}\frac{g^{\prime}_{0}}{h^{\prime}_{ 0}}}\equiv\lambda_{\mathrm{c}}, \tag{6}\]
which indicates that such an instability leads to the localisation of shear strain down to some thickness \(W_{\mathrm{loc}}\sim\lambda_{\mathrm{c}}/2\). Remarkably, this type of localisation instability can also arise within rate-strengthening materials (\(g^{\prime}_{0}>0\)) provided that \(h^{\prime}_{0}<0\), as is often the case with thermal weakening mechanisms. As a result, shear flow concentrates over a thickness much smaller than the initial width of the shear zone and leads to a substantial drop of the associated shear stress. Examples of this strain localisation instability have been described by _Rice et al._ (2014); _Platt et al._ (2014) in the context of thermal pressurisation in crustal faults, as well as _Bai_ (1982) in the context of adiabatic shear banding in metals. In this work, we quantitatively investigate how strain localisation across the shear zone drives the rapid acceleration of slip and the propagation of the rupture front along the fault plane during earthquakes.
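For a rough illustration, the criterion (5) and the critical wavelength (6) can be evaluated directly; the parameter values in the sketch below are arbitrary, uncalibrated numbers and not measured fault gouge properties.

```python
import numpy as np

def critical_wavelength(alpha, beta, f0, g0, h0):
    """Smallest unstable wavelength, equation (6); None if uniform shearing is stable."""
    if h0 / g0 >= 0:
        return None
    return 2 * np.pi * np.sqrt(-(alpha / (beta * f0)) * (g0 / h0))

# illustrative numbers: rate-strengthening (g0 > 0) but weakening in theta (h0 < 0)
alpha = 1e-6     # diffusivity [m^2/s]
beta = 1.6e-5    # source coefficient (units set by the choice of theta)
f0 = 50e6        # shear stress [Pa]
g0 = 1e4         # d tau / d gamma_dot  [Pa s]
h0 = -5e4        # d tau / d theta      [Pa per unit of theta]
lc = critical_wavelength(alpha, beta, f0, g0, h0)
print(f"lambda_c = {lc:.2e} m,  W_loc ~ lambda_c / 2 = {lc / 2:.2e} m")
```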
## 3 Model
Here, we analyse the process of thermal pressurisation, which has been shown to be a realistic dynamic weakening mechanism (e.g. _Rice_, 2006; _Viesca and Garagash_, 2015; _Lambert et al._, 2021) and that undergoes the localisation instability outlined above (_Rice et al._, 2014; _Platt et al._, 2014).
In this case, the diffusive variable \(\vartheta\) corresponds to pore fluid pressure \(p\) that affects the effective normal stress (\(\sigma_{\mathrm{n}}-p\)) in the shear zone and, thereby, its shear strength together with a rate-dependent friction coefficient:
\[\tau(\dot{\gamma},p)=f_{\mathrm{rsf}}(\dot{\gamma})(\sigma_{\mathrm{n}}-p). \tag{7}\]
In Equation (7), we adopt the rate-strengthening rheology \(f_{\mathrm{rsf}}(\dot{\gamma})\) of _Platt et al._ (2014) detailed in Appendix B.2. Moreover, fluid diffusion across the shear zone is governed by a coupled system of thermal and hydraulic diffusion equations (see Equations 31 in Appendix). This thermo-hydraulic coupling is caused by the different compressibilities and thermal expansivities of the solid matrix and the pore fluid, and describes the change in pore pressure produced by local temperature variations in the gouge.
Exploring the interplay between strain localisation and rupture dynamics is a challenging dual-scale problem: it requires solving for heat and fluid diffusion at the scale of the fault core (from millimeters to centimeters in natural fault zones) together with the elastodynamics governing the propagation of the earthquake rupture along the fault (elastic waves moving at kilometers per second in crustal rocks). We follow _Noda et al._ (2009) and take advantage of the separation of scales to solve thermal pressurisation only across the fault (along the \(y\) axis in Figure 1). In this one-dimensional configuration, _Platt et al._ (2014) found numerically that shear localisation, under an imposed constant slip rate \(V\) across the gouge, stabilises to a finite
width that is well approximated by
\[W_{\rm rsf}(V)\simeq 6.9\frac{A\rho c}{\Lambda f_{\rm c}}\frac{(\sqrt{\alpha_{\rm hy }}+\sqrt{\alpha_{\rm th}})^{2}}{V(f_{\rm c}+2A)}, \tag{8}\]
where \(\alpha_{\rm hy,th}\) correspond respectively to the hydraulic and thermal diffusivities, \(\rho c\) is the heat capacity of the gouge material, and \(\Lambda\) is the thermal pressurisation parameter that describes the change of pore fluid pressure caused by an increase in temperature in the gouge. The characteristic shear strength
\[\tau_{\rm c}=f_{\rm c}(\sigma_{\rm n}-p_{0})=f_{\rm rsf}(V/h)(\sigma_{\rm n}-p_ {0}) \tag{9}\]
is a function of the initial uniform strain rate \(\dot{\gamma}=V/h\) and background pore pressure \(p_{0}\). In Equation (8), the constant \(A\) corresponds to the rate-strengthening coefficient that describes the "direct" response of the shear zone to a change in strain rate, similar to standard rate-and-state models (see Equation (32) in Appendix). Once the localisation width \(W_{\rm rsf}\) is much smaller than the diffusion length scale, thermal pressurisation can be well approximated by a _slip-on-a-plane_ solution assuming \(\dot{\gamma}(y)=V\bar{\delta}(y)\), with \(\bar{\delta}\) being the Dirac delta distribution. In this situation, the shear stress across the shear zone is only a function of the imposed slip rate \(V\) and the accumulated slip \(\delta=Vt\) (_Rice_, 2006; _Mase and Smith_, 1987):
\[\tau_{\rm sp}(\delta;V)=\tau_{\rm c}\exp\Big{(}\frac{\delta}{L^{*}}\Big{)}{ \rm erfc}\Big{(}\sqrt{\frac{\delta}{L^{*}}}\Big{)}, \tag{10}\]
where
\[L^{*}(V)=\frac{4}{f_{\rm c}^{2}}\Big{(}\frac{\rho c}{\Lambda}\Big{)}^{2}\frac {(\sqrt{\alpha_{\rm hy}}+\sqrt{\alpha_{\rm th}})^{2}}{V}. \tag{11}\]
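For reference, the closed-form expressions (8), (10) and (11) are straightforward to evaluate numerically; the sketch below does so for illustrative parameter values (heat capacity, pressurisation factor, diffusivities and friction level are placeholders rather than the values used in our simulations).

```python
import numpy as np
from scipy.special import erfc

def W_rsf(V, A, rho_c, Lam, f_c, a_hy, a_th):
    """Localised width under an imposed slip rate V, Eq. (8)."""
    return 6.9 * (A * rho_c / (Lam * f_c)) * (np.sqrt(a_hy) + np.sqrt(a_th)) ** 2 \
        / (V * (f_c + 2.0 * A))

def L_star(V, rho_c, Lam, f_c, a_hy, a_th):
    """Characteristic slip-weakening distance of the slip-on-a-plane solution, Eq. (11)."""
    return 4.0 / f_c ** 2 * (rho_c / Lam) ** 2 * (np.sqrt(a_hy) + np.sqrt(a_th)) ** 2 / V

def tau_sp(delta, V, tau_c, rho_c, Lam, f_c, a_hy, a_th):
    """Slip-on-a-plane shear stress, Eq. (10)."""
    L = L_star(V, rho_c, Lam, f_c, a_hy, a_th)
    return tau_c * np.exp(delta / L) * erfc(np.sqrt(delta / L))

pars = dict(rho_c=2.7e6, Lam=0.3e6, f_c=0.25, a_hy=1e-5, a_th=1e-6)  # placeholders
V = 1.0                                                              # m/s
print("W_rsf  :", W_rsf(V, A=0.01, **pars), "m")
print("tau_sp :", tau_sp(0.1, V, tau_c=30e6, **pars), "Pa")
```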
During earthquakes, slip rate across the shear zone is far from being constant and evolves rapidly near the tip of the propagating rupture. Next, we aim to analyse the coupling between strain localisation, slip acceleration and rupture dynamics in a simple faulting geometry that is sufficient to capture its key physical aspects. We consider a planar fault within an infinite linear elastic medium sliding in anti-plane shear (mode-III). In this configuration shown in Figure 1, the shear traction at the fault \(\tau(x,t)\) can be related to the slip rate across the fault \(V(x,t)\) and, thereby, the strain rate in the shear zone, following
\[\tau(x,t)=\tau_{\rm b}-\frac{\mu}{2c_{\rm s}}V(x,t)+\phi(x,t)=\tau_{\rm b}- \frac{\mu}{2c_{\rm s}}\int_{-h/2}^{h/2}\dot{\gamma}(x,y,t)\,{\rm d}y+\phi(x,t). \tag{12}\]
In the equation above, \(\mu\) is the shear modulus, \(c_{\rm s}\) the shear wave speed of the linear elastic medium surrounding the shear zone, \(\phi\) is the non-local dynamic contribution accounting for the history of slip along the interface, and \(\tau_{\rm b}\) represents the far-field background stress. Equation (12) allows us to define the characteristic seismic slip rate \(V_{\rm c}\) and associated uniform strain rate \(\dot{\gamma}_{\rm c}\) as
\[\dot{\gamma}_{\rm c}=\frac{V_{\rm c}}{h}=\frac{2c_{\rm s}\tau_{\rm b}}{h\mu}, \tag{13}\]
which are used in the remainder of the paper together with the related characteristic shear strength \(\tau_{\rm c}\) (9). The elastodynamic equation (12) couples the strain rate in the shear zone \(\dot{\gamma}\)
to the shear stress \(\tau\) and allows us to implement a dual-scale coupled numerical scheme that solves the rupture elastodynamics along the shear zone together with pressure and temperature diffusion across the shear zone. The details of our coupled numerical scheme are given in Appendix D.
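The listing below sketches, in a strongly simplified quasi-dynamic form, the structure of such a coupled update: a diffusion solve for \(p\) and \(T\) across the gouge with frictional heating as source, a strain-rate profile consistent with a placeholder rate-strengthening rheology, and the fault-parallel balance of Equation (12) reduced to its radiation-damping term. The non-local history term \(\phi\) and the actual scheme of Appendix D are omitted, and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

# Gouge-scale grid and illustrative material parameters (placeholders)
Ny, h = 201, 1e-3                      # points across the gouge, gouge width [m]
y = np.linspace(-h / 2, h / 2, Ny)
dy, dt = y[1] - y[0], 2e-7             # explicit step well below dy**2 / (2 a_hy)
a_hy, a_th = 4e-6, 1e-6                # hydraulic / thermal diffusivities [m^2/s]
Lam, rho_c = 0.3e6, 2.7e6              # pressurisation factor, heat capacity [Pa/K]
mu, c_s, sig_n = 30e9, 3e3, 100e6      # shear modulus, shear wave speed, normal stress
tau_b = 40e6                           # far-field background stress [Pa]
f0, A, gref = 0.25, 0.016, 1e-3        # placeholder rate-strengthening rheology

def gamma_dot(tau, p):
    """Strain-rate profile obtained by inverting tau = f_rsf(gdot) * (sig_n - p)."""
    return gref * np.exp((tau / (sig_n - p) - f0) / A)

def lap(u):
    """1-D Laplacian across the gouge (crude zero-curvature boundaries)."""
    return np.r_[0.0, (u[2:] - 2 * u[1:-1] + u[:-2]) / dy ** 2, 0.0]

def stress_balance(tau, p):
    """Quasi-dynamic version of Eq. (12): tau = tau_b - mu/(2 c_s) V, with phi omitted."""
    return tau - tau_b + mu / (2.0 * c_s) * np.trapz(gamma_dot(tau, p), y)

p = 1e6 * np.exp(-(y / (h / 10)) ** 2)  # small pore-pressure perturbation
T = np.zeros(Ny)

for step in range(2000):
    tau = brentq(stress_balance, 1e3, tau_b, args=(p,))    # stress on the fault
    gdot = gamma_dot(tau, p)
    dT_dt = a_th * lap(T) + tau * gdot / rho_c             # heat diffusion + shear heating
    dp_dt = a_hy * lap(p) + Lam * dT_dt                    # thermal pressurisation
    T, p = T + dt * dT_dt, p + dt * dp_dt

V = np.trapz(gdot, y)
print(f"V = {V:.3e} m/s, half-max localised width = {np.sum(gdot > gdot.max()/2) * dy:.2e} m")
```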
## 4 Results
In our simulations, the shear zone is initially creeping at aseismic slip velocity and, at time \(t=0\), failure is nucleated by raising the pore pressure near the center of the fault \(x=0\) (further details of the nucleation procedure and parameter values are given in Appendix D). Initially, acceleration of slip is mostly concentrated in the nucleation region, followed by a rapid lateral rupture propagation whereby the slip rate increases in an expanding region beyond the initial nucleation patch, concomitantly with a shear stress drop linked with thermal pressurisation of pore fluids inside the gouge and intense strain localisation (Figure 2). Rupture acceleration coincides with larger slip velocities and stress drop at the tip (Figure 3a-c) and more intense
Figure 2: Dynamic rupture driven by shear localisation simulated with the coupled model. The top panels a) and b) respectively present snapshots at different times of the longitudinal profile of slip rate and shear stress during which the rupture accelerates from sixty to about ninety percent of the shear wave velocity. Note that the simulated domain is symmetric with respect to the nucleation position \(x=0\) such that another rupture tip moves toward the negative positions. The bottom panels c) and d) present the profile of strain rate \(\dot{\gamma}\), pressure \(p\) and temperature \(T\) at the successive positions of the rupture tip highlighted by black dots in panel a) and b). See Appendix C and Table 1 for further details on the dimensional analysis behind this coupled problem and the dimensionless scaling used to plot the data in the different panels.
localisation of shear deformation across the gouge, where strain rates up to four orders of magnitude larger concentrate on less than five percent of the thickness of the shear zone (Figure 3b). Interestingly, the peak slip rate and the drop of shear stress measured at different positions along the fault arise for the same characteristic slip value \(\delta_{\mathrm{loc}}\) and coincide with intense strain localisation. The observed value of \(\delta_{\mathrm{loc}}\) is identical to the one reported from one-dimensional simulations under imposed velocity and is of the order of \(\delta_{\mathrm{c}}\) (see Figures 9 and 10 of _Platt et al._ (2014)). Remarkably, this observation enables us to apply the one-dimensional theory discussed in the previous section to derive predictions of the shear zone dynamics after strain localisation. For instance, the slip-on-a-plane solution described in Equation (10) can be used to capture the magnitude of the residual shear stress reached immediately after strain localisation, \(\tau_{\mathrm{res}}\approx\tau_{\mathrm{sp}}(\delta=\delta_{\mathrm{c}};V=V_{\mathrm{tip}})\), with \(V_{\mathrm{tip}}\) being the slip rate observed at the rupture tip (see Figure 3c and related caption). Moreover, once the localisation instability arises, the thickness of actively strained material at various positions along the interface collapses on a single \(W_{\mathrm{loc}}(V)\) curve, which follows the prediction given in Equation (8).
### Rupture dynamics driven by shear localisation
Next, we quantitatively demonstrate how strain localisation is the driving mechanism of the propagating rupture. To do so, we analyse snapshots of the propagating rupture and the near-tip evolution of the macroscopic and microscopic mechanical variables (Figure 4). Ahead of the propagating tip (point A), the shear zone is creeping with uniform shear strain rate. As the rupture approaches, the strain rate builds up uniformly across the gouge (point B) until the localisation instability arises (point C) together with a rapid increase in the macroscopic slip rate \(V\) and an abrupt drop of shear stress \(\tau\). In the wake of the rupture (point D), the profile of strain rate across the gouge progressively delocalises, following the decay of the macroscopic slip rate given by the prediction \(W_{\mathrm{rsf}}(V)\) shown in Figure 3b. The near-tip evolution of \(V\) and \(\tau\) is reminiscent of the singular solutions at the tip of a dynamic fracture (_Freund_, 1990). Defining \(\alpha_{\mathrm{s}}^{2}=1-v_{\mathrm{r}}^{2}/c_{\mathrm{s}}^{2}\), the analogy to linear elastic fracture mechanics (LEFM) can be quantitatively tested by rescaling the slip rate and stress according to
\[V\frac{\mu\alpha_{\mathrm{s}}}{2v_{\mathrm{r}}}=\tau-\tau_{\mathrm{res}}= \Delta\tau=\frac{K}{\sqrt{2\pi(x-x_{\mathrm{tip}})}} \tag{14}\]
and fitting the dynamic fracture solution following the procedure of _Barras et al._ (2020). The stress intensity factor \(K\), residual stress \(\tau_{\mathrm{res}}\) and position of the tip \(x_{\mathrm{tip}}\) are the free parameters that are fitted simultaneously to match the near-tip decrease of \(V\) behind the rupture tip and increase of \(\tau\) ahead of the rupture tip.
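A possible implementation of this fitting step is sketched below, assuming \(x\), \(\tau\) and \(V\) are 1-D arrays extracted from a snapshot and that \(\mu\), \(\alpha_{\mathrm{s}}\) and \(v_{\mathrm{r}}\) are known; the exact windowing and weighting used by _Barras et al._ (2020) may differ.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_lefm(x, tau, V, mu, alpha_s, v_r, x_tip_guess, fit_window):
    """Fit K, tau_res and x_tip of Eq. (14) to near-tip stress and slip-rate profiles."""
    # fixed fit windows (relative to the initial tip guess) keep the residual size constant
    ahead = (x > x_tip_guess) & (x < x_tip_guess + fit_window)
    behind = (x < x_tip_guess) & (x > x_tip_guess - fit_window)

    def residuals(params):
        K, tau_res, x_tip = params
        r_ahead = tau[ahead] - (tau_res + K / np.sqrt(2 * np.pi *
                                np.maximum(x[ahead] - x_tip, 1e-12)))
        r_behind = (V[behind] * mu * alpha_s / (2 * v_r)
                    - K / np.sqrt(2 * np.pi * np.maximum(x_tip - x[behind], 1e-12)))
        return np.concatenate([r_ahead, r_behind])

    sol = least_squares(residuals, x0=[1e6, float(np.median(tau)), x_tip_guess])
    K, tau_res, x_tip = sol.x
    return dict(K=K, tau_res=tau_res, x_tip=x_tip,
                G_c=K ** 2 / (2 * mu * alpha_s))   # rupture energy, Eq. (15)
```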
The good agreement with dynamic fracture solution (dashed blue curves in Figure 4) confirms the crack-like nature of the simulated rupture process near the tip of the slipping patch. Such agreement allows us to use the inverted value of \(K\) and invoke the crack-tip energy balance to compute the rupture energy
\[G_{\mathrm{c}}=\frac{K^{2}}{2\mu\alpha_{\mathrm{s}}}, \tag{15}\]
which corresponds to the part of dissipated energy that governs the propagation of the rupture. In seismology, extracting the fracture energy of natural earthquakes still eludes the resolution of
Figure 3: Time evolution of the elastic variables at different locations along the interface during the dynamic rupture shown in Figure 2. Line colors relate to the positions along the interface \(x/L_{\rm c}\) and the associated propagation speeds of the rupture \(v_{\rm r}/c_{\rm s}\), whereas the arrows point to the direction of forward time evolution. Slip rate (a) and shear stress (c) versus slip, revealing how the peak slip rate is associated with an abrupt stress drop and arises at the same amount of cumulated slip \(\delta_{\rm loc}\approx 0.3\delta_{\rm c}\). (b) Slip rate versus width of strain rate localisation \(W_{\rm loc}\) measured from the \(\dot{\gamma}(y)\) profiles following the procedure shown in Figure 3 of _Platt et al._ (2014). The different post-peak delocalisation trajectories collapse along a single prediction given in Equation (8). The dashed lines in panel (c) correspond to the prediction \(\tau_{\rm sp}(\delta;V_{\rm tip})\) and give a good estimate of the residual shear stress reached after strain localisation. The slip rate at the rupture tip \(V_{\rm tip}\) is approximated by \(V\) at the mid-time between the peaks in shear stress and in slip rate. (A more precise definition of the tip position is discussed and computed later in the context of Figure 4.)
Figure 4: Snapshot near the tip of the propagating rupture shown in Figure 2. The bottom panel presents the spatial evolution of the shear stress and slip rate, which are simultaneously fitted by the fracture mechanics prediction shown by the dashed blue curve. (See the main text for details on the fitting procedure). Top panels show the strain rate profile across the shear zone observed at the instants A, B, C and D corresponding to the black dots in the bottom panel.
seismic inversions, such that the _breakdown work_ is often used as a proxy for \(G_{\mathrm{c}}\) and integrates the excess of work on top of residual friction (_Tinti et al._, 2005; _Cocco et al._, 2023). For systems where frictional weakening does not necessarily reach a well-defined residual value, the breakdown work is defined as (_Abercrombie and Rice_, 2005):
\[E_{\mathrm{BD}}(\delta)=\int_{0}^{\delta}\Big{(}\tau(\delta^{\prime})-\tau( \delta)\Big{)}\,\mathrm{d}\delta^{\prime}. \tag{16}\]
In our numerical simulations, the integration of \(E_{\mathrm{BD}}\) at different locations along the interface reveals a clear plateau over an order of magnitude in slip (Figure 5), which indicates the portion of \(E_{\mathrm{BD}}\) that corresponds to \(G_{\mathrm{c}}\), following _Brener and Bouchbinder_ (2021). Remarkably, we can then quantitatively verify that the two independent estimates of the rupture energy (from the near-tip singularity and the integration of the breakdown work) are in excellent agreement (gray horizontal line in Figure 5), as another proof of the crack-like nature of the rupture dynamics. Furthermore, the observed plateau in \(E_{\mathrm{BD}}\) is clearly associated with the rapid stress drop caused by the localisation instability (see the \(\tau(\delta)\) profile in Figure 3c) and confirms that rapid strain localisation is the driving mechanism of the propagating rupture. In addition, the magnitude of \(G_{\mathrm{c}}\) associated with strain localisation is more than five times smaller than that expected from uniform shearing under adiabatic undrained conditions (\(\sim\tau_{\mathrm{c}}\delta_{\mathrm{c}}\)).
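Numerically, Equation (16) reduces to a cumulative integral of the local \(\tau(\delta)\) history, as in the short sketch below (assuming slip \(\delta\) is measured from the onset of sliding at that location).

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def breakdown_work(delta, tau):
    """E_BD(delta) of Eq. (16): integral of tau over slip minus tau(delta) * delta.

    delta: increasing slip values starting at 0; tau: shear stress at those slips."""
    return cumulative_trapezoid(tau, delta, initial=0.0) - tau * delta

# A plateau of breakdown_work over a decade of slip marks the edge-localised part
# of the dissipation that plays the role of the rupture energy G_c.
```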
The interplay between strain localisation and rupture dynamics can be further established by relating the thinnest localisation width observed at a given location along the interface to the local speed of the rupture (Figure 6). Following the behavior reported in Figure 3a, the dynamic stress drop caused by the localisation instability can be well estimated by the slip-on-a-plane solution (10). Together with the elastodynamic relation (14), it relates the slip rate near the rupture tip \(V_{\mathrm{tip}}\) to the rupture speed \(v_{\mathrm{r}}\), which can further be combined to the solution \(W_{\mathrm{rsf}}(V_{\mathrm{tip}})\) of Equation (8):
\[\begin{cases}V_{\mathrm{tip}}=\dfrac{2v_{\mathrm{r}}}{\mu\alpha_{\mathrm{s}}} \Big{(}\tau_{\mathrm{c}}-\tau_{\mathrm{sp}}(\delta_{\mathrm{c}};V_{\mathrm{tip }})\Big{)},\\ W_{\mathrm{loc}}=W_{\mathrm{rsf}}(V_{\mathrm{tip}}).\end{cases} \tag{17}\]
The implicit relation above provides a universal relationship between \(v_{\mathrm{r}}\) and the degree of localisation \(W_{\mathrm{loc}}\) observed at the scale of the gouge. Its good agreement with data from different simulations (Figure 6) unveils the dynamical feedback loop at play between strain localisation and rupture dynamics: the dynamic stress drop caused by strain localisation drives the acceleration of slip, which further amplifies the dynamic stress drop and promotes rupture acceleration.
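In practice, the system (17) can be solved by a scalar root-find on \(V_{\mathrm{tip}}\) for each rupture speed, as sketched below with illustrative placeholder parameters (Equation (10) is rewritten with the scaled complementary error function for numerical robustness at small \(L^{*}\)).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcx   # erfcx(z) = exp(z**2) * erfc(z), avoids overflow

pars = dict(rho_c=2.7e6, Lam=0.3e6, f_c=0.25, A=0.01, a_hy=1e-5, a_th=1e-6)  # placeholders
mu, c_s, tau_c, delta_c = 30e9, 3e3, 25e6, 0.05

def L_star(V, rho_c, Lam, f_c, a_hy, a_th, **_):
    return 4.0 / f_c**2 * (rho_c / Lam)**2 * (np.sqrt(a_hy) + np.sqrt(a_th))**2 / V

def tau_sp(delta, V, **p):
    # Eq. (10): tau_c * exp(delta/L*) * erfc(sqrt(delta/L*)) = tau_c * erfcx(sqrt(delta/L*))
    return tau_c * erfcx(np.sqrt(delta / L_star(V, **p)))

def W_rsf(V, rho_c, Lam, f_c, A, a_hy, a_th):
    return 6.9 * A * rho_c / (Lam * f_c) * (np.sqrt(a_hy) + np.sqrt(a_th))**2 / (V * (f_c + 2 * A))

def localised_width(v_r):
    """Solve the implicit system (17): V_tip from the stress drop, then W_loc."""
    alpha_s = np.sqrt(1.0 - (v_r / c_s) ** 2)
    g = lambda V: V - 2 * v_r / (mu * alpha_s) * (tau_c - tau_sp(delta_c, V, **pars))
    V_tip = brentq(g, 1e-6, 1e3)
    return W_rsf(V_tip, **pars), V_tip

for v_r in (0.5 * c_s, 0.8 * c_s, 0.95 * c_s):
    W, V = localised_width(v_r)
    print(f"v_r/c_s = {v_r/c_s:.2f}: V_tip ~ {V:.2f} m/s, W_loc ~ {W*1e6:.0f} micrometres")
```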
## 5 Discussion
Our simulations demonstrate that strain localisation produces a loss of stress-bearing capacity of shear zones that can create and sustain earthquake rupture. The abrupt drop of shear stress produces an accelerating crack-like rupture in agreement with the predictions of LEFM. Notably, the rupture is driven by a well-defined fracture energy that corresponds to the edge-localised dissipation during the localisation process.
Such a behaviour is in contrast with that of ruptures driven by thermal pressurisation without internal strain localisation (_Viesca and Garagash_, 2015), for which breakdown work uniformly
Figure 5: Breakdown energy integrated from the \(\tau\) versus \(\delta\) profiles at different positions along the fault, for the simulation shown in Figure 3 (labelled “strong strain localisation”, blue dots) and for another simulation with the same parameters but larger hydraulic diffusivity (labelled “weak strain localisation”, brown dots). The rupture simulated with larger hydraulic diffusivity shows very weak strain localisation. (Additional plots of the dynamics observed during that simulation are given in Figures 7 and 8 of the Appendix). The rapid loss of stress caused by strain localisation creates a horizontal plateau whose associated magnitude is well predicted by the rupture energy inverted from the dynamic fracture fit shown in Figure 4 and highlighted here by the horizontal gray line.
Figure 6: Minimum strain rate localisation width \(W_{\rm loc}\) versus instantaneous rupture speed \(v_{\rm r}\) computed during rupture propagation for several simulations using the same parameters but different background stresses. Heterogeneous simulations with steps in background stress were conducted with an initial plateau of \(\tau_{\rm b}/\tau_{\rm c}=0.59\) around the nucleation zone, and a sharp drop down to a smaller value at position \(x/L_{\rm c}=\pm 11.52\) away from the center of the nucleation region. In heterogeneous simulations, rupture speeds may vary nonmonotonically during propagation, initially increasing and subsequently decreasing when encountering a large downstep in stress. Regardless of the details of the dynamics, the relationship between peak localised width and rupture speed is well approximated by the theoretical prediction proposed in Equation (17).
increases with increasing slip, i.e., without a well-defined residual strength at any point within the propagating rupture. Similarly, the integrated breakdown work for simulated ruptures that feature weak strain localisation lacks a well-defined edge-localised rupture energy (Figure 5). In this case, the rupture tip singularity significantly deviates from the fracture mechanics prediction (14) and the rupture dynamics is no longer governed by the local near-tip energy balance (_Brener and Bouchbinder_, 2021), as further discussed in Appendix D.6. Far from the rupture tip, our simulations show further shear weakening driven by the diffusion process (2) that continues at a slower pace as strain delocalises, and we approach again the slip-on-a-plane asymptotics described by _Viesca and Garagash_ (2015).
Strain localisation within a pre-existing gouge material is strongly correlated with the dynamics of fault slip, and specifically with the rupture speed (Figure 6). The degree of strain localisation increases with increasing rupture speed, with a narrowing of the deformed, heated and pressurised region, approaching 1/1000th of the initial shear zone width. Despite the complexity of the problem, quantitative estimates can be obtained by a simple analytical approximation (Equation 8) adapted from _Platt et al._ (2014), so that the original predictions for peak localised width listed in _Rice et al._ (2014); _Platt et al._ (2014) still apply. Ideally, we could use the relationship between width and rupture speed depicted in Figure 6 to interpret the localisation features observed in the geological record in terms of rupture dynamics. However, strain localisation in rocks is not exclusively associated with dynamic ruptures and fast slip rates (e.g. _Evans and Kohlstedt_, 1995), and only careful micro- and nano-structural studies can be relied upon to determine the seismic nature of geological structures, notably via detection of features characteristic of frictional heating (_Rowe and Griffith_, 2015). Keeping this caveat in mind, our results highlight that the degree of strain localisation may be used as a complementary indicator of seismic slip: indeed, simulations leading to dynamic ruptures are always associated with strong localisation, with typical widths in the sub-millimeter range (see also _Daub et al._, 2008).
In this paper, we chose thermal pressurisation as the driving mechanism for localisation and implemented a numerical scheme coupling small-scale diffusive processes across the shear zone to long-range elastodynamic coupling along the shear zone. Our results can, however, be generalized to other types of localisation instabilities arising within shear zones where (1) the shear stress in the shear zone is a function of a set of variables that includes the shear strain rate \(\dot{\gamma}\) and another diffusive quantity \(\vartheta\) (Equation 1), and (2) the rate of work produced by shearing acts as a source term in the diffusion of \(\vartheta\) (Equation 2). Importantly, shear localisation can produce and sustain rupture in shear zones having a rate-strengthening rheology (\(g_{0}^{\prime}>0\)), often interpreted as a hallmark of stability and aseismic slip.
If the conditions (5) and (6) are fulfilled, a localisation instability can develop and lead to an abrupt drop of shear stress, which leads to the emergence of a well-defined edge-localised fracture energy and LEFM-like rupture. Far from the tip, any diffusion-driven weakening leads to \(E_{\mathrm{BD}}\sim\delta^{2/3}\) at large slip (_Brantut and Viesca_, 2017). Therefore, the behavior summarized in Figure 5 is expected to arise for any type of localisation-driven rupture, including those where the rheology is controlled by temperature, such as superplasticity (e.g. _Green et al._, 2015; _De Paola et al._, 2015; _Pozzi et al._, 2021; _Harbord et al._, 2021). Indeed, simulations of high speed deformation in metals, which are also rate-hardening and temperature-sensitive, tend to exhibit similar characteristics, with the emergence of a localisation-driven dissipation at the edge of
propagating shear bands (_Bonnet-Lebownier et al._, 2002).
Our work demonstrates how localisation instabilities arising across a creeping shear zone create an abrupt drop of shear stress that promotes the propagation of classical dynamic ruptures over large distances along the shear zone. Whether frictional systems are governed by classical fracture mechanics or by nonlinear friction is an important and debated question in geophysics (e.g. _Svetlizky et al._, 2019; _Lambert et al._, 2021; _Paglialunga et al._, 2022; _Cocco et al._, 2023). Strain localisation is an abrupt structural weakening mechanism that provides a clear separation between the cohesive zone and the interior of the slipping patch, hence justifying the small-scale yielding hypothesis. However, the relative simplicity of the rupture tip behaviour does not preclude any complexity of the overall rupture style. Away from the rupture tip, thermal and hydraulic diffusion and strain delocalisation maintain a slow decay of the shear stress, which is prone to impact how earthquake ruptures stop (_Paglialunga et al._, 2022).
|
2309.10725 | Causality-Driven One-Shot Learning for Prostate Cancer Grading from MRI | In this paper, we present a novel method to automatically classify medical
images that learns and leverages weak causal signals in the image. Our
framework consists of a convolutional neural network backbone and a
causality-extractor module that extracts cause-effect relationships between
feature maps that can inform the model on the appearance of a feature in one
place of the image, given the presence of another feature within some other
place of the image. To evaluate the effectiveness of our approach in low-data
scenarios, we train our causality-driven architecture in a One-shot learning
scheme, where we propose a new meta-learning procedure entailing meta-training
and meta-testing tasks that are designed using related classes but at different
levels of granularity. We conduct binary and multi-class classification
experiments on a publicly available dataset of prostate MRI images. To validate
the effectiveness of the proposed causality-driven module, we perform an
ablation study and conduct qualitative assessments using class activation maps
to highlight regions strongly influencing the network's decision-making
process. Our findings show that causal relationships among features play a
crucial role in enhancing the model's ability to discern relevant information
and yielding more reliable and interpretable predictions. This would make it a
promising approach for medical image classification tasks. | Gianluca Carloni, Eva Pachetti, Sara Colantonio | 2023-09-19T16:08:33Z | http://arxiv.org/abs/2309.10725v1 | # Causality-Driven One-Shot Learning for Prostate Cancer Grading from MRI +
###### Abstract
In this paper, we present a novel method to automatically classify medical images that learns and leverages weak causal signals in the image. Our framework consists of a convolutional neural network backbone and a causality-extractor module that extracts cause-effect relationships between feature maps that can inform the model on the appearance of a feature in one place of the image, given the presence of another feature within some other place of the image. To evaluate the effectiveness of our approach in low-data scenarios, we train our causality-driven architecture in a One-shot learning scheme, where we propose a new meta-learning procedure entailing meta-training and meta-testing tasks that are designed using related classes but at different levels of granularity. We conduct binary and multi-class classification experiments on a publicly available dataset of prostate MRI images. To validate the effectiveness of the proposed causality-driven module, we perform an ablation study and conduct qualitative assessments using class activation maps to highlight regions strongly influencing the network's decision-making process. Our findings show that causal relationships among features play a crucial role in enhancing the model's ability to discern relevant information and yielding more reliable and interpretable predictions. This would make it a promising approach for medical image classification tasks.
## 1 Introduction
Building models that automatically perform diagnoses from medical data could revolutionize a patient's clinical pathway, especially in oncology, minimizing invasive procedures and maximizing the chances of cures for the most severe cases. The implementation of such models in the medical field is severely limited by data availability and the heavy domain shifts that plague the data, which makes the models non-generalizable. Especially in the magnetic resonance imaging (MRI) domain, different vendors yield images with different features, and that prevents the model from being able to generalize well. In a classical fully-supervised setting, the problem could be overcome by training a model with large amounts of data covering the full data distribution. In practice, however, this is not always possible, due to the limited amounts of labeled/annotated data available in the medical imaging domain.
Especially when dealing with limited data, one may wish to make an automated recognition model focus on the most discriminative regions of the image instead of paying attention to side details. Traditional convolutional neural networks (CNN) and transformer networks, for instance, can discover features hidden within input data together with their mutual co-occurrence. However, they are weak at discovering and making explicit hidden causalities between the features, which could be the reason behind a particular outcome. Indeed, image classification models are expected to distinguish between the classes of images taking into account the causal relationships between the features from the images, in a way that might resemble how humans accomplish the task. For this reason, several bridges between the field of causality and computer vision are emerging in the literature [14, 1].
A particular case would be discovering hidden causalities among objects in an image dataset. Unlike tabular data, which have a structured nature, when it comes to images, their representation does not include any explicit indications regarding objects or patterns. Instead, individual pixels convey a particular scene visually, and image datasets do not provide labels describing the objects' dispositions. Due to this, supervised machine learning as such cannot approach
them. Additionally, unlike video frames, from a single image one may not see the dynamics of appearance and change of the objects in the scene. Therefore, a priori information as a hint for causal discovery is absent.
To approach the problem of learning hidden causalities within images, Lopez-Paz _et al_. [7] suggest the "causal disposition" concept as more primitive than interventional causation (do-calculus) and causal graphs from Pearl's approach [11, 12]. However, it could be the only way to proceed with limited a priori information. In their view, by counting the number \(C(A,B)\) of images in which the causal dispositions of artifacts \(A\) and \(B\) are such that \(B\) disappears if one removes \(A\), one can assume that artifact \(A\) causes the presence of artifact \(B\) when \(C(A,B)\) is greater than the converse \(C(B,A)\). As a trivial example, imagine the image of a car on a bridge. Now, if we were to remove the car, then this would keep the image realistically looking (i.e., scene consistency), since an observer may see similar scenes among other images. Conversely, if we were to remove the bridge, then this would make the scene inconsistent, as that scenario is likely never seen among other images (i.e., flying cars). Therefore, we may assume that the presence of the bridge has some effect on the presence of the car. This concept leads to the intuition that any causal disposition induces a set of asymmetric causal relationships between the artifacts from an image (features, object categories, etc.) that represent (weak) causality signals regarding the real-world scene. To automatically infer such an asymmetric causal relationship from the statistics observed in an image dataset would be a meeting point with a machine vision system.
In this work, we combine a regular CNN with a causality-extraction module to investigate the features and causal relationships between them extracted during training. We build on ideas from [18] who suggest a way to compute such an asymmetric measure for possible causal relationships within images, and we propose a new scheme based on feature maps enhancement to enable "causality-driven" CNNs to classify images taking into account hidden causalities within them. Our hypothesis is that it would be possible and reasonable to get some weak causality signals from the individual images of some medical datasets without adding primary expert knowledge, and leveraging them to better guide the learning phase. Ultimately, a model trained in such a manner would be able to exploit weak causal dispositions of objects in the image scene to distinguish lesion grades even with limited data and possibly domain shift on the test set.
To evaluate how these causality-driven networks could behave in a low-data regime, we perform the training in a Few-shot learning (FSL) manner, and in particular in One-shot learning (OSL). Here, we propose a novel training scheme in which we design meta-training and meta-testing tasks having related classes (i.e., the same clinical problem is addressed) but at different granularity levels. To perform such experiments, we exploit the Deep Brownian Distance Covariance (DeepBDC) [19] method.
Our paper is structured as follows. After citing relevant works related to our topic in Sec. 2, we present the rationale behind causality-driven CNNs and our network proposition in Sec. 3.1, together with a description of the DeepBDC method of our choice in Sec. 3.2. Later, in Sec. 4, we dive into the details of our experiments settings, regarding both the dataset used and the meta-training and meta-testing schemes. Finally, in Sec. 5 and Sec. 6, we provide the results of our experiments and discuss our findings, summarizing our key conclusions.
## 2 Related works
Several approaches to integrating causality into FSL have been proposed in the literature, leading to different directions and applications. One notable example is the work by Yue _et al_. [22], where they leverage causality to demonstrate that pre-training in FSL can act as a confounder, resulting in spurious correlations between samples in the support set and their corresponding labels. To address this issue, they propose a novel FSL paradigm in which they perform causal interventions on the Structural Causal Model (SCM) of many-shot learning employing a backdoor adjustment approach. Based on that work, Li _et al_. [6] propose a method to mitigate the influence of confounders during the pre-training phase of the prototypical network [17]. They accomplish this by stratifying the pre-trained knowledge using a backdoor adjustment based on causal intervention. Specifically, the backdoor adjustment operations are applied in the metric layer of the prototypical network. The feature vectors of the class prototype and the query set are divided into N equal-size disjoint subsets, and the corresponding subsets are fed into their respective classifiers. The final prediction result is obtained by averaging the prediction probabilities from the N classifiers. Furthermore, in the work by Yang _et al_. [20], the authors propose a method to enhance the robustness and generalizability of few-shot text classification. They achieve this by extracting causal associations from text using a causal representation framework for FSL. The process involves performing causal interventions to generate new data with label-relevant information for each input. The original and augmented texts turned into feature representations are fed into a factorization module, which enforces the separation and joint independence of the representations from non-causal factors. Finally, the feature representations are utilized by a classification module for making predictions.
## 3 Methods
### Causality-driven CNNs
Preliminaries. In automatic image recognition, deep neural network classifiers obtain the essential features required for classification not directly from the pixel representation of the input image but through a series of convolution and pooling operations. These operations are designed to capture meaningful features from the image. Convolution layers are responsible for summarizing the presence of specific features in the image and generating a set of feature maps accordingly. As these maps are sensitive to the spatial location of features in the image, pooling is employed to consolidate the presence of particular features within groups of neighboring pixels in square-shaped sub-regions of the feature map.
Causality signals in images. When a feature map \(F^{i}\) contains only non-negative numbers (e.g., thanks to ReLU functions) and is normalized in the interval \([0,1]\), we can interpret its values as probabilities of that feature to be present in a specific location, for instance, \(F^{i}_{r,c}\) is the probability that the feature \(i\) is recognized at coordinates \(r,c\). By assuming that the last convolutional layer outputs and localizes to some extent the object-like features, we may modify the architecture of a CNN such that the \(n\times n\) feature maps (\(F^{1},F^{2},\dots F^{k}\)) obtained from that layer are fed into a new module that computes pairwise conditional probabilities of the feature maps. The resulting \(k\times k\) causality map would represent the causality estimates for the features.
Computing causality maps. Given a pair of feature maps \(F^{i}\) and \(F^{j}\) and the formulation that connects conditional probability with joint probability, \(P(F^{i}|F^{j})=\frac{P(F^{i},F^{j})}{P(F^{j})}\), following [18], we heuristically estimate this quantity for pairs of feature maps by adopting two possible methods, namely _Max_ and _Lehmer_. The _Max_ method considers the joint probability to be the maximal presence of both features in the image (each one in its location):
\[P(F^{i}|F^{j})=\frac{(\max_{r,c}F^{i}_{r;c})\cdot(\max_{r,c}F^{j}_{r,c})}{\sum _{r,c}F^{j}_{r,c}} \tag{1}\]
On the other hand, the _Lehmer_ method entails computing
\[P(F^{i}|F^{j})_{p}=\frac{LM_{p}(F^{i}\times F^{j})}{LM_{p}(F^{j})} \tag{2}\]
where \(F^{i}\times F^{j}\) is a vector of \(n^{4}\) pairwise multiplications between each element of the two \(n\times n\) feature maps, while \(LM_{p}\) is the generalized Lehmer mean function [2] with parameter \(p\), which is an alternative to power means for interpolating between minimum and maximum of a vector \(x\) via harmonic mean (\(p=-2\)), geometric mean (\(p=-1\)), arithmetic mean (\(p=0\)), and contraharmonic mean (\(p=1\)): \(LM_{p}(x)=\frac{\sum_{k=1}^{n}x_{k}^{p}}{\sum_{k=1}^{n}x_{k}^{p-1}}\). Equations 1 and 2 could be used to estimate asymmetric causal relationships between features \(F^{i}\) and \(F^{j}\), since, in general, \(P(F^{i}|F^{j})\neq P(F^{j}|F^{i})\). By computing these quantities for every pair \(i\) and \(j\) of the \(k\) feature maps, the \(k\times k\) causality map is obtained. We interpret asymmetries in such probability estimates as weak causality signals between features, as they provide some information on the cause-effect of the appearance of a feature in one place of the image, given the presence of another feature within some other places of the image. Accordingly, a feature may be deemed to be the reason for another feature when \(P(F^{i}|F^{j})>P(F^{j}|F^{i})\), that is (\(F^{i}\to F^{j}\)), and vice versa.
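As a concrete illustration, the following PyTorch sketch computes such a causality map for a stack of non-negative feature maps with both heuristics; it is a minimal re-implementation written for clarity, not the code released with [18] or used in our experiments, and it assumes strictly positive activations when negative Lehmer exponents are used.

```python
import torch

def causality_map_max(F, eps=1e-8):
    """Eq. (1): entry (i, j) estimates P(F^i | F^j) for feature maps F of shape (k, n, n)."""
    flat = F.flatten(1)                           # (k, n*n)
    maxes = flat.max(dim=1).values                # max_{r,c} F_{r,c} for each map
    sums = flat.sum(dim=1)                        # sum_{r,c} F_{r,c} for each map
    return (maxes[:, None] * maxes[None, :]) / (sums[None, :] + eps)

def lehmer_mean(x, p, eps=1e-8):
    return (x.pow(p).sum() + eps) / (x.pow(p - 1).sum() + eps)

def causality_map_lehmer(F, p):
    """Eq. (2): same estimate using the generalized Lehmer mean with parameter p."""
    flat = F.flatten(1)
    k = flat.shape[0]
    cmap = torch.empty(k, k)
    for i in range(k):
        for j in range(k):
            cross = torch.outer(flat[i], flat[j]).flatten()   # the n^4 pairwise products
            cmap[i, j] = lehmer_mean(cross, p) / lehmer_mean(flat[j], p)
    return cmap

F = torch.rand(8, 4, 4)                # e.g. 8 feature maps of size 4x4 in [0, 1]
print(causality_map_max(F).shape)      # torch.Size([8, 8])
```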
Embedding causality in regular CNNs. Once the causality map is computed, it can be embedded into the basic CNN architecture. Terziyan and Vitko [18] flatten these suggested causality estimates, concatenate them to the set of flattened feature maps, and let the CNN learn how these estimates might influence image classification. Differently from them, we exploit the causality map in a new way and get a weighting vector to enhance or penalize the single feature maps during training.
Causality weights. At each epoch, as training progresses, we look for asymmetries between elements on opposite sides of the main diagonal of the causality map. Some features may be more often found on the left side of the arrow (i.e., \(F\rightarrow\)) than on the right side (i.e., \(\to F\)). Therefore, we use such learned causalities to compute causality weights to assign a degree of importance to each feature map. Specifically, for each feature map, we take as its weighting factor the difference between the number of times it was found to cause other feature maps and the number of times it was found to be caused by another feature map. Computing such a quantity for every feature results in a vector of causality factors, which is then passed through a ReLU activation to set to zero all the negative elements.
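In code, this weighting step amounts to comparing the causality map with its transpose; a minimal sketch (not the exact training implementation) is:

```python
import torch

def causality_factors(cmap):
    """Vector of non-negative weights from a (k, k) causality map.

    cmap[i, j] estimates P(F^i | F^j); feature i is counted as a cause of
    feature j when cmap[i, j] > cmap[j, i] (i.e. F^i -> F^j)."""
    causes = cmap > cmap.T                     # causes[i, j] is True when F^i -> F^j
    n_cause = causes.sum(dim=1).float()        # times feature i appears left of the arrow
    n_effect = causes.sum(dim=0).float()       # times feature i appears right of the arrow
    return torch.relu(n_cause - n_effect)      # negative balances are set to zero
```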
Models. We propose two variants of the model:
* **mulcat** (_multiply and concatenate_). The non-negative causality factors multiply the corresponding feature maps, resulting in a causality-driven version of these feature maps. In this enhanced version, each feature map is strengthened according to its causal influence within the image's scene. Those features are merged with the original features by concatenation along the channel axis and form the final feature set that influences the classification outcomes.
* **mulcatbool**. Same as the previous, but before multiplication, the factors undergo boolean thresholding where all the non-zero factors are assigned a new weight of \(1\), while \(0\) otherwise.
The first method weighs features more according to their causal importance (a feature that is _cause_\(10\) times more than another receives \(10\) times more weight). In contrast, the second method is more conservative and assigns all features that are most often _causes_ the same weight. We experiment with both of them and compare their results.
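A sketch of the corresponding enhancement step (for a single image, without the batch dimension) follows; it illustrates the two variants rather than reproducing the exact training code.

```python
import torch

def enhance_and_concat(F, factors, boolean=False):
    """'mulcat' / 'mulcatbool' feature enhancement for one image.

    F: (k, n, n) feature maps; factors: (k,) non-negative causality factors.
    With boolean=True every non-zero factor is replaced by 1."""
    w = (factors > 0).float() if boolean else factors
    enhanced = F * w[:, None, None]             # one weight per feature map
    return torch.cat([F, enhanced], dim=0)      # 2k maps along the channel axis

F = torch.rand(512, 4, 4)
factors = torch.relu(torch.randn(512))          # stand-in for learned causality factors
print(enhance_and_concat(F, factors).shape)     # torch.Size([1024, 4, 4])
```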
### One-shot learning
In standard FSL, the training process occurs in episodes or tasks. Each task is formulated as an _\(N\)-way_\(K\)-_shot_ classification problem, where \(N\) represents the number of classes, and \(K\) is the number of _support_ images per class. We refer to \(Q\) as the number of _query_ images per class. For our experiments, we specifically focused on _N-way 1-shot_ classification, and we employed the DeepBDC method introduced by Xie _et al_. [19]. In particular, we utilized the meta-learning implementation of DeepBDC, known as Meta DeepBDC. DeepBDC is a metric-based FSL method that employs the BDC as the distance measure between prototypes. The BDC is defined as the Euclidean distance between the joint characteristic function and the product of the marginal of two random variables \(X\in\mathbb{R}^{p}\) and \(Y\in\mathbb{R}^{q}\). Following [19], we provide a more formal definition of BDC:
\[\rho(X,Y)=\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{q}}\frac{|\Phi_{XY}(t,s)-\Phi_{X}(t)\Phi_{Y}(s)|^{2}}{c_{p}c_{q}\|t\|^{1+p}\|s\|^{1+q}}\,dt\,ds, \tag{3}\]
where \(\Phi_{X}(t)\) and \(\Phi_{Y}(s)\) are the marginal characteristic functions of \(X\) and \(Y\), respectively, \(\Phi_{XY}(t,s)\) is the joint characteristic function of the two random variables, and \(c_{p}\) is defined as \(c_{p}=\pi^{(1+p)/2}/\Gamma((1+p)/2)\), where \(\Gamma\) is the complete gamma function. DeepBDC has demonstrated higher performance compared to state-of-the-art methods while being straightforward to deploy, since it can be implemented as a parameter-free spatial pooling layer that accepts feature maps as input and provides a BDC matrix.
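To fix ideas, the sketch below computes a double-centred distance matrix of the kind used by DeepBDC as an image representation, together with the inner-product similarity between two such matrices; it follows the general BDC recipe and is not the reference implementation of [19].

```python
import torch

def bdc_matrix(F):
    """Double-centred pairwise-distance matrix of the channel vectors of F.

    F: (c, n, n) feature maps, treated as c vectors of length n*n."""
    X = F.flatten(1)
    A = torch.cdist(X, X, p=2)                       # (c, c) Euclidean distances
    return A - A.mean(dim=0, keepdim=True) - A.mean(dim=1, keepdim=True) + A.mean()

def bdc_similarity(F_query, F_prototype):
    """Inner product of BDC matrices, used as the metric between query and prototype."""
    return (bdc_matrix(F_query) * bdc_matrix(F_prototype)).sum()
```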
## 4 Experiments
### Dataset and pre-processing
In our study, we conducted meta-training, meta-validation, and meta-testing using the publicly available \(1500\)-acquisition dataset from the PI-CAI challenge [13]. This dataset comprises mpMRI acquisitions of the prostate, and for our experiments, we focused exclusively on cancerous patients. In particular, we selected only T2-weighted (T2w) images containing lesions by exploiting the expert annotations provided in the dataset. The dataset contained biopsy reports expressing the severity of each lesion as Gleason Score (GS). The pathologists assign a score of \(1\) to \(5\) to the two most common patterns in the biopsy specimen based on the tumor severity. The two grades are then added together to determine the GS, which can assume all the combinations of scores from "\(1\)+\(1\)" to "\(5\)+\(5\)". Additionally, the dataset included the assigned GS's group affiliation, defined by the International Society of Urological Pathology (ISUP) [4], ranging from \(1\) to \(5\), which provides the tumor severity information at a higher granularity level. From an even more high-level perspective, lesions with a GS \(\leq 3+3\) (ISUP = \(1\)) and with GS \(=3+4\) (ISUP = \(2\)) are considered low-grade (LG) tumors, as patients with such lesions typically undergo active surveillance [9]. Conversely, lesions with GS \(>3+4\) (ISUP \(>\)\(2\)) are high-grade (HG) tumors, as treatment is foreseen [9]. In this study, we considered only lesions whose GS was \(\geq 3+4\) (ISUP \(\geq 2\)), as lesion annotations were not provided for ISUP-\(1\) lesions. As a result, we had eight classes of GS and four classes of ISUP in our dataset. The total number of images we used was \(2049\) (from \(382\) patients), which we divided into training, validation, and testing subsets. Specifically, we used \(1611\) images for training, \(200\) for validation, and \(238\) for testing. During the splitting process, we ensured patient stratification, i.e., all the images of the same patient were grouped in the same subset, avoiding any data leakage. To replicate a realistic scenario involving distinct distributions in training and testing data, we utilized data from two different vendors: SIEMENS vendor data for meta-training and Philips vendor data for both meta-validation and meta-testing. Indeed, we chose the same validation and test distributions since, as highlighted by Setlur _et al_. [16], using validation samples that are not independent and identically distributed with the test samples can lead to unreliable results when determining the optimal model, specifically the one that maximizes performance on the test set.
As for the data pre-processing, we utilized the provided whole prostate segmentation to extract the mask centroid for each slice. We standardized the field of view (FOV) at \(100\) mm in both \(x\) (\(FOV_{x}\)) and \(y\) (\(FOV_{y}\)) directions to ensure consistency across all acquisitions and subsequently cropped each image based on this value around the found centroid. To determine the number of rows (\(N_{rows}\)) and columns (\(N_{cols}\)) corresponding to the fixed FOV, we utilized the pixel spacing in millimeters along the x-axis (denoted as \(px\)) and the y-axis (denoted as \(py\)). The relationships used to derive the number of columns and rows are \(N_{cols}=\frac{FOV_{x}}{px}\) and \(N_{rows}=\frac{FOV_{y}}{py}\), respectively. Additionally, we resized all images to a uniform matrix size of \(128\times 128\) pixels to maintain a consistent pixel count. Finally, we performed image normalization using an in-volume method. This involved calculating the mean and
standard deviation (SD) of all pixels within the volume acquisition and normalizing each image based on these values using a z-score normalization technique.
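A simplified version of this cropping and normalisation step is sketched below; nearest-neighbour resampling is used here purely for brevity, and the function and variable names are ours, not those of the PI-CAI tooling.

```python
import numpy as np

def crop_fixed_fov(image, centroid_rc, px, py, fov_mm=100.0, out_size=128):
    """Crop a slice to a fixed FOV around the prostate-mask centroid and resize it."""
    n_cols = int(round(fov_mm / px))                 # N_cols = FOV_x / px
    n_rows = int(round(fov_mm / py))                 # N_rows = FOV_y / py
    r0 = max(int(centroid_rc[0]) - n_rows // 2, 0)
    c0 = max(int(centroid_rc[1]) - n_cols // 2, 0)
    crop = image[r0:r0 + n_rows, c0:c0 + n_cols]
    rows = np.linspace(0, crop.shape[0] - 1, out_size).astype(int)
    cols = np.linspace(0, crop.shape[1] - 1, out_size).astype(int)
    return crop[np.ix_(rows, cols)]                  # nearest-neighbour resampling

def zscore_in_volume(volume):
    """In-volume normalisation: z-score every voxel with whole-acquisition statistics."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)
```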
### Classification experiments
In our study, we conducted experiments on two classification scenarios: (i) distinguishing between LG and HG lesions and (ii) ISUP grading. For each scenario, we carefully designed meta-training and meta-testing tasks. Most FSL approaches typically involve using unrelated classes between meta-training and meta-testing tasks [3, 8, 10]. Instead, we propose a different approach where we focus on the same clinical problem but at varying levels of granularity. Specifically, during meta-training, we designed more challenging tasks, requiring the model to distinguish between classes with higher levels of granularity. Conversely, during meta-testing, we provided the model with easier classification tasks involving higher-level classification. Our rationale is that this approach would lead to higher performance on the specific task of interest performed during meta-testing, as the model would find it relatively easier to execute due to its exposure to more complex tasks during meta-training. Below we provide a detailed explanation of how we designed our meta-training and meta-testing tasks for both experiments.
In the first scenario, we labeled the meta-training data according to the four ISUP classes. The model performed binary classification in each meta-training task between two randomly selected classes from the four provided. However, during meta-testing, the model was tasked with a higher-level classification, namely, distinguishing between LG and HG lesions. For ease of reference, we will refer to this experiment as the _2-way_ experiment. In the second scenario, we labeled the meta-training data based on the GS. Each training task required the model to distinguish between four randomly selected GS classes out of the total eight. For meta-validation and meta-testing, we labeled each patient based on the ISUP tumor severity score and made the model distinguish across these four classes. Henceforth, we will refer to this experiment as _4-way_ experiment. We summarized the labeling procedure for the two experiments in Table 1.
In both scenarios, we employed a one-shot setting for both meta-training and meta-testing. This means that the model only observed a single example per class in the support set of each task. However, during the evaluation phase, we expanded the query set by utilizing ten samples per class in both meta-training and meta-testing tasks.
### Architecture and training
Many widely used architectures for large-scale image recognition incorporate an adaptive average pooling layer with an output size of \(1\times 1\) placed just before the classifier. Its primary advantage is the ability to accommodate input images of varying sizes, as it automatically adjusts its parameters (such as kernel size, stride, and padding) to ensure that the output is consistently \(1\times 1\) in shape. This dimensionality reduction, however, conflicts with the 2D nature of feature maps for computing causalities. Therefore, since we chose the ResNet18 as the backbone architecture in our work, we substituted its _AdaptiveAvgPool2D_ layer with an identity layer in our experiments. We performed an optimization over the method by which computing the causality maps (i.e., _Max_ or _Lehmer_) and, for the _Lehmer_ case, over six different values of its parameter \(p\): [\(-100,-2,-1,0,1,100\)]. Accordingly, we trained seven models for each causality setting (i.e., _mulcat_ or _mulcatbool_), resulting in \(14\) causality-driven models plus one baseline model for each experiment (i.e., 2-way and 4-way). For each of the two causality settings, we chose the best-performing model on the meta-validation set.
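In PyTorch terms, this modification amounts to replacing the pooling (and, for convenience, the classification head) of a torchvision ResNet18 with identity layers, as sketched below; the 3-channel toy input and the absence of pretrained weights are assumptions of this example.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(weights=None)
backbone.avgpool = nn.Identity()     # keep the 2-D structure of the last feature maps
backbone.fc = nn.Identity()          # remove the classification head

x = torch.randn(2, 3, 128, 128)      # toy batch of 128x128 inputs
feats = backbone(x)                  # flattened to (2, 512*4*4) by the forward pass
feats = feats.view(-1, 512, 4, 4)    # restore 512 feature maps of shape 4x4
print(feats.shape)                   # torch.Size([2, 512, 4, 4])
```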
Given an input image, the causality-driven ResNet18 extracts \(512\) bidimensional feature maps of shape \(4\times 4\). While those features proceed along the main branch of the network, a copy of them enters the causality module where for each pair, we extract their conditional probabilities by either applying Eq. 1 or Eq. 2 depending on the causality method of choice. Starting from the resulting \(512\times 512\) causality map, the vector of \(512\) causality factors is obtained according to the model variant of choice (i.e., _mulcat_ or _mulcatbool_) and then multiplied for the corresponding feature maps. Then, after concatenation of two such feature sets, we obtain a set of \(1024\) feature maps of shape \(4\times 4\) for each input image. Figure 1 shows the proposed causality-driven network. Although the training is performed task by task, here we represented the functioning of our method for just one input image.
At this point, the final set of feature maps is used to calculate the image representations. Following the Prototypical Networks [17] approach, the classification is performed by computing the BDC between the prototype of each support class (calculated as the mean of the BDC matrices of the support images of that class) and each query image representation. To infuse the model with robustness to different data selections, we performed \(600\) meta-training tasks, \(600\) meta-validation tasks, and \(600\) meta-testing tasks for each experiment. As our loss function and optimizer, we employed the AUC margin loss (AUCM) [21] and the Proximal epoch stochastic method (PESG) [5], respectively. These were chosen to maximize the Area Under the ROC curve (AUROC), which is more stable than accuracy with respect to dataset imbalance. In addition, we performed our experiments with the following hyperparameters: initial learning rate = \(1e-2\), weight decay = \(1e-2\), number of epochs = \(100\), decay epochs: [20,80]. At each decay epoch, the learning rate value is divided by \(10\).
### Evaluation metrics
In our evaluation, we utilized the AUROC as the performance metric for all our experiments. For the \(2\)-way experiment, we computed the classical binary AUROC by considering the HG as the positive class. In the case of the \(4\)-way experiment, instead, we calculated the AUROC using the _One-vs-rest_ setting. This approach involves computing the AUROC for each class against the rest of the classes independently. In addition, we evaluated the binary classification performance of the models in the \(4\)-way experiment by computing the AUROC of ISUP class \(2\) versus all the rest.
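For reference, both metrics can be obtained with scikit-learn as sketched below; the toy arrays and the convention that class index 0 stands for ISUP 2 are assumptions of this example.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 1, 2, 3, 1, 2, 0, 3])        # toy ISUP labels (0 standing for ISUP 2)
y_score = np.random.dirichlet(np.ones(4), size=8)  # toy per-class probabilities

multiclass_auroc = roc_auc_score(y_true, y_score, multi_class="ovr")       # one-vs-rest
isup2_vs_rest = roc_auc_score((y_true != 0).astype(int), 1.0 - y_score[:, 0])
```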
### Visualizing the impact of causality
To test the hypothesis that a causally trained model can learn more discriminative representations for image classes, we performed post hoc explainability evaluations. Investigating the variability of visualization results with different choices of post hoc explainable AI (XAI) methods is beyond the scope of our work; therefore, we employed the popular Grad-CAM [15], which belongs to the broad literature on class activation maps (CAM). The basic idea behind Grad-CAM is to utilize the gradients flowing back from a chosen layer of a CNN to understand which parts of the image contribute the most to the activation of a particular class. In this case, we chose to compute the Grad-CAM heatmaps at the BDC module level, which takes as input the final set of feature maps. In addition, to make a fair comparison, we computed the heatmaps w.r.t. the ground-truth target of each image and only in the cases for which the prediction was performed correctly by both the non-causality-driven model and the mulcat and mulcatbool models.
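A generic Grad-CAM routine of the kind used here is sketched below, with hooks on an arbitrary layer of a model returning class scores; adapting the score to the metric-based head, where it is the similarity to a class prototype, is left implicit.

```python
import torch

def grad_cam(model, layer, x, target_idx):
    """Minimal Grad-CAM: channel-wise gradient weights applied to the activations of `layer`."""
    store = {}
    h_fwd = layer.register_forward_hook(lambda m, i, o: store.update(act=o))
    h_bwd = layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))
    score = model(x)[0, target_idx]            # score of the target class / prototype
    model.zero_grad()
    score.backward()
    h_fwd.remove(); h_bwd.remove()
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)   # GAP of the gradients
    cam = torch.relu((weights * store["act"]).sum(dim=1))    # weighted sum over channels
    return cam / (cam.max() + 1e-8)                          # normalised heatmap
```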
## 5 Results
### Performance of causality-driven CNNs
The main results of our analysis are reported in Table 2. We reported all those values as mean and SD AUROC
\begin{table}
\begin{tabular}{c|c|c}
**Experiment** & **Splitting** & **Labels** \\ \hline \multirow{3}{*}{2-way} & Meta-training & ISUP \(2\), ISUP \(3\), ISUP \(4\), ISUP \(5\) \\ \cline{2-3} & Meta-validation & **LG** (ISUP \(2\)) - **HG** (ISUP \(3\), ISUP \(4\), ISUP \(5\)) \\ \cline{2-3} & Meta-test & **LG** (ISUP \(2\)) - **HG** (ISUP \(3\), ISUP \(4\), ISUP \(5\)) \\ \hline \multirow{3}{*}{4-way} & Meta-training & GS \(3+4\), GS \(4+3\), GS \(4+4\), GS \(3+5\), GS \(5+3\), GS \(4+5\), GS \(5+4\), GS \(5+5\) \\ \cline{2-3} & Meta-validation & ISUP \(2\), ISUP \(3\), ISUP \(4\), ISUP \(5\) \\ \cline{1-1} \cline{2-3} & Meta-test & ISUP \(2\), ISUP \(3\), ISUP \(4\), ISUP \(5\) \\ \end{tabular}
\end{table}
Table 1: A summary of the labeling procedure according to our training approach. ISUP = International Society of Urological Pathology, LG = Low Grade, HG = High Grade, GS = Gleason Score.
Figure 1: Causality-Driven ResNet18 for prostate cancer Grading from MRI. Here, the _causality map_ can be computed with one of _Max_ and _Lehmer_ options, while the _causality factors_ can be computed using either _mulcat_ or _mulcatbool_ methods. For visualization purposes, the size of the box representing the causality map has been reduced.
across all the \(600\) meta-test tasks. Concerning the \(2\)-way experiment (i.e., LG vs HG), the baseline model achieved \(0.539\) (\(0.141\)), and embedding the causality module led the model to improve, obtaining \(0.550\) (\(0.144\)) and \(0.556\) (\(0.141\)) AUROC for the _mulcat_ and _mulcatbool_ variants, respectively. In particular, both these causality-driven variant results were obtained with _Lehmer_ causality setting, using a _Lehmer_ parameter of -\(100\). Similar behaviour, with more pronounced improvement, was observed for the \(4\)-way experiment (i.e., ISUP \(2-5\)), where we obtained \(0.585\) (\(0.068\)) multi-class AUROC for the non-causality-driven model, whereas the _mulcat_ and _mulcatbool_ variants achieved \(0.611\) (\(0.069\)) and \(0.614\) (\(0.067\)), respectively. Again, both best-performing mulcat and mulcatbool variants were obtained employing the _Lehmer_ setting, with _Lehmer_ parameters equal to \(1\) and -\(2\), respectively.
Table 2 also shows the results of an ablation study. Indeed, since in the causality-driven implementations it is the _causality factors_ vector that ultimately determines which feature maps are enhanced (and by how much), we modify that vector to weight features randomly rather than in a principled way based on the causality map. The ablation variants of the _mulcat_ and _mulcatbool_ models that we realized are:
* **ablation mulcat**. The \(1\times k\) vector of causality factors (i.e., weights) is replaced with a random vector of the same size with integer values ranging from \(0\) (a feature map is never _cause_ of another feature) and \(k-1\) (it is _cause_ of every other feature).
* **ablation mulcatbool**. Similar to the previous, the values of the weights are randomly assigned either to \(0\) or to \(1\).
### Visualizing activation maps
As a result of the post hoc explainability evaluations, we obtain the visualizations shown in Figure 2. The first row (\(a\) to \(e\)) regards a test case from models trained in the \(2\)-way setting, while the second row (\(f\) to \(j\)) pertains to a case from \(4\)-way models. From left to right, the columns represent the input test image, the annotation of the lesion in terms of binary masks, the Grad-CAM activation for the baseline models (not causality-driven), the Grad-CAM activation for the _mulcat_ models, and the Grad-CAM activation for the _mulcatbool_ models.
## 6 Discussion and Conclusion
In this study, we investigated the impact of integrating a new causality-extraction module into traditional CNNs to enhance classification performance. We trained this causality-driven model using an OSL approach, leveraging meta-learning conditions with the MetaDeepBDC model [19]. We aimed to assess the effectiveness of such a model in situations where only a few samples are available for training, a challenge frequently encountered in medical image analysis.
In Pearl's terms, our work regards the first rung of the ladder of causation, where reasoning is limited to conditional probabilities based on observational datasets. However, we aimed to explore whether this approach could yield improvements in a scenario involving image data (rather than structured tabular data), and no prior knowledge of the data generation process. Our findings demonstrate that incorporating a causality-driven module into our model leads to enhanced performance compared to the baseline. This behavior is evident in both the \(2\)-way and \(4\)-way experiments. In particular, in the \(4\)-way experiment, the causality module provided a 3% improvement over the baseline in terms of the multi-class AUROC and about \(13\)% improvement in terms of ISUP \(2\) vs. rest AUROC.
We additionally validated our numerical results both quantitatively and qualitatively. Quantitatively, we performed ablation studies on the actual impact of the causality factors on producing valuable causality-driven feature maps. As expected, when the causal weights are replaced with random vectors, the accuracy of the final model is worse than its causally-driven counterpart (see Table 2). This seems to suggest that, albeit weak, the causality signals learned during training help the network. Qualitatively, we generated Grad-CAM heatmaps to highlight the regions of the input image that strongly influence the network's output. Figure 2 presents examples for both the \(2\)-way and the \(4\)-way experiments. In both cases, we calculated the heatmaps w.r.t. the ground truth target when all three types of models correctly classified the images. The heatmaps reveal distinct patterns between the baseline model and the causality-driven models. The former tends to focus on a larger area of the image, including regions outside the prostate and lesion boundaries. Conversely, the causality-driven models concentrate on smaller areas, predominantly encompassing the prostate and the lesion. Figure 2 (c-e), which depicts the \(2\)-way experiment, shows that the baseline model (Figure 2 c) primarily attends to the left half of the image, encompassing the lesion as well as non-prostate tissues. In contrast, the _mulcat_ version (Figure 2 d) exhibits a more focused heatmap, highlighting mainly the prostate and a portion of the lesion. The _mulcatbool_ case (Figure 2 e) further refines the focus by emphasizing the prostate and a larger portion of the lesion. Similarly, in the \(4\)-way experiment, the baseline model (Figure 2 h) pays attention to the left half of the image. In contrast, the _mulcat_ and _mulcatbool_ versions (Figure 2 i-j) prioritize the lesion's immediate surroundings. Although all three models produce accurate predictions, the heatmaps demonstrate that the causality-driven module enables better localization of relevant regions, providing more
reliable explanations for the model's predictions.
Comparing the \(4\)-way experiment to the \(2\)-way experiment, the former produced better classification results. Indeed, despite being a more complex task, the mean AUROC across all classes is higher. We argue that two factors contribute to this outcome, both associated with the meta-training phase. Firstly, in the \(4\)-way experiment, the model encounters a more diverse range of tasks, as the four classes can be selected from a pool of eight distinct options. In contrast, the \(2\)-way experiment encompasses only four classes in total. Secondly, for the \(4\)-way experiment, in each meta-training task, the model is trained to distinguish a higher number of classes, representing a more challenging task w.r.t. the binary case. Consequently, the model is better equipped to handle the testing phase, resulting in improved performance. This superiority becomes even more evident when examining the models' performance in the \(4\)-way experiment in classifying ISUP class \(2\) (representing LG lesions) against all other classes (representing HG lesions). Notably, when the _mulcat_ and _mulcatbool_ causality-driven modules are embedded into the model, the AUROC value for this particular task increases by almost \(16\)%.
Our work comes with several limitations. We only used ResNet18 as our backbone network, and this might have limited the opportunity to find better-suited architectures able to detect finer details in the image and consequently extract more informative latent representations. In addition, we performed our experiments only in an OSL setting, limiting the classification performance of our models. In fact, we must note that the performance values obtained are not yet sufficient for clinical practice. Finally, we validated our method on only one dataset.
Despite that, our findings indicate that integrating a causality-driven module into a classification model can enhance performance, even with severe data limitations, which are common in medical imaging. The causality-driven approach not only improves overall classification results but also helps the model focus more accurately on the critical regions of the image, leading to more reliable and robust predictions. This aspect is particularly critical in medical imaging, where precise and reliable classification is crucial for effective diagnosis and treatment planning.
|
2301.13368 | Misspecification-robust Sequential Neural Likelihood for
Simulation-based Inference | Simulation-based inference techniques are indispensable for parameter
estimation of mechanistic and simulable models with intractable likelihoods.
While traditional statistical approaches like approximate Bayesian computation
and Bayesian synthetic likelihood have been studied under well-specified and
misspecified settings, they often suffer from inefficiencies due to wasted
model simulations. Neural approaches, such as sequential neural likelihood
(SNL) avoid this wastage by utilising all model simulations to train a neural
surrogate for the likelihood function. However, the performance of SNL under
model misspecification is unreliable and can result in overconfident posteriors
centred around an inaccurate parameter estimate. In this paper, we propose a
novel SNL method, which through the incorporation of additional adjustment
parameters, is robust to model misspecification and capable of identifying
features of the data that the model is not able to recover. We demonstrate the
efficacy of our approach through several illustrative examples, where our
method gives more accurate point estimates and uncertainty quantification than
SNL. | Ryan P. Kelly, David J. Nott, David T. Frazier, David J. Warne, Chris Drovandi | 2023-01-31T02:28:18Z | http://arxiv.org/abs/2301.13368v2 | # Misspecification-robust Sequential Neural Likelihood
###### Abstract
Simulation-based inference (SBI) techniques are now an essential tool for the parameter estimation of mechanistic and simulatable models with intractable likelihoods. Statistical approaches to SBI such as approximate Bayesian computation and Bayesian synthetic likelihood have been well studied in the well specified and misspecified settings. However, most implementations are inefficient in that many model simulations are wasted. Neural approaches such as sequential neural likelihood (SNL) have been developed that exploit all model simulations to build a surrogate of the likelihood function. However, SNL approaches have been shown to perform poorly under model misspecification. In this paper, we develop a new method for SNL that is robust to model misspecification and can identify areas where the model is deficient. We demonstrate the usefulness of the new approach on several illustrative examples.
_Keywords: generative models, implicit models, likelihood-free inference, normalising flows, simulation-based inference_
## 1 Introduction
Statistical inference for complex models can be challenging when the likelihood function is infeasible to evaluate many times. However, if the model is computationally inexpensive to simulate given parameter values, it is possible to perform approximate parameter estimation by so-called simulation-based inference (SBI) techniques (e.g. Cranmer et al. (2020)). The difficulty of obtaining reliable inferences in the SBI setting is exacerbated when the model is misspecified (e.g. Frazier et al. (2020)).
Statistical approaches for SBI, such as approximate Bayesian computation (ABC, Sisson et al. (2018)) and Bayesian synthetic likelihood (BSL, Price et al. (2018)) have been well studied, both empirically (e.g. Drovandi and Frazier (2022)) and theoretically (e.g. Li and Fearnhead (2018), Frazier et al. (2018), Frazier et al. (2022)). These approaches often base inference on a summarisation of the data to manage computational costs. ABC aims to minimise the distance between observed and simulated summaries, whereas BSL constructs a Gaussian approximation of the model summary to form an approximate likelihood. In the case of model misspecification, there may be additional motivation to replace the entire dataset with summaries, as the resulting model can then be trained to capture the broad features of the data that may be of most interest; see, e.g., Lewis et al. (2021) for further discussion. In this paper, the type of misspecification we are interested in is when the model is not able to recover the observed summary statistic as the sample size diverges. This form of misspecification is referred to as incompatibility in Marin et al. (2014).
The behaviour of ABC and BSL under incompatibility is now well understood. Frazier et al. (2020) show that under various assumptions, ABC is capable of concentrating onto the pseudo-true parameter value, which in the SBI context is the value that minimises some distance between the large sample limit of the observed and simulated summaries. However, the concentration is not Gaussian and credible intervals do not have the correct frequentist coverage. BSL on the other hand can exhibit unexpected behaviour under misspecification (Frazier et al., 2021). For example, it is possible to obtain Gaussian concentration onto the pseudo-true parameter, but it is also possible to obtain a multimodal posterior that does not concentrate onto a singleton. Unfortunately, the behaviour for a given problem is not known _a priori_.
Given the undesirable properties of BSL under misspecification, Frazier and Drovandi (2021) propose methods to simultaneously identify which statistics are incompatible and make inferences robust. The approach of Frazier and Drovandi (2021) is a model expansion that introduces auxiliary variables, one per summary statistic, whose purpose is to either shift the means or inflate the variances in the Gaussian approximation so that the extended model is compatible, i.e. to soak up the misspecification.
Although ABC is, in a certain sense, robust to misspecification, and BSL has been extended to handle incompatibility, they both remain inefficient in terms of the number of model simulations required. Most algorithms for ABC and BSL are wasteful in the sense they use a relatively large number of model simulations that are associated with rejected parameter proposals (for some exceptions to this, see Jasra et al. (2019); Levi and Craiu (2022); Warne et al. (2018, 2022)). This has motivated the development of methods in machine learning that utilise all model simulations to learn either the likelihood (e.g. Papamakarios et al. (2019)), posterior (e.g. Greenberg et al. (2019)) or likelihood ratio (e.g. Thomas et al. (2022)). Since these objects are learned as functions of the parameter, subsequent posterior inference does not require further model simulation.
However, the machine learning approaches, such as sequential neural likelihood (SNL) and sequential neural posterior (SNP) have been shown to exhibit poor performance under model misspecification (e.g. Bon et al. (2022); Cannon et al. (2022); Schmitt et al. (2021); Ward et al. (2022)). Thus there is a critical need to develop these neural approaches so they are robust to model misspecification. Ward et al. (2022) develop a method, which shares similarities to the mean adjustment approach developed for BSL, to make neural posterior estimation robust to model misspecification. Cannon et al. (2022) develop several neural SBI robust methods by incorporating machine learning methods that are known to better handle out-of-distribution (OOD) data. Cranmer et al. (2020) advise to incorporate
additional noise directly into the simulator if model misspecification is suspected.
In this paper we develop a robust version of SNL, again inspired by the mean adjustment approach for BSL. Unlike Ward et al. (2022) who consider neural posterior estimation, we consider neural likelihood estimation, which is useful for problems where the likelihood is easier to emulate compared to the posterior. Further, ours is the first _sequential_ neural approach that simultaneously detects and corrects for model misspecification.
## 2 Background
Let \(y=(y_{1},\ldots,y_{n})^{\top}\) denote the observed data and define \(P_{0}^{(n)}\) as the true distribution of \(y\). The observed data is assumed to be generated from a class of parametric models \(\{P_{\theta}^{(n)}:\theta\in\Theta\subset\mathbb{R}^{d_{\theta}}\}\) for which the likelihood function is intractable, but from which we can easily simulate pseudo-data \(x\) for any \(\theta\in\Theta\) where \(\theta\) is \(d_{\theta}\) dimensional. Let \(\Pi\) denote the prior measure for \(\theta\) and \(\pi(\theta)\) its density. The posterior density of interest is given by
\[\pi(\theta\mid y)\propto g(y\mid\theta)\pi(\theta),\]
where \(g(y\mid\theta)\) is the likelihood function.
### Statistical Approaches for SBI
Since we assume that the likelihood is computationally intractable, we conduct inference using approximate Bayesian methods. Statistical approaches to SBI aim to search for values of \(\theta\) that produce pseudo-data \(x\) which is "close enough" to \(y\), and then retain these values to build an approximation to the posterior. To ensure the problem is computationally practical, the comparison is generally carried out using summaries of the data. Moreover, under model misspecification, there may be further motivation to conduct inference based on summaries, to attempt to capture the key features of the data. Let \(S:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d}\), \(d\geq d_{\theta}\), denote the vector summary statistic mapping used in the analysis.
Two prominent statistical approaches for SBI are ABC and BSL. ABC approximates the likelihood via the following:
\[g_{\epsilon}(S(y)\mid\theta)=\int_{\mathbb{R}^{d}}K_{\epsilon}(\rho\{S(y),S(x )\})g_{n}(S(x)\mid\theta)dx,\]
where \(\rho\{S(y),S(x)\}\) measures the discrepancy between observed and simulated summaries and \(K_{\epsilon}(\cdot)\) is a kernel that allocates higher weight to smaller \(\rho\). The bandwidth of the kernel, \(\epsilon\), is often referred to as the tolerance in the ABC literature. The above integral is intractable, but can be estimated unbiasedly by drawing \(m\) mock datasets \(x_{1},\ldots,x_{m}\sim P_{\theta}^{(n)}\) and computing
\[\hat{g}_{\epsilon}(S(y)\mid\theta)=\frac{1}{m}\sum_{i=1}^{m}K_{\epsilon}(\rho \{S(y),S(x_{i})\}).\]
It is common to set \(m=1\) and choose the indicator kernel function, \(K_{\epsilon}(\rho\{S(y),S(x)\})=\mathbf{I}(\rho\{S(y),S(x)\}\leq\epsilon)\). Using arguments from the exact-approximate literature (Andrieu and Roberts, 2009), unbiasedly estimating the ABC likelihood leads to a Bayesian algorithm that samples from the approximate posterior proportional to \(g_{\epsilon}(S(y)\mid\theta)\pi(\theta)\).
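As a minimal sketch of this estimator, the snippet below implements \(\hat{g}_{\epsilon}\) with the indicator kernel and a Euclidean \(\rho\); the `simulate` callable, which is assumed to return the summary \(S(x)\) of one dataset drawn from \(P_{\theta}^{(n)}\), is a placeholder rather than part of the paper.

```python
import numpy as np

def abc_likelihood_estimate(theta, s_obs, simulate, epsilon, m=1, rng=np.random.default_rng()):
    """Monte Carlo estimate of the ABC likelihood with an indicator kernel.

    simulate(theta, rng) is assumed to return the summary S(x) of one simulated
    dataset x ~ P_theta; rho is taken to be the Euclidean distance.
    """
    accepted = 0
    for _ in range(m):
        s_sim = simulate(theta, rng)
        if np.linalg.norm(s_sim - s_obs) <= epsilon:
            accepted += 1
    return accepted / m
```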
As is evident from the above integral estimator, ABC non-parametrically estimates the summary statistic likelihood. In contrast, BSL uses a parametric estimator. The most common BSL approach approximates \(g_{n}(\cdot\mid\theta)\) using a Gaussian:
\[g_{A}(S(y)\mid\theta)=\mathcal{N}\left(S(y);\mu(\theta),\Sigma(\theta)\right),\]
where \(\mu(\theta)=\mathsf{E}[S(x)|\theta]\) and \(\Sigma(\theta)=\mathrm{Var}(S(x)|\theta)\) denote the mean and variance of the model summary statistic at \(\theta\). In almost all practical cases \(\mu(\theta)\) and \(\Sigma(\theta)\) are unknown, but we can replace these quantities with those estimated from \(m\) independent model simulations, using for example the sample mean and variance:
\[\mu_{m}(\theta) =\frac{1}{m}\sum_{i=1}^{m}S(x^{i}),\] \[\Sigma_{m}(\theta) =\frac{1}{m}\sum_{i=1}^{m}\left(S(x^{i})-\mu_{m}(\theta)\right) \left(S(x^{i})-\mu_{m}(\theta)\right)^{\top},\]
and where each simulated data set \(x^{i}\), \(i=1,\ldots,m\), is generated iid from \(P_{\theta}^{(n)}\). The synthetic likelihood is then approximated as
\[\hat{g}_{A}(S(y)\mid\theta)=\mathcal{N}\left(S(y);\mu_{m}(\theta),\Sigma_{m}( \theta)\right).\]
Unlike ABC, \(\hat{g}_{A}(S(y)\mid\theta)\) is not an unbiased estimator of \(g_{A}(S(y)\mid\theta)\). Frazier et al. (2022) demonstrate that if the summary statistics are sub-Gaussian, then the choice of \(m\) is immaterial so long as \(m\) diverges as \(n\) diverges. The insensitivity to \(m\) is supported empirically in Price et al. (2018), provided that \(m\) is chosen large enough so that the plug-in synthetic likelihood estimator has a small enough variance to ensure that MCMC mixing is not adversely affected.
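The estimated synthetic likelihood can be written in a few lines; the sketch below reuses the hypothetical `simulate` helper from the previous example and follows the \(1/m\) covariance normalisation used in the text.

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_log_likelihood(theta, s_obs, simulate, m, rng=np.random.default_rng()):
    """Gaussian synthetic log-likelihood estimated from m model simulations at theta."""
    S = np.stack([simulate(theta, rng) for _ in range(m)])   # shape (m, d)
    mu_m = S.mean(axis=0)
    Sigma_m = np.cov(S, rowvar=False, bias=True)             # 1/m normalisation, matching the estimator above
    return multivariate_normal.logpdf(s_obs, mean=mu_m, cov=Sigma_m)
```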
### SBI and Model Misspecification
The usual notion of model misspecification, i.e. that there is no value of \(\theta\in\Theta\) such that \(P_{\theta}^{(n)}=P_{0}^{(n)}\), is not directly meaningful in the SBI context, since even if the model is incorrect in this sense, it is still possible that \(P_{\theta}^{(n)}\) can generate summary statistics that match the observed statistic (Frazier et al., 2020). Define \(b(\theta)=\mathsf{E}[S(x)\mid\theta]\) and \(b_{0}=\mathsf{E}[S(y)]\) as the expected value of the summary statistic with respect to the probability measures \(P_{\theta}^{(n)}\) and \(P_{0}^{(n)}\), respectively. That is, the expectations are with respect to the model conditioned on \(\theta\) and the true data generating process, respectively. The meaningful notion of misspecification in the SBI context is when there is no \(\theta\in\Theta\) such that \(b(\theta)=b_{0}\), i.e. there is no parameter value such that the expected simulated and observed summaries match.
In the context of ABC, we say that the model is misspecified if
\[\epsilon^{*}=\inf_{\theta\in\Theta}\rho(b(\theta),b_{0})>0,\]
for some metric \(\rho\), and the corresponding pseudo-true parameter is defined as \(\theta^{*}=\arg\inf_{\theta\in\Theta}\rho(b(\theta),b_{0})\). Frazier et al. (2020) show, under various conditions, the ABC posterior concentrates onto \(\theta^{*}\) for large sample sizes, and thus ABC does possess an inherent robustness to model misspecification. However, Frazier et al. (2020) also show that the asymptotic shape of the ABC posterior is non-Gaussian and credible intervals do not possess valid frequentist coverage; i.e., confidence sets do not have the correct level under \(P_{0}^{(n)}\).
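As a concrete illustration, anticipating the contaminated normal example of Section 4.1, suppose the assumed model implies \(b(\theta)=(\theta,1)^{\top}\) while \(b_{0}=(1,2)^{\top}\). Taking \(\rho\) to be the Euclidean distance,

\[\epsilon^{*}=\inf_{\theta\in\Theta}\sqrt{(\theta-1)^{2}+(1-2)^{2}}=1>0,\qquad\theta^{*}=\arg\inf_{\theta\in\Theta}\rho(b(\theta),b_{0})=1,\]

so the model is misspecified in the SBI sense, yet the pseudo-true value coincides with the parameter that matches the compatible (mean) summary.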
In the context of BSL, Frazier et al. (2021) show that when the model is incompatible, i.e. \(b(\theta)\neq b_{0}\ \forall\theta\in\Theta\), the Kullback-Leibler divergence between the true data generating distribution and the Gaussian distribution associated with the synthetic likelihood diverges as \(n\) diverges. In BSL, we say that the model is incompatible if
\[\lim_{n\to\infty}\inf_{\theta\in\Theta}\left\{b(\theta)-b_{0}\right\}^{\top} \left\{n\Sigma(\theta)\right\}^{-1}\left\{b(\theta)-b_{0}\right\}>0.\]
Define \(M_{n}(\theta)=n^{-1}\partial\log g_{A}\left(S\mid\theta\right)/\partial\theta\). The behaviour of BSL under misspecification is dependent on the number of roots of \(M_{n}(\theta)=0\). If there is a single solution, and under various assumptions, the BSL posterior will concentrate onto the pseudo-true parameter \(\theta^{*}\) and its asymptotic shape is Gaussian, and the BSL posterior mean satisfies a Bernstein von-Mises result. However, if there are multiple solutions to \(M_{n}(\theta)=0\), then the BSL posterior will asymptotically exhibit multiple modes that do not concentrate on \(\theta^{*}\). The number of solutions to \(M_{n}(\theta)=0\) for a given problem is not known _a priori_ and is very difficult to explore.
In addition to the theoretical issues suffered by BSL under misspecification, there are also computational issues. Frazier and Drovandi (2021) identify that, under incompatibility, since the observed summary lies in the tail of the estimated synthetic likelihood for any value of \(\theta\), the Monte Carlo estimate of the likelihood suffers from high variance. Consequently, a very large value of \(m\) is required to allow the MCMC chain to mix and not become stuck, which is computationally burdensome.
A solution to the BSL incompatibility problem is provided in Frazier and Drovandi (2021). The solution involves expanding the model to include an auxiliary parameter, \(\Gamma\in\mathbb{R}^{d}\) such that \(\Gamma=(\gamma_{1},\ldots,\gamma_{d})^{\top}\), which has the same dimension as the summary statistic. The approach of Frazier and Drovandi (2021) then either adjusts the mean or inflates the variance of the synthetic likelihood so that the observed summary does not lie so far in the tails of the expanded model. The expanded model is overparameterised since \(\dim((\theta,\Gamma)^{\top})=d+d_{\theta}\), which is greater than the dimension of the summary statistic, \(d\). To regularise the model, Frazier and Drovandi (2021) impose a prior distribution on \(\Gamma\) that favours compatibility. However, the prior for each component of \(\Gamma\) has a heavy tail so that it can "soak up" the misspecification for a certain subset of the summary statistics. By doing so, the method is able to identify the statistics that the model is not compatible with, and at the same time, mitigate the influence of the incompatible statistics on the inference. Frazier and Drovandi (2021) show that under compatibility, the posterior for \(\Gamma\) is the same as its prior, so that incompatibility can be detected by departures from the prior.
Here we provide more detail on the mean adjustment method of Frazier and Drovandi (2021) since we adopt a similar approach within our robust SNL method. The mean adjusted (estimated) synthetic likelihood is denoted
\[\mathcal{N}\left(S;\mu_{m}(\theta)+\sigma_{m}(\theta)\circ\Gamma,\Sigma_{m}( \theta)\right),\]
where \(\sigma_{m}(\theta)\) is the vector of estimated standard deviations of the model summary statistics, and \(\circ\) denotes the Hadamard (element-by-element) product. The role of \(\sigma_{m}(\theta)\) is to ensure that we can treat each component of \(\Gamma\) as the number of standard deviations (either positive or negative) that we are shifting the corresponding model summary statistic.
Frazier and Drovandi (2021) suggest using a prior for which \(\theta\) and \(\Gamma\) are independent, with the prior density for \(\Gamma\) being
\[p(\Gamma)=\prod_{j=1}^{d}\frac{1}{2\lambda}\exp\left(-\frac{|\gamma_{j}|}{ \lambda}\right).\]
The Laplace prior above with scale \(\lambda\) for each \(\gamma_{j}\) is chosen because it is peaked at zero, but with a moderately heavy tail. Frazier and Drovandi (2021) develop a component-wise MCMC algorithm that iteratively updates via the conditionals \(\theta|S,\Gamma\) and \(\Gamma|S,\theta\). The update for \(\Gamma\) holds the \(m\) model simulations fixed and uses a slice sampler, so that the acceptance rate is one and no tuning of a proposal distribution is required. Frazier and Drovandi (2021) find empirically that sampling over the joint space \((\theta,\Gamma)^{\top}\) does not slow down mixing on the \(\theta\)-marginal space. On the contrary, in the case of misspecification, the mixing is substantially improved as the observed value of the summaries no longer falls in the tail of the Gaussian distribution.
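For completeness, the following sketch evaluates the mean-adjusted synthetic log-likelihood together with the Laplace log-prior on \(\Gamma\); it reuses the hypothetical `simulate` helper from the earlier sketches and only illustrates the target density, not the component-wise MCMC scheme itself.

```python
import numpy as np
from scipy.stats import multivariate_normal

def rbsl_mean_adjusted_logpost(theta, gamma, s_obs, simulate, m, lam, log_prior_theta, rng):
    """Log of the (unnormalised) mean-adjusted RBSL target, for a fresh set of m simulations."""
    S = np.stack([simulate(theta, rng) for _ in range(m)])
    mu_m = S.mean(axis=0)
    sigma_m = S.std(axis=0, ddof=1)                       # per-summary standard deviations
    Sigma_m = np.cov(S, rowvar=False, bias=True)
    log_lik = multivariate_normal.logpdf(s_obs, mean=mu_m + sigma_m * gamma, cov=Sigma_m)
    log_prior_gamma = np.sum(-np.log(2 * lam) - np.abs(gamma) / lam)   # independent Laplace(0, lam)
    return log_lik + log_prior_gamma + log_prior_theta(theta)
```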
Although ABC has a natural robustness to misspecification and BSL has been extended to accommodate incompatibility, both methods reject a large number of model simulations, and can thus be highly computationally intensive when simulating the model is not cheap. As described in the introduction, neural methods in the machine learning community have been developed that exploit all the model simulations to build a surrogate model of the posterior, likelihood or likelihood ratio. Below we describe one of these methods, sequential neural likelihood (SNL), and show how it can be extended to accommodate model misspecification.
## 3 Robust Sequential Neural Likelihood
In this section, we propose an approach that extends SNL using a similar method to the mean adjustment approach in Frazier and Drovandi (2021) so that it is robust to model misspecification.
### Sequential Neural Likelihood
SNL belongs to the class of SBI methods that use a neural conditional density estimator (NCDE). A NCDE is a specific class of neural network, \(q_{\phi}\), parameterised by \(\phi\), that learns a conditional probability density from a set of datapoint pairs. This is attractive for SBI as we have access to pairs of \((\theta,x)\), but do not have a tractable conditional probability density, in either direction. Hence, the idea is to train \(q_{\phi}\) on \(\mathcal{D}=\{\theta_{i},x_{i}\}_{i=1}^{m}\) and use it as a surrogate for the unavailable density of interest. NCDEs have been used as a surrogate density for the likelihood (Papamakarios et al., 2019) and posterior (Papamakarios and Murray, 2016; Greenberg et al., 2019). Throughout this section we will mainly consider approaches that build a surrogate of the intractable likelihood function, \(q_{\phi}(S(x)\mid\theta)\), using a normalising flow as the NCDE.
Normalising flows are a useful class of neural networks for density estimation. They convert a simple base distribution with density \(\pi(u)\), to a complex target distribution with density \(\pi(\eta)\), through a sequence of \(L\) bijective transformations, \(T=T_{L}\circ\cdots\circ T_{1}\). The density of
\(\eta=T^{-1}(u)\), \(\eta\in\mathbb{R}^{d}\), where \(u\sim\pi(u)\) is
\[\pi(\eta)=\pi(u)|\det J_{T}(u)|^{-1}, \tag{1}\]
where \(J_{T}\) is the Jacobian of \(T\). Normalising flows are also useful for data generation, although this has been less important for SBI methods. We only consider autoregressive flows here, but there are many recently developed alternatives, as discussed in Papamakarios et al. (2021).
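Equation 1 can be made concrete with a single fixed elementwise bijection; the sketch below uses \(\eta=\tanh(u)\) with a standard normal base density purely to show the bookkeeping, whereas a normalising flow composes many learnable transformations of this kind.

```python
import numpy as np

def log_density_eta(u):
    """Log-density of eta = tanh(u) when u ~ N(0, I), via the change-of-variables formula.

    The transform acts elementwise, so the Jacobian is diagonal and
    log|det J| = sum_i log(1 - tanh(u_i)^2).
    """
    log_pi_u = np.sum(-0.5 * u**2 - 0.5 * np.log(2 * np.pi))   # standard normal log-density
    log_det_jac = np.sum(np.log1p(-np.tanh(u) ** 2))           # log |det d(eta)/d(u)|
    return log_pi_u - log_det_jac

u = np.random.default_rng(0).normal(size=3)
eta, log_p = np.tanh(u), log_density_eta(u)
```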
Autoregressive flows are defined by a conditioner function and a transformer function. The transformer, \(v^{\prime}_{i}=\tau(v_{i};h_{i})\), is an invertible function parameterised by \(h_{i}\) that maps \(v_{i}\) to \(v^{\prime}_{i}\) for \(i=1,\ldots,d\). The conditioner, \(h_{i}=c_{i}(v_{<i})\), outputs values that parameterise the transformer function. The only constraint for the conditioner is the autoregressive property (the \(i\)-th element of the conditioner can only be conditioned on elements from \(v\in\mathbb{R}^{d}\) that have indices \(<i\)). This constraint ensures that the Jacobian is a triangular matrix allowing fast computation of the determinant in Equation 1. The sequence of transformations, \(T\), is composed of the transformer and conditioner functions repeated multiple times, with the output of the transformer, \(v^{\prime}_{i}\), being passed into the next conditioner function. Autoregressive flows have found popular usage in SBI applications.
The two flows most widely used for SBI are masked autoregressive flow (MAF, Papamakarios et al., 2017) and neural spline flow (NSF, Durkan et al., 2019). We consider NSF in more depth as it is the flow used for the examples in Section 4. NSF uses a spline-based transformer that defines a monotonically increasing piecewise function of \(K\) bins between \(K+1\) knots. Due to its expressive power, a rational quadratic function (quotient of two quadratic polynomials) is used for each bin. The conditioner output parameters \(h_{i}\) are the knot locations and the derivatives at the knots.
The conditioner is implemented in NSF as a coupling layer. A coupling layer splits the data into two parts. The first part, \((z_{1},\ldots,z_{\lfloor\frac{d}{2}\rfloor})\), is left untouched. The second part takes the unchanged first part as input, and outputs \((h_{\lfloor\frac{d}{2}\rfloor+1},\ldots,h_{d})\) using some function (typically a neural network). Finally, to make NSF a conditional normalising flow, we add \(\theta\) into the conditioner, \(h_{i}=c_{i}(v_{<i}\mid\theta)\). As the composition of \(T\) contains many neural networks, stochastic gradient-based optimisation is used to train the flow. The trained flow can then be embedded in an MCMC sampling scheme to sample from the approximate posterior.
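The structure of a conditional coupling layer can be sketched compactly; for brevity the transformer below is affine rather than the rational quadratic spline used by NSF, and `conditioner` is a hypothetical callable standing in for the neural network described above.

```python
import numpy as np

def affine_coupling_forward(v, theta, conditioner):
    """One conditional coupling layer with an affine transformer (NSF uses splines instead).

    v           -- input vector of dimension d
    theta       -- conditioning variables (the model parameters)
    conditioner -- assumed callable mapping (v_first_half, theta) -> (shift, log_scale)
                   for the second half; in practice a small neural network.
    """
    d = v.shape[0]
    v1, v2 = v[: d // 2], v[d // 2:]
    shift, log_scale = conditioner(v1, theta)
    v2_new = v2 * np.exp(log_scale) + shift
    log_det_jac = np.sum(log_scale)   # triangular Jacobian, so the determinant is the product of scales
    return np.concatenate([v1, v2_new]), log_det_jac
```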
Neural-based methods can efficiently sample the approximate posterior using MCMC methods. The evaluation of the normalising flow density is constructed to be fast. Also as we are using the trained flow as a surrogate function, no simulations are needed during MCMC sampling. Using automatic differentiation (Baydin et al., 2018), one can efficiently find the gradient of a NCDE and use it in an efficient MCMC sampler such as the No-U-Turn sampler (NUTS) (Hoffman and Gelman, 2014).
One categorisation of neural SBI methods is between amortised and sequential sampling schemes. These methods differ in the proposal distribution for \(\theta\). Amortised methods build a surrogate of the likelihood function \(q_{\phi}(S(x)\mid\theta)\) for any \(x\) within the support of the prior predictive distribution. Thus the trained flow can be used to approximate the posterior for any observed statistic, which is efficient if many datasets need to be analysed. Unfortunately, this requires using the prior as the proposal distribution. When the prior and posterior differ, there will be few training samples of \(x\) that are close to \(y\), and hence the trained flow may not be very accurate in the vicinity of the observed statistic.
Sequential approaches aim to update the proposal distribution, so that more training datasets are generated closer to \(S(y)\) to obtain a more accurate approximation of \(\pi(\theta|S(y))\). In this approach, \(R\) rounds of training are performed, with the proposal distribution for the current round given by the approximate posterior for the previous round. The first round proposes \(\theta\sim\pi(\theta)\). At each round \(r\), a normalising flow, \(q_{r,\phi}(S(x)\mid\theta)\), is trained on all generated \((\theta,x)\in\mathcal{D}\).
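The sequential scheme can be summarised in pseudocode as follows; `train_flow`, `sample_posterior_mcmc` and `simulate` are placeholders for the flow fitting, MCMC and simulator routines rather than a specific library API.

```python
# Pseudocode sketch of sequential neural likelihood (SNL); all helpers are hypothetical.
def snl(prior, simulate, s_obs, rounds=10, sims_per_round=1000):
    data = []                                   # D = {(theta_i, S(x_i))}
    propose = prior.sample                      # the first round proposes from the prior
    flow = None
    for r in range(rounds):
        thetas = propose(sims_per_round)
        data += [(theta, simulate(theta)) for theta in thetas]
        flow = train_flow(data)                 # fit q_phi(S(x) | theta) on all of D
        # Subsequent rounds propose from the current approximate posterior.
        propose = lambda n, flow=flow: sample_posterior_mcmc(flow, s_obs, prior, n)
    return sample_posterior_mcmc(flow, s_obs, prior, 10_000)
```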
Development of neural methods for SBI is an active area of research with many recent approaches also approximating the likelihood (Boelts et al., 2022; Wiqvist et al., 2021). Neural SBI methods need not use normalising flows, with some more recent approaches using diffusion models to approximate the score of the likelihood (Sharrock et al., 2022) or energy-based models to surrogate the likelihood (Glaser et al., 2022) or score (Pacchiardi and Dutta, 2022). Our robust extension of SNL can in principle be implemented with any of these likelihood estimators.
### Robust Extension to Sequential Neural Likelihood
Recent research has found neural SBI methods behave poorly under model misspecification (Bon et al., 2022; Cannon et al., 2022; Schmitt et al., 2021; Ward et al., 2022). It is not surprising that neural SBI methods suffer from the same issues as ABC and BSL when compatibility is not satisfied as they are based on many of the same principles. Indeed, neural methods are known to struggle when the input differs from the training dataset, an issue known as out-of-distribution (OOD, Yang et al., 2021). This extends to normalising flows, which have been shown to fail to detect OOD data (Kirichenko et al., 2020). The poor performance of neural SBI under model misspecification has prompted the development of more robust methods.
Recently, methods have been developed to detect model misspecification when applying neural posterior estimation for both amortised (Ward et al., 2022) and sequential (Schmitt et al., 2021) approaches.1 Schmitt et al. (2021) use a maximum mean discrepancy (MMD) estimator to detect a "simulation gap" between the observed and simulated data. However, this is focused on detecting model misspecification and does not add robustness to inferences on \(\theta\). Ward et al. (2022) both detects and corrects for model misspecification similarly to Frazier and Drovandi (2021). Rather than explicitly introducing auxiliary variables, Ward et al. (2022) introduces an error model \(\pi(S(y)\mid S(x))\). The error model can be used to sample values \(S_{i}(x)\), \(i=1,\ldots,m\) for \(S(x)\) from its marginal posterior density, which is approximated by the density proportional to \(\pi(S(y)\mid S(x))q_{\phi}(S(x))\), where \(q_{\phi}(S(x))\) is a normalising flow approximating the prior predictive density of \(S(x)\). The marginal posterior for \(\theta\) is then approximated as an average of conditional density estimates \(q_{\phi}(\theta\mid S_{i}(x))\), for \(i=1,\ldots,m\), using a second conditional normalizing flow for estimating the conditional posterior of \(\theta\) given \(S(x)\). Both of the approaches described above use a surrogate of the posterior. There is thus a gap in the literature for robust neural methods that approximate the likelihood, which would be beneficial for applications where it is easier to emulate the likelihood than the posterior.
Footnote 1: Schmitt et al. (2021) also add robustness to model misspecification to BayesFlow (Radev et al., 2022). BayesFlow, like NPE, is an amortised neural approximation of the posterior.
We propose robust SNL (RSNL), a sequential approach that approximates the likelihood that is made robust to model misspecification using a similar approach to Frazier and Drovandi (2021). As outlined in Section 2.2, the approach of Frazier and Drovandi (2021) adjusts either the sample mean or sample covariance. In the case of the mean adjustment,
we can think of the adjustment being applied to the observed summary rather than the estimated summary mean, given the symmetry of the normal distribution. For RSNL, we apply this argument to shift the observed summary directly based on auxiliary adjustment parameters. When \(S(y)\) falls in the tail of the surrogate likelihood, the adjustment parameters can be activated to shift to a region of higher density. We thus evaluate \(q_{\phi}(S(y)-\Gamma\mid\theta)\) as the adjusted surrogate likelihood.2 So instead of targeting \(\pi(\theta\mid S(y))\), we are now estimating the approximate joint posterior,
Footnote 2: We could use notation \(q_{\phi}(S(y)\mid\theta,\Gamma)\). However, \(q_{\phi}(S(y)-\Gamma\mid\theta)\) highlights that we are using the flow trained on \(\mathcal{D}\), and the effect of \(\Gamma\) is solely shifting the location of the observed summaries.
\[\pi(\theta,\Gamma\mid S(y))\propto q_{\phi}(S(y)-\Gamma\mid\theta)\pi(\theta) \pi(\Gamma),\]
where we set \(\pi(\theta)\) and \(\pi(\Gamma)\) independently of each other.
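In code, the only changes relative to SNL are the argument passed to the trained flow and the extra prior term; the sketch below assumes a `flow.log_prob(s, theta)` interface returning \(\log q_{\phi}(s\mid\theta)\), which is an illustrative assumption rather than a specific package's API.

```python
import numpy as np

def rsnl_joint_log_density(theta, gamma, s_obs, flow, log_prior_theta, prior_scales):
    """Unnormalised log pi(theta, Gamma | S(y)) for RSNL.

    flow.log_prob(s, theta) -- assumed to return log q_phi(s | theta)
    prior_scales            -- per-component Laplace scales lambda_i for Gamma
    """
    log_lik = flow.log_prob(s_obs - gamma, theta)     # adjusted surrogate likelihood
    log_prior_gamma = np.sum(-np.log(2 * prior_scales) - np.abs(gamma) / prior_scales)
    return log_lik + log_prior_gamma + log_prior_theta(theta)
```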
We find that the prior choice, \(\pi(\Gamma)\), is crucial for RSNL. As in the mean adjustment approach of Frazier and Drovandi (2021), also known as robust BSL (RBSL), we impose a Laplace prior distribution on \(\Gamma\) to encourage shrinkage. We set the components of \(\Gamma\) to be independent, \(\pi(\Gamma)=\Pi_{i=1}^{d}\pi(\gamma_{i})\). We could follow Frazier and Drovandi (2021) and set each component to the same prior scale. However, we propose here to set the prior for each component to,
\[\pi(\gamma_{i})=\text{Laplace}(0,\lambda=0.3\times\overset{\sim}{S}(y))=\frac{ 1}{2\lambda}\exp{\left(-\frac{|\gamma_{i}|}{\lambda}\right)},\]
where \(\overset{\sim}{S}(y)\) is the standardised observed summary (we discuss later more details on the standardisation). We set \(\pi_{0}(\gamma_{i})\sim\text{Laplace}(0,1)\) for the initial round. We recompute \(\overset{\sim}{S}(y)\) at each round and accordingly set \(\pi_{r}(\gamma_{i})\). The idea here is that the standardised observed statistic gives us information on how likely a summary is to be misspecified (i.e. the further in the tails, the more likely it is to be misspecified). This approach allows highly misspecified summaries to be corrected as well as reducing the noise introduced by the adjustment parameters when the summary is well-specified. A consequence of this is that regardless of how far an incompatible statistic is in the tail, the adjustment parameters will have enough density to (theoretically) map the misspecified summary to the mean of the simulated summaries.
To our knowledge, there are two main scenarios where this prior is not suitable. First, if a summary is incompatible but after standardisation is very close to 0. This seems unlikely but may be possible when the simulated summaries have a complex multi-modal distribution. In this case, RSNL will behave similarly to SNL for the particular summary. Second, a summary is correctly specified but is in the tails. This is again unlikely, and would have the effect of increasing the noise introduced by the adjustment parameters. If there is a concern, the researcher can inspect summary statistic plots or the posterior predictive and use a different prior. However, we find that our choice of prior works well for the examples in Section 4.
The summaries are standardised to account for varying scales. This is done after the additional simulations are generated at each training round. As all generated parameters are used to train the flow, standardisation is computed using all of \(\mathcal{D}\). Standardisation serves two purposes: 1) when training the flow and 2) for the adjustment parameters to be on roughly the same scale as the summaries. When adjusting the summaries, we note the standardisation has been done unconditionally (i.e. sample mean and sample
standard deviation have been calculated using all simulated summary statistics in the training set). Standardisation conditional on \(\theta\) may be needed for more heteroskedastic simulation functions. We discuss some possible extensions in Section 5.
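A minimal sketch of the unconditional standardisation and the resulting round-dependent Laplace scales is given below; taking the magnitude of the standardised observed summary is an assumption of the example, made because the scale \(\lambda\) must be positive, and all names are illustrative.

```python
import numpy as np

def standardise_and_set_scales(sim_summaries, s_obs):
    """Standardise with statistics of the whole training set D, then set lambda_i = 0.3 * |S~(y)_i|."""
    mean = sim_summaries.mean(axis=0)
    std = sim_summaries.std(axis=0, ddof=1)
    sims_std = (sim_summaries - mean) / std
    s_obs_std = (s_obs - mean) / std
    prior_scales = 0.3 * np.abs(s_obs_std)   # assumed positive prior scale per component
    return sims_std, s_obs_std, prior_scales
```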
We are targeting the augmented joint posterior for \(\theta\) and \(\Gamma\). Algorithm 1 shows the full process to sample the RSNL approximate posterior. RSNL, like SNL, can evaluate both the neural likelihood and the gradient of the approximate posterior efficiently, so we use NUTS for MCMC sampling. This differs from Ward et al. (2022) who, due to the use of a spike-and-slab prior, use mixed Hamiltonian Monte Carlo, an MCMC algorithm for inference on both continuous and discrete variables. The main difference between SNL and RSNL is that the MCMC sampling is now targeting the adjusted posterior. Hence, RSNL can be used in place of SNL with little difficulty.
Once we have samples from the joint posterior, we can consider the \(\theta\) and \(\Gamma\) posterior samples separately. We can use the \(\theta\) samples to conduct Bayesian inference on functions of \(\theta\) of interest for the application at hand. Additionally, the \(\Gamma\) approximate posterior samples can be used for model criticism.
RSNL can be used for model criticism similarly to RBSL and the ABC approach of Ratmann et al. (2009). It is expected that when the assumed and actual DGP are incompatible, RSNL will behave similarly to RBSL and there will be a discrepancy between the prior and posterior distributions for the components of \(\Gamma\). Visual inspection should be sufficient to detect a discrepancy. However, a researcher can use any statistical distance function to assess this.
Another common approach for model criticism is posterior predictive checks, as was recommended for RBSL in Frazier and Drovandi (2021). For RSNL, we can also use the posterior predictive, \(\pi(S(\tilde{y})\mid S(y))\), where \(S(\tilde{y})\) is generated at sampled parameters from the approximate posterior, to visually assess model incompatibility. If \(S(y)\) appears in the tails with little to no support, then this could be evidence of model misspecification. Additionally, the usual diagnostics for neural SBI methods are also available for RSNL. This is advantageous not only for detecting model misspecification, but also for making inference robust to misspecification.
## 4 Examples
In this section, we apply SNL and RSNL on three illustrative examples with model misspecification. Across all examples the following design and hyperparameters are used unless otherwise specified. We use a conditional NSF for \(q_{\phi}(S(x)\mid\theta)\) as implemented in the flowjax package (Ward, 2023). The flow design closely follows the choices in the sbi package (Tejero-Cantero et al., 2020). For the rational quadratic spline transformer, we use 10 bins over the interval [-5, 5]. The transformer function defaults to the identity function outside of this range. This is important for the considered misspecified models, as often the observed summary is in the tails. The conditioner consists of five coupling layers, with each coupling layer using a multilayer perceptron of two layers with 50 hidden units. The flow is trained using the Adam optimiser (Kingma and Ba, 2015) with a learning rate of \(5\times 10^{-4}\). Training of the flow is stopped when either the validation loss, calculated on 10% of the samples, has not improved over 20 epochs or when the limit of 500 epochs is reached.
We parallelise NUTS (MCMC) sampling across four chains and set the target acceptance probability to 0.95.3 Chain convergence is assessed by checking that the rank normalised \(\hat{R}\) of Vehtari et al. (2021) is in the range (1.0, 1.05) and the effective sample size (ESS) is reasonably close to the number of MCMC iterations. For each example, the autocorrelation, ESS and trace plots are also inspected. The chains are initialised at a random sample from the previous round. We then run each chain for 3500 iterations and discard the first 1000 iterations for burn-in. The resulting 10,000 combined samples from the four MCMC chains are thinned by a factor of 10. Model simulations are then run at the 1000 sampled
model parameter values. We use thinning so that potentially expensive model simulations are run using relatively independent parameter values, taking advantage of the fact that for typical applications running the MCMC with the learned normalising flow is much faster than running model simulations. The number of training rounds is set to \(R=10\), resulting in a total of 10,000 model simulations. After \(R\) rounds, we use \(q_{R,\phi}(S(y)\mid\theta)\) to run 100,000 MCMC iterations targeting the approximate posterior.
RSNL is implemented using the JAX(Bradbury et al., 2018) and NumPyro(Phan et al., 2019) libraries.4 All computations were done using four single-core Intel Xeon CPU processors provided through Google Colab.
Footnote 4: These libraries were selected as we find they lead to orders of magnitude speed-up over the PyTorch(Paszke et al., 2019) and Pyro(Bingham et al., 2019) packages for MCMC sampling.
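For reference, a NUTS run with the settings above might be configured roughly as follows in NumPyro; `rsnl_model`, `trained_flow` and `s_obs` are placeholders, and the exact argument names should be checked against the NumPyro documentation.

```python
# Hedged sketch of the sampling configuration described above (NumPyro).
from jax import random
from numpyro.infer import MCMC, NUTS

kernel = NUTS(rsnl_model, target_accept_prob=0.95)        # rsnl_model: hypothetical NumPyro model
mcmc = MCMC(kernel, num_warmup=1000, num_samples=2500, num_chains=4, thinning=10)
mcmc.run(random.PRNGKey(0), s_obs=s_obs, flow=trained_flow)
posterior_samples = mcmc.get_samples()
```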
### Contaminated Normal
Here we consider the contaminated normal example from Frazier and Drovandi (2021) to assess how SNL and RSNL perform under model misspecification. In this example, the DGP is assumed to follow:
\[y_{i}=\theta+\epsilon_{i},\quad\epsilon_{i}\overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,1),\]
where \(i=1,\dots,100\). However, the actual DGP follows:
\[y_{i}=\begin{cases}\theta+\epsilon_{1,i},&\epsilon_{1,i}\sim\mathcal{N}(0,1),\text{ with probability }\omega\\ \theta+\epsilon_{2,i},&\epsilon_{2,i}\sim\mathcal{N}(0,\sigma_{\epsilon}^{2} ),\text{ with probability }1-\omega\end{cases}.\]
The sufficient statistic for \(\theta\) under the assumed DGP is the sample mean, \(S_{1}(y)=\frac{1}{100}\sum_{i=1}^{100}y_{i}\). For demonstration purposes, let us also include the sample variance, \(S_{2}(y)=\frac{1}{99}\sum_{i=1}^{100}(y_{i}-S_{1}(y))^{2}\). When \(\sigma_{\epsilon}\neq 1\), we are unable to replicate the sample variance under the assumed model. The actual DGP is set to \(\omega=0.8\) and \(\sigma_{\epsilon}=2.5\) and hence the sample variance is incompatible. Since \(S_{1}(y)\) is sufficient, so is \(S(y)\) and one might still be optimistic that useful inference will result. To investigate the impact of misspecification, the observed summary is set to \(S(y)=(1.0,2.0)^{\top}\), where the sample mean is the expected value at the true parameter, but the observed sample variance significantly deviates from what can be generated from the assumed DGP. Under the assumed DGP we have that \(b(\theta)=(\theta,1)^{\top}\), for all \(\theta\in\Theta\). We thus have \(\inf_{\theta\in\Theta}||b(\theta)-b_{0}||>0\), and our model meets the criteria for misspecification as outlined in Section 2.2. We use the prior, \(\theta\sim\mathcal{N}(0,10^{2})\).
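For concreteness, the assumed and true DGPs of this example and the two summaries can be simulated as below; this is a direct transcription of the equations above rather than the authors' code.

```python
import numpy as np

def summaries(y):
    return np.array([y.mean(), y.var(ddof=1)])             # sample mean and sample variance

def assumed_dgp(theta, n=100, rng=np.random.default_rng()):
    return theta + rng.normal(size=n)                       # y_i = theta + N(0, 1)

def true_dgp(theta, n=100, omega=0.8, sigma_eps=2.5, rng=np.random.default_rng()):
    wide = rng.random(n) > omega                             # with probability 1 - omega use the wide noise
    scale = np.where(wide, sigma_eps, 1.0)
    return theta + scale * rng.normal(size=n)

print(summaries(true_dgp(1.0)))                              # the sample variance sits well above 1 on average
```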
Figure 1: Posterior plots for the contaminated normal model. The leftmost plot shows the estimated univariate SNL (dashed) and RSNL (solid) posterior densities for \(\theta\). The true parameter value is shown as a vertical dashed line. The right two plots show the estimated marginal posterior (solid) and prior (dashed) densities for the components of \(\Gamma\).
It is evident in Figure 1 that RSNL gives reliable inference with high posterior density surrounding the true parameter value, \(\theta=1\). In stark contrast, SNL gives unreliable inference with negligible support around the true parameter value.
Figure 1 also includes the posteriors for the components of \(\Gamma\). For \(\gamma_{1}\) (associated with the compatible summary statistic), the prior and posterior are effectively indistinguishable. This is consistent with the behaviour of RBSL. For \(\gamma_{2}\) (associated with the incompatible statistic), the misspecification is detected since the posterior has high density away from 0. Visual inspection is sufficient here for the modeller to detect the misspecified summary and provides some insight to adjust the model accordingly. The introduction of additional auxiliary variables does not come with excessive computational costs. For this example, the total computation time for RSNL is around 40 minutes and for SNL is around 10 minutes.
### Misspecified MA(1)
We follow the misspecified moving average (MA) of order 1 example in Frazier and Drovandi (2021), where the assumed DGP is an MA(1) model, \(y_{t}=w_{t}+\theta w_{t-1}\), \(-1\leq\theta\leq 1\) and \(w_{t}\stackrel{{\text{i.i.d.}}}{{\sim}}\mathcal{N}(0,1)\). However, the true DGP is actually a stochastic volatility model of the form:
\[y_{t}=\exp\left(\frac{z_{t}}{2}\right)u_{t},\quad z_{t}=\omega+\kappa z_{t-1}+\sigma_{v}v_{t},\]
where \(0<\kappa,\sigma_{v}<1\), and \(u_{t},v_{t}\stackrel{{\text{i.i.d.}}}{{\sim}}\mathcal{N}(0,1)\). We generate the observed data using the parameters, \(\omega=-0.76\), \(\kappa=0.90\) and \(\sigma_{v}=0.36\). The data is summarised using the autocovariance function, \(\zeta_{j}(x)=\frac{1}{T}\sum_{i=1+j}^{T}x_{i}x_{i-j}\), where \(T\) is the number of observations and \(j\in\{0,1\}\) is the lag. We use the prior \(\theta\sim\mathcal{U}(-1,1)\) and set \(T=100\).
It can be shown that for the assumed DGP, \(b(\theta)=(1+\theta^{2},\theta)^{\top}\). Under the true DGP, \(b_{0}=(\exp(\frac{\omega}{1-\kappa}+\frac{\sigma_{v}^{2}}{2(1-\kappa^{2})}),0 )^{\top}\approx(0.0007,0)^{\top}\). As evidently \(\inf_{\theta\in\Theta}||b(\theta)-b_{0}||>0\), the model is misspecified as outlined in Section 2.2. We also have a unique pseudo-true value with \(||b(\theta)-b_{0}||\) minimised at \(\theta=0\). The desired behaviour for our robust algorithm is to detect incompatibility in the first summary statistic and centre the posterior around this pseudo-true value. As the first element of \(b_{0}\) goes from \(1\to 0\), \(||b(\theta)-b_{0}||\) increases and the impact of model misspecification becomes more pronounced. We set \(S(y)=(0.01,0)^{\top}\) as a representative observed summary from the true DGP to assess the performance of SNL and RSNL under heavy misspecification.
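The assumed MA(1) simulator and the two autocovariance summaries follow directly from the definitions above; as before, the sketch is illustrative only.

```python
import numpy as np

def autocov(x, j):
    """Lag-j autocovariance summary zeta_j(x) = (1/T) * sum_i x_i * x_{i-j}."""
    T = len(x)
    return (x[j:] * x[: T - j]).sum() / T

def ma1_summaries(theta, T=100, rng=np.random.default_rng()):
    w = rng.normal(size=T + 1)
    y = w[1:] + theta * w[:-1]                 # y_t = w_t + theta * w_{t-1}
    return np.array([autocov(y, 0), autocov(y, 1)])

print(ma1_summaries(0.5))                      # close to (1 + 0.5**2, 0.5) on average
```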
Figure 2 shows that RSNL both detects the incompatible sample variance statistic and ensures that the approximate posterior concentrates onto the parameter value that favours matching of the compatible statistic, i.e. \(\theta=0\). SNL, however, is biased and has less support for the pseudo-true value.
As expected, \(\gamma_{1}\) (corresponding to the incompatible statistic) has significant posterior density away from 0 as seen in Figure 2. Also, the posterior for \(\gamma_{2}\) (corresponding to the compatible statistic) closely resembles the prior. The computational price of making inferences robust for the misspecified MA(1) model is minimal, with RSNL taking around 20 minutes to run and SNL taking around 10 minutes.

Figure 2: Posterior plots for the misspecified MA(1) model. The leftmost plot shows the estimated univariate SNL (dashed) and RSNL (solid) posterior densities for \(\theta\). The true parameter value is shown as a vertical dashed line. The right two plots show the estimated marginal posterior (solid) and prior (dashed) densities for the components of \(\Gamma\).
### Contaminated SLCP
The simple likelihood complex posterior (SLCP) model devised in Papamakarios et al. (2019) is a popular example in the SBI literature. The assumed DGP is a bivariate normal distribution with the mean vector, \(\mu_{\theta}=(\theta_{1},\theta_{2})^{\top}\), and covariance matrix:
\[\Sigma_{\theta}=\begin{bmatrix}s_{1}^{2}&\rho s_{1}s_{2}\\ \rho s_{1}s_{2}&s_{2}^{2}\end{bmatrix},\]
where \(s_{1}=\theta_{3}^{2}\), \(s_{2}=\theta_{4}^{2}\) and \(\rho=\tanh(\theta_{5})\). This results in a nonlinear mapping from \(\theta=(\theta_{1},\theta_{2},\theta_{3},\theta_{4},\theta_{5})\in\mathbb{R}^ {5}\to y\in\mathbb{R}^{2}\). The posterior is "complex" having multiple modes due to squaring as well as vertical cutoffs from the uniform prior that we define in more detail later. Hence, the likelihood is expected to be easier to emulate than the posterior, making it suitable for an SNL type of approach. Four draws are generated from this bivariate distribution giving the likelihood, \(g(y\mid\theta)=\prod_{j=1}^{4}\mathcal{N}(y_{j};\mu_{\theta},\Sigma_{\theta})\) for \(y=(y_{1},y_{2},y_{3},y_{4})\). No summarisation is done and the observed data is used in place of the summary statistic. We generate the observed data at parameter values, \(\theta=(0.7,-2.9,-1.,-0.9,0.6)^{\top}\), and place an independent \(\mathcal{U}(-3,3)\) prior on each component of \(\theta\).
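The assumed SLCP simulator is short enough to write out; the sketch below maps \(\theta\) to \((\mu_{\theta},\Sigma_{\theta})\) and draws the four bivariate observations, again as an illustration rather than the authors' implementation.

```python
import numpy as np

def slcp_simulate(theta, n_draws=4, rng=np.random.default_rng()):
    """Draw n_draws observations from N(mu_theta, Sigma_theta) under the assumed SLCP model."""
    mu = np.array([theta[0], theta[1]])
    s1, s2, rho = theta[2] ** 2, theta[3] ** 2, np.tanh(theta[4])
    cov = np.array([[s1 ** 2, rho * s1 * s2],
                    [rho * s1 * s2, s2 ** 2]])
    return rng.multivariate_normal(mu, cov, size=n_draws).ravel()   # flattened to a vector

y_obs = slcp_simulate(np.array([0.7, -2.9, -1.0, -0.9, 0.6]))
```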
To impose misspecification on this illustrative example, we draw a contaminated 5-th observation, \(y_{5}\) and use the observed data \(y=(y_{1},y_{2},y_{3},y_{4},y_{5})\). Contamination is done by applying the (stochastic) misspecification transform considered in Cannon et al. (2022), \(y_{5}=x_{5}+100z_{5}\), where \(x_{5}\sim\mathcal{N}(\mu_{\theta},\Sigma_{\theta})\), and \(z_{5}\sim\mathcal{N}((0,0)^{\top},100\mathbb{I}_{2})\). The assumed DGP is not compatible with this contaminated observation, and ideally the approximate posterior would ignore the influence of this observation.5
Footnote 5: Due to the stochastic transform, there is a small chance that the contaminated draw is compatible with the assumed DGP. However, the observed contaminated draw considered here is \((-172.7,-79.9)^{\top}\), which is very unlikely under the assumed DGP.
We thus want our inference to only use information from the four draws from the true DGP. The aim is to closely resemble the SNL posterior where the observed data is the four non-contaminated draws. Figure 3 shows the estimated posterior densities for SNL (for both compatible and incompatible summaries) and RSNL for the contaminated SLCP example. When including the contaminated 5-th draw, SNL produces a nonsensical posterior with little useful information. Conversely, the RSNL posterior has reasonable density around the true parameters and has identified the separate modes.
The first eight compatible statistics are shown in Figure 4. The prior and posteriors reasonably match each other. In contrast, the observed data from the contaminated draw is recognised as being incompatible and has significant density away from 0, as evident in Figure 5. Again, there is not a significant computational burden induced to estimate the adjustment parameters, with a total computational time of around 6 hours to run RSNL and around 4 hours for SNL.

Figure 4: Estimated marginal posterior (solid) and prior (dashed) for components of \(\Gamma\) corresponding with the non-contaminated draws.

Figure 5: Estimated marginal posterior (solid) and prior (dashed) for components of \(\Gamma\) corresponding with the contaminated draw.
Figure 3: Univariate and bivariate density plots of the estimated posterior for \(\theta\) on the SLCP example. Plots on the diagonal are the univariate posterior densities obtained by RSNL (solid) and SNL (dashed) on the contaminated SLCP example, and for SNL without the contaminated draw (dotted). The bivariate posterior distributions for contaminated SLCP are visualised as contour plots when applying RSNL (solid, lower triangle off-diagonal) and SNL (dashed, upper triangle off-diagonal). The true parameter values are visualised as a vertical dashed line for the marginal plots and the \(\times\) symbol in the bivariate plots.
## 5 Discussion
In this work, we have introduced a new neural SBI method that is robust to model misspecification. To our knowledge, this is the first method that both detects and corrects for model misspecification that targets the likelihood or uses sequential sampling. RSNL was shown on several illustrative examples to be robust to model misspecification while still conducting efficient inference.
We have shown that RSNL can provide useful inference with a fraction of the number of simulation calls of ABC and BSL methods. For example, only 10,000 model simulations were run to produce the RSNL posterior for the contaminated normal model. In contrast, RBSL in Frazier and Drovandi (2021) used in the order of millions of model simulations. A more rigorous comparison, such as the benchmarks in Lueckmann et al. (2021), could be done between ABC, BSL and neural SBI methods to ascertain their robustness to model misspecification and behaviour across different numbers of simulations. Such a benchmark would ideally include challenging applications with real-world data to demonstrate the utility of these methods for scientific applications.
In the mean adjustment approach of RBSL, Frazier and Drovandi (2021) account for the different summary scales and the fact that these scales could be \(\theta\)-dependent by adjusting the mean using \(\mu_{m}(\theta)+\sigma_{m}(\theta)\circ\Gamma\), where \(\sigma(\theta)\) is a vector of estimated standard deviations of the model summaries at \(\theta\). In RBSL, these standard deviations are estimated from the \(m\) model simulations generated based on \(\theta\). Analogously, we could consider a similar approach in RSNL and define the target
\[\pi(\theta,\Gamma\mid S(y))\propto q_{\phi}(S(y)-\sigma(\theta)\circ\Gamma\mid \theta)\pi(\theta)\pi(\Gamma).\]
The question then becomes, how do we estimate \(\sigma(\theta)\) in the context of RSNL? In the MCMC phase we do not want to generate more model simulations as this would be costly. If we believed that the standard deviation of the model summaries had little dependence on \(\theta\), we could set \(\sigma(\theta)=\sigma=\sigma(\hat{\theta})\) where \(\hat{\theta}\) is some reasonable point estimate of the parameter. Another approach would, for each \(\theta\) proposed in the MCMC, estimate \(\sigma(\theta)\) using surrogate model simulations generated using the fitted normalising flow. This would be much faster than using actual model simulations, but could still slow down the MCMC phase substantially. Instead of using a normalising flow, we could train a mixture density network (Bishop, 1994) to emulate the likelihood, which would then lead to an analytical expression for \(\sigma(\theta)\). A multivariate mixture density network could replace the flow completely, or the multivariate flow for the joint summary could be retained and a series of univariate mixture density networks applied to each summary statistic for the sole purpose of emulating \(\sigma(\theta)\). We plan to investigate these options in future research.
One might be concerned that the inclusion of adjustment parameters will introduce noise into the estimated posterior to a deleterious extent. Empirically we have found this to have a negligible effect, especially for the considered prior choice. This is consistent with the findings for RBSL in Frazier and Drovandi (2021). Additionally, Hermans et al. (2022) noted that SBI methods (including SNL) tend to produce overconfident posterior approximations. Hence, it seems unlikely that the small amount of noise from the adjustment parameters would result in overly conservative posterior estimates.
A modeller can determine what summaries are misspecified by comparing the prior and posterior densities for each component of \(\Gamma\). We relied on visual inspection of the adjustment parameters for the presented examples. This could be tedious when considering a large number of summaries, and an automated approach could be considered instead.
Here we considered an adjustment approach through an MCMC sampling scheme as in Frazier and Drovandi (2021). However, sequential neural variational inference (SNVI, Glockler et al., 2022) can provide useful inference with a reduced number of model simulations compared to other sequential neural methods, such as SNL. SNVI targets either the likelihood or the likelihood-ratio, another common target in SBI (Durkan et al., 2020; Hermans et al., 2020). Future work could investigate the impact of model misspecification on SNVI and how to make SNVI robust through the incorporation of adjustment parameters. The adjustment could be to the likelihood as in RSNL, or through an adjustment approach that targets the likelihood-ratio.
The choice of \(\pi(\Gamma)\) was found to be important in practice. Our prior choice was based on the dual requirements to minimise noise introduced by the adjustment parameters if the summaries are compatible, and to be capable of shifting the summary a significant distance from the origin if they are incompatible. The horseshoe prior is an appropriate choice for these requirements. Further work could consider how to implement this robustly in a NUTS sampler. Another approach is the spike-and-slab prior as in Ward et al. (2022). Further work is needed to determine the most appropriate prior.
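For reference, a minimal sketch of drawing adjustment parameters from the standard horseshoe construction (local half-Cauchy scales multiplying independent Gaussians) is given below; the exact specification adopted in this work, for instance the treatment of the global scale, may differ from this sketch.

```python
import numpy as np

def sample_horseshoe(dim, tau=1.0, rng=np.random.default_rng()):
    """Draw Gamma from a standard horseshoe prior:
    Gamma_j = tau * lambda_j * z_j, lambda_j ~ half-Cauchy(0,1), z_j ~ N(0,1)."""
    lam = np.abs(rng.standard_cauchy(dim))       # heavy-tailed local scales
    return tau * lam * rng.standard_normal(dim)  # most draws near 0, occasional large shifts
```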
The relation between model misspecification in SBI as in Frazier et al. (2020) and OOD data detection in machine learning (Yang et al., 2021) could be investigated more thoroughly. Cannon et al. (2022) obtained favourable results using OOD detection methods based on ensemble posteriors (mixtures of independent posteriors, Lakshminarayanan et al.
(2017)) and sharpness-aware minimisation (Foret et al., 2021). Potentially the benefits of these OOD methods could be heightened when also using adjustment parameters.
## Acknowledgements
Ryan P. Kelly was supported by an Australian Research Training Program Stipend and a QUT Centre for Data Science Top-Up Scholarship. Christopher Drovandi was supported by an Australian Research Council Future Fellowship (FT210100260).
|
2309.06621 | A Reinforcement Learning Approach for Robotic Unloading from Visual
Observations | In this work, we focus on a robotic unloading problem from visual
observations, where robots are required to autonomously unload stacks of
parcels using RGB-D images as their primary input source. While supervised and
imitation learning have accomplished good results in these types of tasks, they
heavily rely on labeled data, which are challenging to obtain in realistic
scenarios. Our study aims to develop a sample efficient controller framework
that can learn unloading tasks without the need for labeled data during the
learning process. To tackle this challenge, we propose a hierarchical
controller structure that combines a high-level decision-making module with
classical motion control. The high-level module is trained using Deep
Reinforcement Learning (DRL), wherein we incorporate a safety bias mechanism
and design a reward function tailored to this task. Our experiments demonstrate
that both these elements play a crucial role in achieving improved learning
performance. Furthermore, to ensure reproducibility and establish a benchmark
for future research, we provide free access to our code and simulation. | Vittorio Giammarino, Alberto Giammarino, Matthew Pearce | 2023-09-12T22:22:28Z | http://arxiv.org/abs/2309.06621v1 | # A Reinforcement Learning Approach for Robotic Unloading from Visual Observations
###### Abstract
In this work, we focus on a robotic unloading problem from visual observations, where robots are required to autonomously unload stacks of parcels using RGB-D images as their primary input source. While supervised and imitation learning have accomplished good results in these types of tasks, they heavily rely on labeled data, which are challenging to obtain in realistic scenarios. Our study aims to develop a _sample efficient_ controller framework that can learn unloading tasks _without the need for labeled data_ during the learning process. To tackle this challenge, we propose a hierarchical controller structure that combines a high-level decision-making module with classical motion control. The high-level module is trained using Deep Reinforcement Learning (DRL), wherein we incorporate a safety bias mechanism and design a reward function tailored to this task. Our experiments demonstrate that both these elements play a crucial role in achieving improved learning performance. Furthermore, to ensure reproducibility and establish a benchmark for future research, we provide free access to our code and simulation.
## I Introduction
Robotic unloading is generally defined as the set of tasks in which robots are deployed to unload items from containers, trucks, or other transportation vehicles. The successful progress of this technology represents a compelling opportunity for the future, as it can address various challenges encountered in logistics, manufacturing, and warehousing. Within these industries, unloading tasks involve physically demanding and repetitive actions that can pose risks for human workers. In this regard, robotic unloading offers a way to enhance workers' safety and mitigate hazards associated with heavy lifting and challenging work environments.
In this paper, we investigate robotic unloading tasks from visual observations (see Fig. 1 for an overview of the environment). Specifically, our objective is to enable a robotic manipulator to autonomously unload stacks of parcels by using RGB-D images as primary input source. We formulate this problem as a three-dimensional pick-and-place task, where the parcels, arranged in piles, are picked from the stack and placed on a floor conveyor. Previous studies have addressed pick-and-place by integrating objects' pose estimation [1, 2] with scripted planning and motion control [3]. While these systems demonstrate robustness in structured environments, they are unsuitable for uncertain and unstructured settings, which require improved generalization capabilities. In order to address these challenges, recent years have witnessed a surge of interest in machine learning techniques. In particular, end-to-end Reinforcement Learning (RL) has been successfully used for pick-and-place in [4, 5]. However, end-to-end RL faces difficulties in real-world scenarios due to the large amount of data required to achieve acceptable performance. To improve data efficiency, recent work has integrated RL with object-centric assumptions such as keypoints [6, 7], embeddings [8] or dense descriptors [9, 10]. These representations are typically learned through supervised learning [11], which often involves tedious and expensive data labeling or annotation processes. Another line of research has explored Imitation Learning (IL), also known as learning from demonstrations [12, 13, 14]. Despite achieving promising results, IL remains a supervised learning approach that relies on collecting expert demonstrations, which is akin to collecting labeled data and can be costly, time-consuming, or even infeasible. As a result, the main goal of this paper is to develop a _sample efficient_ controller framework that can learn the robotic unloading task in Fig. 1, _without requiring any form of labeled data_.
Towards addressing this problem, we propose a hierarchical controller structure that separates the robot decision-making process from the low-level module. The decision-making component, named high-level controller, is trained using DRL [15, 16] and more specifically Deep Q-Learning (DQL) [17] from RGB images. Meanwhile, the low-level module relies on classical trajectory planning and motion control techniques. Within this framework, our work makes two main contributions.
From an algorithmic perspective, our main novelties lie in the high-level controller, aiming to improve the sample efficiency of our DQL pipeline. First, we equip our Deep Q-Networks with a _safety bias mechanism_ with the goal of biasing the decision policy towards safe end-effector configurations. Note that an end-effector configuration is considered safe if it is reachable, i.e., it is contained within the robot workspace. Additionally, we propose a task-specific reward function _which takes into account the verticality_ of our unloading task. In order to test the impact of these mechanisms on learning performance, we conduct an ablation study and show how both these elements are crucial to accomplish improved results in our task.
Our second contribution involves the development of a simulated environment using the PyBullet physics simulator [18]. Simulators play a crucial role in prototyping RL algorithms as they offer a cost-effective and risk-free testing
environment. This aspect is particularly valuable for industry-oriented tasks like robotic unloading, which are challenging to replicate in a research setting. Having a simulator available becomes essential in facilitating new research in this domain. Therefore, we provide open access to our environment, with the goal of establishing an interesting benchmark for future studies in this field.
The remainder of the paper is organized as follows: Section II provides a summary of the work related to robotic unloading. Section III introduces notation and background on RL. Section IV provides a detailed description of the simulation environment. Section V presents the hierarchical controller and outlines the algorithm used to train the high-level controller. Finally, Section VI presents our experimental results and Section VII concludes the paper providing a general discussion on our findings.
## II Related Work
In the following, we briefly review the most relevant work on robotic unloading. One of the earliest studies in this field is presented in [19], which offers an overview of the main technical challenges associated with automatic unloading. This study proposes solutions based on a 3D laser scanner perception system and scripted trajectories.
In recent years, there has been a significant focus on leveraging deep learning (DL) techniques to enhance perception systems tailored to robotic unloading tasks. Papers such as [20, 21] formulate algorithms for accurate object detection and parcel segmentation within the context of robotic unloading. However, these studies primarily concentrate on perception and do not address the decision-making and control aspects of the robot.
Other studies have explored the integration of parcel segmentation with robotic control problems. In [22] for instance, perception, planning, and control are individually addressed and subsequently combined within a unified framework. In particular, a RGB-D camera is attached to the robot gripper for perception, and segmentation models are utilized to identify the safest object in the scene. A customized motion planning framework is then employed for control. Similarly, in [23], a 3D vision processing algorithm is introduced for parcel segmentation, enabling on-the-fly determination of multiple gripping strategies.
Regarding the decision-making problem, papers such as [24, 25] introduce a reasoning framework based on decision trees for optimal unloading, which is then combined with motion planning. However, these papers do not explicitly address the perception problem. Similarly, in [26], a Q-learning-based algorithm is proposed for optimal decision-making, assuming accurate detection, perfect stacking, and uniform-sized boxes.
Compared to these studies, our research takes a comprehensive approach to the unloading problem. We explore controllers that integrate perception, decision-making, and motion control as an interconnected system and treat this interconnection as a unified problem.
## III Preliminaries
Unless indicated otherwise, we use uppercase letters (e.g., \(S_{t}\)) for random variables, lowercase letters (e.g., \(s_{t}\)) for values of random variables, script letters (e.g., \(\mathcal{S}\)) for sets and reference frames, bold lowercase letters (e.g., \(\mathbf{\theta}\)) for vectors and bold uppercase letters (e.g. \(\mathbf{H}\)) for matrices. We denote expectation as \(\mathbb{E}[\cdot]\).
Our decision-making process is modeled as a finite-horizon discounted Markov Decision Process (MDP) described by the tuple \((\mathcal{S},\mathcal{A},\mathcal{T},\mathcal{R},\rho_{0},\gamma)\), where \(\mathcal{S}\) is the set of states and \(\mathcal{A}\) is the set of actions. \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\to P(\mathcal{S})\) is the transition probability function where \(P(\mathcal{S})\) denotes the space of probability distributions over \(\mathcal{S}\), \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) is the reward function which maps state-action pairs to scalar rewards, \(\rho_{0}\in P(\mathcal{S})\) is the initial state distribution, and \(\gamma\in[0,1)\) the discount factor. We define the agent as a stationary policy \(\pi:\mathcal{S}\to P(\mathcal{A})\), where \(\pi(a|s)\) is the probability of taking action \(a\) in state \(s\). When a function is parameterized with parameters \(\mathbf{\theta}\in\Theta\subset\mathbb{R}^{k}\) we write \(\pi_{\mathbf{\theta}}\).
_Reinforcement learning._ Given an MDP and a stationary policy \(\pi:\mathcal{S}\to P(\mathcal{A})\), the RL objective is to learn an optimal policy, \(\pi^{\star}\), which maximizes the expected total discounted reward
\[J(\pi)=\mathbb{E}_{\pi}\Big{[}\sum_{t=0}^{T}\gamma^{t}\mathcal{R}(s_{t},a_{t})\Big{]}, \tag{1}\]
where \(\tau=(s_{0},a_{0},s_{1},a_{1},\ldots,a_{T-1},s_{T})\) are trajectories sampled according to \(s_{0}\sim\rho_{0}\), \(a_{t}\sim\pi(\cdot|s_{t})\) and \(s_{t+1}\sim\mathcal{T}(\cdot|s_{t},a_{t})\), and \(T\) is the number of decision steps in a single episode. Note that neither the transition probability function \(\mathcal{T}\) nor the reward function \(\mathcal{R}\) is available to the learning agent. Therefore, the agent interacts with the environment, collects transitions \((s_{t},a_{t},r_{t},s_{t+1})\), where \(r_{t}=\mathcal{R}(s_{t},a_{t})\), and exploits these transitions to estimate \(J(\pi)\) in (1) in order to learn \(\pi^{*}\) as \(\pi^{*}\in\arg\max_{\pi}J(\pi)\).
Fig. 1: Overview of our robotic unloading setup. Fig. 0(a) illustrates a snapshot of the visual observations available to the robot for the decision-making process. These observations are captured by a camera positioned on the top-left side of the robot and consist of \(720\times 1280\) high-resolution RGB-D images. Fig. 0(b) presents an overview of the unloading task. Stacks of parcels are positioned in front of an industrial KUKA KR70 robotic manipulator equipped with a suction gripper. Using the visual information depicted in Fig. 0(a), the robot has to select a parcel to pick, grasp it, and then place it on the ground on its right side. The parcels in the scene are randomized in terms of color within the brown spectrum. A video demonstrating the task is available in the Supplementary Materials.
Well-known algorithms solve the RL problem by estimating, and optimizing, the value functions induced by the policy \(\pi\). We define the state value function as \(V^{\pi}(s)=\mathbb{E}_{\tau}[\sum_{t=0}^{T}\gamma^{t}\mathcal{R}(s_{t},a_{t})|S_{0}=s]\) and the state-action value function as \(Q^{\pi}(s,a)=\mathbb{E}_{\tau}[\sum_{t=0}^{T}\gamma^{t}\mathcal{R}(s_{t},a_{t})|S_{0}=s,A_{0}=a]\). Furthermore, \(V^{\pi}(s)=\mathbb{E}_{a\sim\pi(\cdot|s)}[Q^{\pi}(s,a)]\). Note that the optimal policy \(\pi^{*}\in\arg\max_{\pi}J(\pi)\) induces the optimal state value function
\[V^{*}(s)=\max_{\pi}V^{\pi}(s),\quad\forall\,s\in\mathcal{S}.\]
Therefore, given \(V^{*}(s)\) and assuming \(\mathcal{T}\) is known, we can retrieve \(\pi^{*}(s)\) for all \(s\in\mathcal{S}\) as the action \(a\in\mathcal{A}\) that leads to the highest expected return \(\mathbb{E}_{s_{t+1}\sim\mathcal{T}(\cdot|s_{t},a)}[V^{*}(s_{t+1})]\) for all \(s_{t}\in\mathcal{S}\).
In the RL setting, where \(\mathcal{T}\) is unknown, Temporal Difference (TD) methods [15] compute \(V^{*}(s)\) and \(\pi^{*}(s)\) by leveraging the following iterative updating rule for \(Q^{\pi}(s_{t},a_{t})\):
\[Q^{\pi}(s_{t},a_{t})\gets Q^{\pi}(s_{t},a_{t})+\alpha(Y-Q^{\pi}(s_{t},a_{t })), \tag{2}\]
where \(\alpha\) is a learning rate and \(Y\) is a target value as in a standard regression problem.
Given a set of transitions \((s_{t},a_{t},r_{t},s_{t+1},a_{t+1})\) generated by the interactions of the policy \(\pi\) with the environment, setting \(Y=r_{t}+\gamma Q^{\pi}(s_{t+1},a_{t+1})\) yields the on-policy update typical of SARSA [15]. On the other hand, setting \(Y=r_{t}+\gamma\max_{a}Q^{\pi}(s_{t+1},a)\) yields the off-policy update used in \(Q\)-learning [15]. Provided the optimal state-action value function \(Q^{*}(s,a)\), then \(V^{*}(s)=\max_{a}Q^{*}(s,a)\) and \(\pi^{*}(s)\in\arg\max_{a}Q^{*}(s,a)\) for all \(s\in\mathcal{S}\).
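As an illustration of the updating rule (2) with the two targets above, consider the following tabular sketch; it is only meant to make the background concrete, since our pipeline uses deep function approximation rather than a table.

```python
import numpy as np

def td_update(Q, s, a, r, s_next, a_next=None, alpha=0.1, gamma=0.99):
    """One temporal-difference update of a tabular Q array, following Eq. (2).

    If a_next is provided, the target is the on-policy SARSA target
    r + gamma * Q[s_next, a_next]; otherwise the off-policy Q-learning target
    r + gamma * max_a Q[s_next, a] is used.
    """
    if a_next is not None:
        target = r + gamma * Q[s_next, a_next]   # SARSA
    else:
        target = r + gamma * np.max(Q[s_next])   # Q-learning
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```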
## IV Simulated Environment
In the following we provide a detailed description of our robotic unloading task. The environment is summarized in Fig. 1 and is based on the PyBullet physics simulator [18]. The simulated environment can be accessed at our GitHub repository1.
Footnote 1: [https://github.com/VittorioGiammarino/RL-for-unloading-from-pixels](https://github.com/VittorioGiammarino/RL-for-unloading-from-pixels)
_Agent._ The decision agent controls an industrial 6-axis KUKA KR70 robotic manipulator. The robot inertial frame corresponds to the world frame and is denoted with \(\mathcal{I}\). We refer to the end-effector pose with the homogeneous transformation \(\mathbf{H}_{\mathcal{I}\mathcal{E}}\), where \(\mathcal{E}\) is the robot end-effector frame. The robot end-effector is a suction gripper with a contact sensor which enables the contact detection between the gripper and the other objects of the scene. The unloading task relies on the visual observations illustrated in Figure 0(a) and 0(b). These images are captured by an RGB-D camera positioned at the top-left side of the robot, providing \(720\times 1280\) high-resolution images. The intrinsic and extrinsic parameters of the camera are known, and along with the depth image, are used to transform from the 2D pixel space to the robot inertial frame \(\mathcal{I}\)[27]. Additionally, the agent is provided with the robot workspace boundaries in \(\mathcal{I}\), denoted with \(W_{\mathcal{I}}\).
_Task._ For each episode, our task involves unloading a stack of \(42\) parcels, arranged as shown in Fig. 1. Each parcel is a cube with \(0.25m\) edge length and a surface color randomized within the brown spectrum. At each decision step, the agent is required to select a parcel, grasp it, and then place it on the ground, where a floor conveyor will complete the unloading. The complexity of this problem, compared to more classical pick-and-place or rearrangement tasks, arises from the sequential nature of the decision-making process and the limited margin of error allowed by the task. In this regard, Fig. 2 illustrates two episodes generated by different decision policies. In the upper row, a successful unloading sequence is depicted, where the agent successfully unloads all the \(42\) parcels. Conversely, the lower row shows an unsuccessful unloading sequence, where a single decision has a negative impact on the subsequent scene, ultimately compromising the agent's final outcome. Note that the simulation environment is modelled as a finite-horizon MDP with \(T=42\), i.e., the number of decision steps is equivalent to the number of initialized parcels at \(t=0\). When a mistake is made, as shown in Fig. 2 (lower), we remove the \(n\) parcels that land outside the agent's workspace \(W_{\mathcal{I}}\), and the agent incurs a time penalty of \(n\), i.e. \(t\gets t+n\) rather than \(t\gets t+1\). Similarly, when the agent attempts a pick outside of \(W_{\mathcal{I}}\), a parcel is removed from the top of the stack and \(t\gets t+1\). In other words, the agent is forced to skip a step. This ensures \(T=42\) for all the episodes, preventing infinite loops during the training process. Note that infinite loops, like repeatedly choosing parcels outside of \(W_{\mathcal{I}}\), tend to occur frequently in the initial stages of the learning process and can prevent progress if not addressed.
## V Methods
In this section, we introduce our hierarchical controller framework, consisting of a high-level decision-making module and a low-level trajectory planning and motion control module (cf. Fig. 3). Moreover, we provide a comprehensive description of our DQL pipeline, used to train the agent's decision-making policy.
_High-level controller._ Given a visual observation \(s_{t}\in\mathcal{S}\), the goal of the high-level controller is to provide a corresponding picking end-effector pose, denoted by the homogeneous transformation \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pick}}\), which maximizes the expected number of unloaded parcels over the episode. Hence, at each decision step \(t\), we want
\[f(s_{t})\rightarrow\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pick}},\quad t\in[0, T). \tag{3}\]
The function \(f\) in (3) is defined as a composition of two main steps. In the first step, a policy \(\pi_{\mathbf{\theta}}:\mathcal{S}\to P(\mathcal{A})\) takes a \(64\times 64\) RGB image of the scene and selects a single pixel \(a_{t}=(u,v)_{t}\). The \(64\times 64\) image is a cropped and
resized version of the original \(720\times 1280\) image in Fig. 0(a). Note that the original image is cropped assuming prior knowledge of the area containing the stack of parcels during the unloading process. Using an aligned depth image, the selected \(a_{t}=(u,v)_{t}\) is transformed into Cartesian coordinates, representing the picking end-effector position in \(\mathcal{I}\) denoted with \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a_{t})\). The picking position is then associated with a rotation matrix \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\), corresponding to the picking end-effector orientation. Picking position \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) and orientation \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) yield the picking end-effector pose \(\mathbf{H}_{\mathcal{IE}}^{\text{pick}}(a_{t})\), where, compared to the notation in (3), we explicitly state the dependency on \(a_{t}=(u,v)_{t}\). In this work, we assume a fixed end-effector orientation \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\), which is computed to be orthogonal to the plane of the stack front surface (the visible surface in Fig. 0(a), 2 and 3). It is important to note that \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) and \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) are interdependent choices. In our setting, we condition \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) on \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\), i.e., \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) is learned given \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\). The alternative setting, where \(\mathbf{C}_{\mathcal{IE}}^{\text{pick}}(a_{t})\) is conditioned on \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a_{t})\), is left for future work.
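A minimal sketch of this pixel-to-pose step is given below; the intrinsic matrix \(K\) and the camera-to-inertial transform are assumed to be available, and the names and pixel conventions are illustrative rather than those of our implementation.

```python
import numpy as np

def pixel_to_pick_position(u, v, depth, K, H_I_C):
    """Back-project a selected pixel a = (u, v) to a picking position in frame I.

    K: 3x3 camera intrinsic matrix; depth: depth (metres) at (u, v);
    H_I_C: 4x4 homogeneous transform from the camera frame to the inertial frame I.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Point expressed in the camera frame (pinhole model).
    p_cam = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth, 1.0])
    # Express it in the inertial frame via the extrinsics.
    return (H_I_C @ p_cam)[:3]
```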
_Learning algorithm._ The picking policy \(\pi_{\mathbf{\theta}}\) is learned via DQL [17]. We define \(K\) critic networks as \(Q_{\mathbf{\theta}_{k}}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\), parameterized by a \(43\)-layer encoder-decoder residual network (ResNet) [28] where the same encoder is shared across all the critic networks.
The critic networks output picking confidence values over the pixel space and are trained to minimize:
\[\mathcal{L}_{\mathbf{\theta}_{k}}(\mathcal{B}) =\mathbb{E}_{(s_{t},a_{t},r_{t},s_{t+1})\sim\mathcal{B}}[(y_{t}-Q_{\mathbf{\theta}_{k}}(s_{t},a_{t}))^{2}], \tag{4}\] \[y_{t} =r_{t}+\gamma\max_{a}\min_{k}Q_{\mathbf{\bar{\theta}}_{k}}(s_{t+1},a), \tag{5}\]
where \(\mathcal{B}\) is a replay buffer (cf. [17]) and \(\mathbf{\bar{\theta}}_{k}\) are the slow moving weights for the target critic networks [29]. We use multiple critics and target networks to address the optimistic bias introduced by function approximation [30, 31]. Moreover, we define two different reward functions:
\[\mathcal{R}_{b}(s,a) =\begin{cases}1,\text{ if picking succeeds},\\ 0,\text{ otherwise},\end{cases} \tag{6}\] \[\mathcal{R}_{v}(s,a) =\begin{cases}1+\lambda\cdot\hat{z}_{\mathcal{IE}}^{\text{pick}} (a),\text{ if picking succeeds},\\ 0,\text{ otherwise},\end{cases}\]
where \(\lambda\) is a scalar hyperparameter, and \(z_{\mathcal{IE}}^{\text{pick}}(a)\) is the \(z\)-coordinate in \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a)\) where the \(z\)-axis represents the up direction in \(\mathcal{I}\). Both the functions in (6) provide rewards when picking succeeds. Moreover, \(\mathcal{R}_{v}(s,a)\) adds a bonus proportional to the \(z\)-coordinate, i.e., the height, of the picking position. This bonus incentivizes the unloading of parcels while they are in the upper part of the stack. In other words, it disincentivizes behaviors which can lead to the collapse of the stack as in the lower row episode in Fig. 2. We experimentally show that \(\mathcal{R}_{v}(s,a)\) remarkably improves performance when compared to \(\mathcal{R}_{b}(s,a)\) in our task.
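The two reward functions in (6) reduce to a few lines of code; in the sketch below the height bonus is written with an assumed normalisation by a maximum stack height, which is one possible reading of \(\hat{z}_{\mathcal{IE}}^{\text{pick}}(a)\).

```python
def reward_b(pick_succeeded):
    """Binary reward R_b of Eq. (6)."""
    return 1.0 if pick_succeeded else 0.0

def reward_v(pick_succeeded, z_pick, z_max=1.0, lam=2.0):
    """Height-shaped reward R_v of Eq. (6): a sketch in which the picking height
    z_pick is normalised by an assumed maximum stack height z_max."""
    if not pick_succeeded:
        return 0.0
    return 1.0 + lam * (z_pick / z_max)
```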
The loss in (4)-(5) is computed by sampling interactions \((s_{t},a_{t},r_{t},s_{t+1})\) from a replay buffer \(\mathcal{B}\). During training, the agent interacts with the environment according to the exploration policy summarized in Algorithm 1. The safety bias layer, named Mask in Algorithm 1, is defined as
\[\text{Mask}(Q_{\mathbf{\bar{\theta}}_{k}}(s,a))=\begin{cases}Q_{\mathbf{ \bar{\theta}}_{k}}(s,a)+b,\quad\text{if }\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a)\in W_{\mathcal{I}},\\ Q_{\mathbf{\bar{\theta}}_{k}}(s,a),\quad\text{otherwise},\end{cases} \tag{7}\]
where \(W_{\mathcal{I}}\) is the robot workspace and \(b\) is a positive safety bias treated as a hyperparameter. The main goal of Mask in (7) is to bias the policy towards the actions with a corresponding \(\mathbf{p}_{\mathcal{IE}}^{\text{pick}}(a)\in W_{\mathcal{I}}\). We experimentally show that this layer becomes crucial to improve performance and efficiency in our task. During evaluation, given \(s\), the actions are generated following \(a\in\arg\max_{a}\text{Mask}(\min_{k}Q_{\mathbf{\bar{\theta}}_{k}}(s,a))\), where we use Mask in (7) as an additional safety measure. We provide a summary of the full algorithm in the Appendix.
Fig. 3: Summary of our hierarchical controller. The high-level controller selects a pixel \(a=(u,v)\) from an observation and computes a picking pose denoted with \(\mathbf{H}_{\mathcal{IE}}^{\text{pick}}(a)\). This pose is passed to a planner, which generates a trajectory of Cartesian waypoints. The low-level controller transforms this Cartesian trajectory into a joint space trajectory solving an inverse kinematics problem. The resulting trajectory serves as a reference for the PD controller of the actuators.
Fig. 2: The evolution of the scene is illustrated in two different episodes. In the upper row episode, the agent follows an optimal decision policy, successfully unloading all the parcels. In the lower row episode, the policy is suboptimal and a single wrong decision impacts the subsequent organization of the scene, affecting the agent’s final outcome and undermining the entire unloading process.
```
Input: \(s\), \(Q_{\mathbf{\bar{\theta}}_{k}}\), \(\epsilon\), Mask, \(W_{\mathcal{I}}\), \(\mathcal{U}\), \(\sigma\): state (\(64\times 64\) RGB image), target critic networks, exploration probability, safety bias layer, robot workspace, uniform distribution, and softmax function.
begin ExplorationAction[\(s\)]
    \(u\sim\mathcal{U}(0,1)\)
    if \(u\leq\epsilon\) then
        \(a\sim\mathcal{U}(a_{W_{\mathcal{I}}})\), where \(a_{W_{\mathcal{I}}}\) denotes the actions \(a\) such that \(\mathbf{p}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a)\in W_{\mathcal{I}}\)
    else
        \(a\sim\sigma(\text{Mask}(\min_{k}Q_{\mathbf{\bar{\theta}}_{k}}(s,a)))\)
    return \(a\)
```
**Algorithm 1** Exploration Policy
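A rough sketch of this exploration step over the pixel grid, combining the minimum over the \(K\) target critics, the safety bias layer of (7) and a softmax draw, could look as follows; array shapes and helper names are illustrative.

```python
import numpy as np

def exploration_action(q_maps, in_workspace, eps, bias=100.0,
                       rng=np.random.default_rng()):
    """Sketch of Algorithm 1 over the pixel grid.

    q_maps: (K, H, W) outputs of the K target critics;
    in_workspace: boolean (H, W) mask, True where p_pick(a) lies in W_I;
    bias: the positive safety bias b added by Mask in Eq. (7).
    """
    if rng.random() < eps:
        safe = np.argwhere(in_workspace)            # uniform draw among safe pixels
        return tuple(safe[rng.integers(len(safe))])
    q_min = q_maps.min(axis=0)                      # min over the K critics
    masked = q_min + bias * in_workspace            # safety bias layer, Eq. (7)
    probs = np.exp(masked - masked.max())
    probs /= probs.sum()                            # softmax over all pixels
    idx = rng.choice(masked.size, p=probs.ravel())
    return np.unravel_index(idx, masked.shape)
```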
_Low-level module._ The low-level module comprises a trajectory planner and a low-level controller. The planner receives \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a_{t})\) from the high-level controller and provides a set of \(5\) end-effector waypoints to the low-level controller. This set of waypoints consists of: \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pre-pick}}\), \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{post-pick}}\), \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pre-place}}\), \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{place}}\), \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{out-of-camera}}\). Specifically, \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{place}}\) represents the placing pose, which is assumed to be known throughout the task. \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{out-of-camera}}\) is the pose required for an unoccluded image of the scene, as shown in Fig. 0(a). \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pre-pick}}\) is the pre-picking pose, \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{post-pick}}\) the post-picking pose and \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pre-place}}\) the pre-placing pose. Between \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pre-pick}}\) and \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a_{t})\), the gripper is activated as soon as a contact with a parcel is detected, after which the end-effector is moved towards \(\mathbf{H}_{\mathcal{I}\mathcal{E}}^{\text{pre-place}}\).
Provided a trajectory of Cartesian waypoints, the low-level controller transforms this trajectory from Cartesian to joint space by solving an inverse kinematics optimization problem. The joint space trajectory is then used as a reference for the PD controller of the actuators.
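As a sketch of how a single Cartesian waypoint could be tracked in a PyBullet environment, the snippet below combines inverse kinematics with the built-in position (PD) controller; the joint indexing and gains are assumptions to be adapted to the actual KR70 model, and the calls must run inside a connected PyBullet session.

```python
import pybullet as p

def track_waypoint(robot_id, ee_link, target_pos, target_orn, joint_ids, gain=0.3):
    """Solve IK for one end-effector waypoint and track it in joint space.

    Assumes the first len(joint_ids) movable joints returned by the IK solver
    correspond to the arm joints of interest.
    """
    joint_targets = p.calculateInverseKinematics(robot_id, ee_link,
                                                 target_pos, target_orn)
    p.setJointMotorControlArray(robot_id, joint_ids,
                                controlMode=p.POSITION_CONTROL,
                                targetPositions=joint_targets[:len(joint_ids)],
                                positionGains=[gain] * len(joint_ids))
```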
## VI Experiments
All our experiments focus on the unloading problem described in Section IV. We evaluate four different versions of our DQL algorithm as summarized in Table I. It is important to note that \(\mathcal{R}_{v}\) in (6) and Mask in (7) each contain an important hyperparameter, \(\lambda\) and \(b\) respectively, which are kept fixed throughout our experiments. In particular, we set \(\lambda=2\) and \(b=100\). This choice for \(\lambda\) takes into account the maximum height of the stack and ensures an appropriate trade-off between the two factors in \(\mathcal{R}_{v}\). As for \(b\), its optimal value depends on the initialization of the critic networks' weights and on the considered reward function. The experiments depicted in Fig. 4 show that \(b=100\) yields good empirical results for all the algorithms in which Mask is used.
As commonly done in the RL literature, in our experiments we randomize training and evaluation episodes from the same distribution. The obtained final results are summarized in Fig. 4 where the average normalized performance over \(6\) random seeds is illustrated. Specifically, the left figure shows the average number of successful picks, and the right figure shows the number of attempted picks with \(\mathbf{p}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a)\notin W_{\mathcal{I}}\). All the curves are normalized with respect to \(42\), i.e., the maximum number of parcels per episode. These results demonstrate that both Mask and \(\mathcal{R}_{v}\) remarkably improve performance, in particular when they are jointly used during training. Specifically, by using \(\mathcal{R}_{v}\) rather than \(\mathcal{R}_{b}\) in (6), the agent receives a more accurate feedback about the characteristics of the task and requires fewer interactions to achieve better results. Furthermore, by using Mask, we are able to effectively reduce the number of attempted picks in which \(\mathbf{p}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a)\notin W_{\mathcal{I}}\). This leads to an improved exploration strategy, as the agent mainly focuses on viable actions that lead to \(\mathbf{p}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a)\in W_{\mathcal{I}}\). Conversely, when Mask is not used, the number of actions with \(\mathbf{p}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a)\notin W_{\mathcal{I}}\) increases, leading to less effective exploration strategies and slower learning rate.
In Fig. 5, we show, for each version of our algorithm, the seed leading to the best maximum performance among the seeds averaged in Fig. 4. These experiments more clearly emphasize the effect of Mask and \(\mathcal{R}_{v}\) on improving our final results. In Fig. 5, our best policy, trained with _Mask-on, v-reward_ (cf. Table I), achieves \(99\%\) picking success over \(3\) full evaluation episodes.
The training curves in Fig. 4 and Fig. 5 are both summarized by the box plots in Fig. 6. Additional results and all the used hyperparameters are provided in the Appendix. We refer to our GitHub repository for more implementation details.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & _Mask-off_ & _Mask-off, v-reward_ & _Mask-on_ & _Mask-on, v-reward_ \\ \hline Mask & ✗ & ✗ & ✓ & ✓ \\ \(\mathcal{R}_{v}\) & ✗ & ✓ & ✗ & ✓ \\ \hline \hline \end{tabular}
\end{table} TABLE I: A summary of the different algorithms tested in our experiments. Mask denotes the use of the safety bias layer in (7), while \(\mathcal{R}_{v}\) denotes the use of \(\mathcal{R}_{v}\) rather than \(\mathcal{R}_{b}\) in (6).
Fig. 4: Ablation experiments. (Left) the average normalized number of successful picks, which represents the number of parcels successfully unloaded. (Right) the average normalized number of picks attempted with \(\mathbf{p}_{\mathcal{I}\mathcal{E}}^{\text{pick}}(a)\notin W_{\mathcal{I}}\). Both results are averaged over \(6\) seeds and the shaded area represents the standard deviation over seeds. For each seed, we randomly initialize the critic networks and train for \(10^{5}\) steps. We evaluate the learned policy every \(10\) training episodes, i.e., \(420\) steps, using average performance over \(3\) episodes. The characteristics of the tested algorithms are summarized in Table I.
## VII Conclusion
In this study, we tackle the problem of robotic unloading from visual observations and propose a hierarchical controller that does not require any labeled data. Our hierarchical controller, depicted in Fig. 3, consists of a high-level controller responsible for decision-making and a low-level module for trajectory planning and low-level control. Optimal decision-making is learned from RGB images using DQL and is converted into an optimal end-effector pose by leveraging an aligned depth image. In our DQL algorithm, we introduce a safety bias mechanism and a task-specific reward function which prove to be essential in order to improve our experimental results. Finally, we develop and make publicly available a simulated environment, with the goal of providing a benchmark for future studies in this field.
_Limitations and future work._ Despite the advancements in addressing robotic unloading, it is important to understand the limitations of our approach. In the high-level controller, the main issue involves the training stability of our DQL algorithm. This problem is evident in Fig. 5, where the training progress does not show a monotonic trend. Furthermore, this instability is the main reason for the slow rate of improvement illustrated in Fig. 4. We consider this to be a crucial challenge for optimal unloading policy learning, representing a significant area for future research.
Regarding the simulated environment, we consider it a valuable benchmark for testing RL-based methods in unloading tasks. This perspective is substantiated by the results in Fig. 4 and Fig. 5, where vanilla RL algorithms often struggle to succeed, as evidenced in the _Mask-off_ case. Moreover, prior research has addressed unloading tasks similar to what we present in our study [24, 26]. However, we acknowledge that our current simulation does not encompass all the potential variables encountered in real-world unloading scenarios. As a result, our future efforts will focus on improving our simulated environment in order to introduce more randomized settings, where the parcels can have different textures, sizes and shapes. We emphasize this as an important research avenue towards improving performance in real-world scenarios, where parcel configurations are usually messy and uncertain.
Looking ahead, we are also considering the prospect of directly applying our training solutions to real hardware in future research endeavors. This goal presents its own set of challenges, especially in tasks of this nature, where devising methods that ensure minimal human intervention and safety becomes of crucial importance.
|
2309.09512 | Extrinsic nonlinear Kerr rotation in topological materials under a
magnetic field | Topological properties in quantum materials are often governed by symmetry
and tuned by crystal structure and external fields, and hence
symmetry-sensitive nonlinear optical measurements in a magnetic field are a
valuable probe. Here we report nonlinear magneto-optical second harmonic
generation (SHG) studies of non-magnetic topological materials including
bilayer WTe2, monolayer WSe2 and bulk TaAs. The polarization-resolved patterns
of optical SHG under magnetic field show nonlinear Kerr rotation in these
time-reversal symmetric materials. For materials with three-fold rotational
symmetric lattice structure, the SHG polarization pattern rotates just slightly
in a magnetic field, whereas in those with mirror or two-fold rotational
symmetry the SHG polarization pattern rotates greatly and distorts. These
different magneto-SHG characters can be understood by considering the
superposition of the magnetic field-induced time-noninvariant nonlinear optical
tensor and the crystal-structure-based time-invariant counterpart. The
situation is further clarified by scrutinizing the Faraday rotation, whose
subtle interplay with crystal symmetry accounts for the diverse behavior of the
extrinsic nonlinear Kerr rotation in different materials. Our work illustrates
the application of magneto-SHG techniques to directly probe nontrivial
topological properties, and underlines the importance of minimizing extrinsic
nonlinear Kerr rotation in polarization-resolved magneto-optical studies. | Shuang Wu, Zaiyao Fei, Zeyuan Sun, Yangfan Yi, Wei Xia, Dayu Yan, Yanfeng Guo, Youguo Shi, Jiaqiang Yan, David H. Cobden, Wei-Tao Liu, Xiaodong Xu, Shiwei Wu | 2023-09-18T06:50:49Z | http://arxiv.org/abs/2309.09512v1 | # Extrinsic nonlinear Kerr rotation in topological materials under a magnetic field
###### Abstract
\({}^{1}\) State Key Laboratory of Surface Physics, Key Laboratory of Micro and Nano Photonic Structures (MOE), and Department of Physics, Fudan University, Shanghai 200433, China.
\({}^{2}\) Department of Physics, University of Washington, Seattle, Washington 98195, USA
\({}^{3}\) School of Physical Science and Technology, and ShanghaiTech Laboratory for Topological Physics, ShanghaiTech University, Shanghai 201210, China
\({}^{4}\) Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
\({}^{5}\) Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee, 37831, USA
\({}^{6}\) Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA
\({}^{7}\) Shanghai Qi Zhi Institute, Shanghai 200232, China
\({}^{8}\) Institute for Nanoelectronic Devices and Quantum Computing, and Zhangjiang Fudan International Innovation Center, Fudan University, Shanghai 200433, China
\({}^{9}\) Shanghai Research Center for Quantum Sciences, Shanghai 201315, China
* Corresponding emails: [email protected] |
2309.12654 | Some extreme value theory for $θ$-expansions | The main aim of this paper is to develop extreme value theory for
$\theta$-expansions. We get the limit distribution of the largest value of
$\theta$-continued fraction mixing stationary stochastic process and some
related results. These are analogous to J.Galambos and W.Philipp theorems for
the regular continued fractions. We also have to note that a Borel-Bernstein
type theorem plays an important role. | Gabriela Ileana Sebe, Dan Lascu | 2023-09-22T06:51:31Z | http://arxiv.org/abs/2309.12654v1 | # Some extreme value theory for \(\theta\)-expansions
###### Abstract
The main aim of this paper is to develop extreme value theory for \(\theta\)-expansions. We get the limit distribution of the largest value of \(\theta\)-continued fraction mixing stationary stochastic process and some related results. These are analogous to J.Galambos and W.Philipp theorems for the regular continued fractions. We also have to note that a Borel-Bernstein type theorem plays an important role.
keywords: \(\theta\)-expansions, Borel-Bernstein type theorem, extreme value theory, Frechet law, \(\psi\)-mixing.
## 1 Introduction
The investigation initiated by Bhattacharya and Goswami [2] in random number generation led to the concept of a continued fraction expansion of a number in terms of an irrational \(\theta\in(0,1)\). This new expansion of positive reals, different from the regular continued fraction (RCF) expansion, is called the \(\theta\)_-expansion_. We mention that the case \(\theta=1\) refers to RCF-expansions.
For a fixed \(\theta\in(0,1)\), Chakraborty and Rao [4] have considered a generalization of the Gauss map \(T_{\theta}:[0,\theta]\rightarrow[0,\theta]\),
\[T_{\theta}(x):=\left\{\begin{array}{ll}\frac{1}{x}-\theta\left[ \frac{1}{x\theta}\right]&\mbox{if $x\in(0,\theta]$,}\\ \\ 0&\mbox{if $x=0$.}\end{array}\right. \tag{1.1}\]
Here \(\left\lfloor\cdot\right\rfloor\) stands for integer part. Then every \(x\in(0,\theta)\) can be expanded into a finite or infinite \(\theta\)-expansion
\[x=\frac{1}{a_{1}\theta+\frac{1}{a_{2}\theta+\frac{1}{a_{3}\theta+\ddots}}}=:[a _{1}\theta,a_{2}\theta,a_{3}\theta,\ldots], \tag{1.2}\]
where
\[a_{1}=a_{1}(x):=\left\{\begin{array}{ll}\left\lfloor\frac{1}{x\theta}\right\rfloor &\mbox{if $x\neq 0$,}\\ \infty&\mbox{if $x=0$}\end{array}\right.\]
and
\[a_{n}=a_{n}(x):=a_{1}\left(T_{\theta}^{n-1}(x)\right),\quad n\in\mathbb{N}_{+}: =\left\{1,2,3,\ldots\right\},\]
with \(T_{\theta}^{0}(x)=x\). The positive integers \(a_{n}\in\mathbb{N}_{+}\) are called the _digits_ or _partial quotients_ of \(x\).
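For illustration, the digits can be computed numerically by iterating the map \(T_{\theta}\); the following short sketch assumes \(\theta^{2}=1/m\) and uses floating-point arithmetic, so it is only reliable for moderately many digits.

```python
import math

def theta_digits(x, m, n_digits):
    """First n_digits partial quotients of the theta-expansion of x in (0, theta),
    for theta = 1/sqrt(m), obtained by iterating T_theta."""
    theta = 1.0 / math.sqrt(m)
    digits = []
    for _ in range(n_digits):
        if x == 0.0:
            break
        a = math.floor(1.0 / (x * theta))
        digits.append(a)
        x = 1.0 / x - a * theta          # T_theta(x)
    return digits

# Example: with m = 2 (theta = 1/sqrt(2)), every digit should be >= 2.
print(theta_digits(0.3, m=2, n_digits=8))
```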
Let \(\mathcal{B}_{[0,\theta]}\) denote the \(\sigma\)-algebra of all Borel subsets of \([0,\theta]\). It is obvious that the digits \(a_{n}\), \(n\in\mathbb{N}_{+}\), are random variables which are defined almost surely on \(([0,\theta],\mathcal{B}_{[0,\theta]})\) with respect to any probability measure on \(\mathcal{B}_{[0,\theta]}\) that assigns probability \(0\) to the set of rationals in \([0,\theta]\). An example of such a measure is Lebesgue measure \(\lambda_{\theta}\) on \([0,\theta]\).
It was shown in [4, 11] that this expansion has many of the usual properties of RCFs. A natural question is whether the dynamical system given by the transformation \(T_{\theta}\) admits an absolutely continuous invariant probability like the Gauss measure in the case \(\theta=1\). Chakraborty and Rao [4] have identified, for certain values of \(\theta\) (for example, if \(\theta^{2}=\frac{1}{m}\), \(m\in\mathbb{N}_{+}\)), the invariant measure for the transformation \(T_{\theta}\) as
\[\mathrm{d}\gamma_{\theta}:=\frac{1}{\log\left(1+\theta^{2}\right)}\frac{ \theta\,\mathrm{d}x}{1+\theta x}. \tag{1.3}\]
Moreover, if \(\theta^{2}=\frac{1}{m}\), \(m\in\mathbb{N}_{+}\), \([a_{1}\theta,a_{2}\theta,a_{3}\theta,\ldots]\) is the \(\theta\)-expansion of any \(x\in(0,\theta)\) if and only if the following conditions hold:
1. \(a_{n}\geq m\) for any \(n\in\mathbb{N}_{+}\)
2. in the case when \(x\) has a finite expansion, i.e., \(x=[a_{1}\theta,a_{2}\theta,\ldots,a_{n}\theta]\), then \(a_{n}\geq m+1\).
It was proved in [4] that the dynamical system \(([0,\theta],T_{\theta})\) is ergodic and the measure \(\gamma_{\theta}\) is invariant under \(T_{\theta}\), that is, \(\gamma_{\theta}(A)=\gamma_{\theta}(T_{\theta}^{-1}(A))\) for any \(A\in\mathcal{B}_{[0,\theta]}\). Therefore, \((a_{n})_{n\in\mathbb{N}_{+}}\) is a strictly stationary sequence on \(([0,\theta],\mathcal{B}_{[0,\theta]},\gamma_{\theta})\).
For more results about \(\theta\)-expansions see [5, 10, 11, 12] and references therein.
Every irrational \(x\in(0,\theta)\setminus\mathbb{Q}=:\Omega\) has an infinite \(\theta\)-expansion. Note that for all \(n\in\mathbb{N}_{+}\), \(a_{n}(x)\geq m\) and \(T_{\theta}^{n}([a_{1}\theta,a_{2}\theta,\ldots])=[a_{n+1}\theta,a_{n+2}\theta,\ldots]\).
For all \(n\in\mathbb{N}_{+}\), we call the finite truncation of (1.2)
\[\frac{p_{n}(x)}{q_{n}(x)}=[a_{1}(x)\theta,a_{2}(x)\theta,\ldots,a_{n}(x)\theta]\]
the _\(n\)-th convergent_ of the \(\theta\)-expansion of \(x\).
For every infinite \(\theta\)-expansion \([a_{1}\theta,a_{2}\theta,\ldots]\) the sequences \(\{p_{n}\}_{n\geq-1}\) and \(\{q_{n}\}_{n\geq-1}\) can be obtained by the following recursive relations
\[p_{n}(x) = a_{n}(x)\theta p_{n-1}(x)+p_{n-2}(x), \tag{1.4}\] \[q_{n}(x) = a_{n}(x)\theta q_{n-1}(x)+q_{n-2}(x), \tag{1.5}\]
with \(p_{-1}(x):=1\), \(p_{0}(x):=0\), \(q_{-1}(x):=0\) and \(q_{0}(x):=1\). Then by induction, we have
\[p_{n-1}(x)q_{n}(x)-p_{n}(x)q_{n-1}(x)=(-1)^{n},\quad n\in\mathbb{N}. \tag{1.6}\]
By using (1.4) and (1.5), we can verify that
\[x=\frac{p_{n}(x)+T_{\theta}^{n}(x)p_{n-1}(x)}{q_{n}(x)+T_{\theta}^{n}(x)q_{n-1}(x )},\quad n\geq 1. \tag{1.7}\]
Using (1.6) and (1.7) we obtain
\[\left|x-\frac{p_{n}(x)}{q_{n}(x)}\right|=\frac{1}{q_{n}(x)\left(\left(T_{ \theta}^{n}(x)\right)^{-1}q_{n}(x)+q_{n-1}(x)\right)},\quad n\geq 1. \tag{1.8}\]
Since \(a_{n+1}(x)\theta\leq\left(T_{\theta}^{n}(x)\right)^{-1}\leq(a_{n+1}(x)+1)\theta\), using (1.4) and (1.5) in (1.8) we get
\[\frac{1}{q_{n}(x)(q_{n+1}(x)+\theta q_{n}(x))}\leq\left|x-\frac{p_{n}(x)}{q_{n }(x)}\right|\leq\frac{1}{q_{n}(x)q_{n+1}(x)},\quad n\geq 1. \tag{1.9}\]
From (1.5), we have that \(q_{n}(x)\geq\theta\), \(n\in\mathbb{N}_{+}\). Further, also from (1.5) and by induction we have that
\[q_{n}(x)\geq\left\lfloor\frac{n}{2}\right\rfloor\theta^{2}. \tag{1.10}\]
Finally, from (1.9) and (1.10) it follows that \([a_{1}(x)\theta,a_{2}(x)\theta,\ldots,a_{n}(x)\theta]\to x\) as \(n\to\infty\). Relation (1.9) means that the degree of accuracy of this approximation depends on the growth rate of partial quotients.
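Numerically, the recursions (1.4)-(1.5) and the approximation property can be checked with a short sketch such as the following; it reuses the `theta_digits` sketch above and is again subject to floating-point error.

```python
import math

def theta_convergents(digits, m):
    """Convergents p_n/q_n of a theta-expansion via the recursions (1.4)-(1.5),
    for theta = 1/sqrt(m)."""
    theta = 1.0 / math.sqrt(m)
    p_prev, p = 1.0, 0.0          # p_{-1}, p_0
    q_prev, q = 0.0, 1.0          # q_{-1}, q_0
    convergents = []
    for a in digits:
        p_prev, p = p, a * theta * p + p_prev
        q_prev, q = q, a * theta * q + q_prev
        convergents.append(p / q)
    return convergents

# The errors |x - p_n/q_n| should decrease, in line with the bound (1.9).
m, x = 2, 0.3
print([abs(c - x) for c in theta_convergents(theta_digits(x, m, 8), m)])
```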
In the case of RCFs, Borel [3] and Bernstein [1] gave a result called the _Borel-Bernstein theorem_ or "\(0-1\)" _law_, describing the growth rate of partial quotients in the sense of Lebesgue measure. Our first result, Theorem 3.1, is an analogue of the Borel-Bernstein theorem for \(\theta\)-expansions. We also show in Section 5 that the Borel-Bernstein type theorem plays an important role in the case of \(\theta\)-expansions.
In Sections 4 and 5 we state some results concerning extreme value theory for \(\theta\)-expansions. These results are new in the sense that they appear not to have been stated elsewhere before.
Extreme value theory for RCF digits first emerged in the 1970's. The results of Galambos [6; 7] concerning the maximal RCF digit have been improved by Philipp [9] to give a complete answer to a conjecture of Erdos.
In Section 4 we derive a Frechet law concerning the partial maxima of the growth rate of the digit sequence. Theorems 4.5 and 4.6 extend previous work of Galambos [6] and Philipp [9] on the asymptotic behavior of the largest digit of RCF-expansions. To get these results we mention that we need the \(\theta\)-continued fraction mixing property, and we also need a condition on the speed of the convergence of mixing.
In Section 5 we give some iterated logarithm results (Theorem 5.2 and Corollary 5.4) for the largest digit of \(\theta\)-expansions.
## 2 Preliminaries
Let us fix \(\theta^{2}=1/m\), \(m\in\mathbb{N}_{+}\). Putting \(\mathbb{N}_{m}:=\{m,m+1,\ldots\}\), \(m\in\mathbb{N}_{+}\), the partial quotients \(a_{n}\), \(n\in\mathbb{N}_{+}\), take positive integer values in \(\mathbb{N}_{m}\).
We now introduce a partition of the interval \([0,\theta]\) which is natural to the \(\theta\)-expansions. Such a partition is generated by the cylinders of rank \(n\). For any \(n\in\mathbb{N}_{+}\) and \(i^{(n)}=(i_{1},\ldots,i_{n})\in\mathbb{N}_{m}^{n}\), define the \(n\)_-th cylinder_ of \(\theta\)-expansion by
\[C\left(i^{(n)}\right)=\{x\in\Omega:a_{k}(x)=i_{k}\text{ for }k=1,\ldots,n\},\]
where \(C\left(i^{(0)}\right)=[0,\theta]\). For any \(i\in\mathbb{N}_{m}\), we have
\[C\left(i\right)=\left\{x\in\Omega:a_{1}(x)=i\right\}=\left(\frac{1}{(i+1)\theta },\frac{1}{i\theta}\right). \tag{2.1}\]
If \(n\in\mathbb{N}_{+}\) and \(i_{n}\in\mathbb{N}_{m}\), we will write
\[C(a_{1},\ldots,a_{n})=C\left(i^{(n)}\right).\]
Next we recall some known results for later use. From the definition of \(T_{\theta}\) and (1.7) we have for any \(n\in\mathbb{N}_{+}\) and \((a_{1},\ldots,a_{n})\in\mathbb{N}_{m}^{n}\),
\[C(a_{1},\ldots,a_{n})=\left\{\begin{array}{ll}\left[\frac{p_{n}}{q_{n}}, \frac{p_{n}+\theta p_{n-1}}{q_{n}+\theta q_{n-1}}\right)&\text{if $n$ is even,}\\ \\ \left(\frac{p_{n}+\theta p_{n-1}}{q_{n}+\theta q_{n-1}},\frac{p_{n}}{q_{n}} \right)&\text{if $n$ is odd.}\end{array}\right. \tag{2.2}\]
Using (1.6) we get
\[\lambda_{\theta}\left(C\left(a_{1},\ldots,a_{n}\right)\right)=\frac{1}{\theta }\left|\frac{p_{n}}{q_{n}}-\frac{p_{n}+\theta p_{n-1}}{q_{n}+\theta q_{n-1}} \right|=\frac{1}{q_{n}(q_{n}+\theta q_{n-1})}=\frac{1}{q_{n}^{2}(1+\theta s_{ n})}, \tag{2.3}\]
where \(s_{n}=\frac{q_{n-1}}{q_{n}}\), \(n\in\mathbb{N}_{+}\) and \(s_{0}=0\). Since \(s_{n}\in[0,\theta]\), it follows from (2.3) that
\[\frac{1}{2q_{n}^{2}}\leq\frac{1}{(1+\theta^{2})q_{n}^{2}}\leq\lambda_{\theta} \left(C\left(a_{1},\ldots,a_{n}\right)\right)\leq\frac{1}{q_{n}^{2}}. \tag{2.4}\]
It is of interest to calculate the approximate proportion of the \(n\)-th level cylinder set \(C\left(a_{1},\ldots,a_{n}\right)\) that is occupied by each of the \((n+1)\)-th level cylinder sets \(C\left(a_{1},\ldots,a_{n},k\right)\). Notice that the endpoints of the interval \(C\left(a_{1},\ldots,a_{n},k\right)\) are \(\frac{p_{n+1}}{q_{n+1}}\) and \(\frac{p_{n+1}+\theta p_{n}}{q_{n+1}+\theta q_{n}}\) with \(p_{n+1}=k\theta p_{n}+p_{n-1}\) and \(q_{n+1}=k\theta q_{n}+q_{n-1}\). So we obtain
\[\frac{p_{n+1}}{q_{n+1}}=\frac{k\theta p_{n}+p_{n-1}}{k\theta q_{n}+q_{n-1}}, \quad\frac{p_{n+1}+\theta p_{n}}{q_{n+1}+\theta q_{n}}=\frac{(k+1)\theta p_{n} +p_{n-1}}{(k+1)\theta q_{n}+q_{n-1}}.\]
Directly computation yields that
\[\lambda_{\theta}\left(C\left(a_{1},\ldots,a_{n},k\right)\right)=\frac{1}{(k \theta q_{n}+q_{n-1})((k+1)\theta q_{n}+q_{n-1})}=\frac{1}{k^{2}q_{n}^{2}\left( \theta+\frac{s_{n}}{k}\right)\left(\left(1+\frac{1}{k}\right)\theta+\frac{s_ {n}}{k}\right)}.\]
By (2.3) it follows that
\[\frac{\lambda_{\theta}\left(C\left(a_{1},\ldots,a_{n},k\right) \right)}{\lambda_{\theta}\left(C\left(a_{1},\ldots,a_{n}\right)\right)} = \frac{q_{n}^{2}(1+\theta s_{n})}{k^{2}q_{n}^{2}\left(\theta+\frac{s _{n}}{k}\right)\left(\left(1+\frac{1}{k}\right)\theta+\frac{s_{n}}{k}\right)}\] \[= \frac{1+\theta s_{n}}{k^{2}\left(\theta+\frac{s_{n}}{k}\right) \left(\left(1+\frac{1}{k}\right)\theta+\frac{s_{n}}{k}\right)}.\]
Since
\[\theta^{2}<\left(\theta+\frac{s_{n}}{k}\right)\left(\left(1+\frac{1}{k}\right) \theta+\frac{s_{n}}{k}\right)<\theta^{2}\left(1+\frac{1}{k}\right)\left(1+\frac {2}{k}\right)<6\theta^{2}<6,\]
for \(k\geq m\), we find that
\[\frac{1}{6k^{2}}<\frac{\lambda_{\theta}\left(C\left(a_{1},\ldots,a_{n},k \right)\right)}{\lambda_{\theta}\left(C\left(a_{1},\ldots,a_{n}\right)\right)}< \frac{1+\theta^{2}}{k^{2}\theta^{2}}=\frac{m+1}{k^{2}}. \tag{2.5}\]
Next, we give some lemmas for later use.
**Lemma 2.1**.: _Let \(k\geq m\), then_
\[\frac{1}{6k^{2}}<\lambda_{\theta}\left(\bigcup_{a_{1},\ldots,a_{n}\geq m}C \left(a_{1},\ldots,a_{n},k\right)\right)<\frac{m+1}{k^{2}}.\]
Proof.: Using (2.5), since \(\sum_{a_{1},\ldots,a_{n}\geq m}\lambda_{\theta}\left(C\left(a_{1}, \ldots,a_{n}\right)\right)=1\), the proof is completed.
We recall the following well-known and extremely useful result.
**Lemma 2.2** (Borel-Cantelli).: _Let \((X,\mathcal{X},\mu)\) be a measurable space. Let \(\{C_{n}\}_{n\geq 1}\) be a sequence of \(\mathcal{X}\)-measurable sets and define the \(\lim-sup\) set_
\[C_{\infty}=\limsup_{n\to\infty}C_{n}=\bigcap_{n\geq 1}\bigcup_{m\geq n}C_{m}= \{x\in X:x\in C_{n}\text{ for infinitely many }n\in\mathbb{N}_{+}\}.\]
_Then, if \(\sum_{n\geq 1}\mu(C_{n})<\infty\), we have that \(\mu(C_{\infty})=0\)._
## 3 A Borel-Bernstein-type theorem
Our first main result is the following theorem.
**Theorem 3.1** (Borel-Bernstein-type theorem).: _Let \(\varphi:\mathbb{N}_{+}\to(0,+\infty)\) be a function and_
\[A_{\varphi}=\{x\in\Omega:a_{n}(x)>\varphi(n)\text{ for infinitely many }n\in\mathbb{N}_{+}\}.\]
_Then we have_
\[\lambda_{\theta}(A_{\varphi})=\left\{\begin{array}{ll}0&\text{ if }\sum_{n \geq 1}\frac{1}{\varphi(n)}<\infty,\\ \\ 1&\text{ if }\sum_{n\geq 1}\frac{1}{\varphi(n)}=\infty.\end{array}\right.\]
Proof.: Let \(A_{n}=\{x\in\Omega:a_{n}(x)>\varphi(n)\}\), thus \(A_{\varphi}=\limsup_{n\to\infty}A_{n}=\bigcap_{j\geq 1}\bigcup_{n\geq j}A_{n}\). By Lemma 2.1, one has
\[\lambda_{\theta}(A_{n})=\lambda_{\theta}\left(\bigcup_{a_{1},\ldots,a_{n-1} \geq m}\bigcup_{k>\varphi(n)}C\left(a_{1},\ldots,a_{n-1},k\right)\right)<\sum _{k\geq\lfloor\varphi(n)\rfloor+1}\frac{m+1}{k^{2}}<\frac{m+1}{\lfloor\varphi( n)\rfloor}<\frac{2(m+1)}{\varphi(n)}.\]
Thus the convergence part of the Borel-Cantelli lemma enables us to conclude that \(\lambda_{\theta}(A_{\varphi})=0\) when \(\sum_{n\geq 1}\frac{1}{\varphi(n)}<\infty\).
Suppose now \(\sum_{n\geq 1}\frac{1}{\varphi(n)}=\infty\). Notice that
\[\lambda_{\theta}(A_{\varphi})=\lambda_{\theta}\left(\bigcap_{j\geq 1}\bigcup_{n \geq j}A_{n}\right)=1\iff\lambda_{\theta}(A_{\varphi}^{c})=\lambda_{\theta} \left(\bigcup_{j\geq 1}\bigcap_{n\geq j}A_{n}^{c}\right)=0.\]
Thus we need only to show \(\lambda_{\theta}\left(\bigcap_{n\geq j}A_{n}^{c}\right)=0\), where
\[A_{n}^{c}=\{x\in\Omega:a_{n}(x)\leq\varphi(n)\}.\]
Let
\[B_{j,\ell}=\bigcap_{j<n\leq j+\ell}A_{n}^{c}.\]
Then
\[\lambda_{\theta}\left(\bigcap_{n\geq j+1}A_{n}^{c}\right)=\lim_{\ell\to\infty }\lambda_{\theta}(B_{j,\ell}).\]
By the definition of \(B_{j,\ell}\), we have
\[B_{j,1}=\bigcup_{a_{1},\ldots,a_{j}\geq m}\bigcup_{m\leq k\leq\varphi(j+1)}C \left(a_{1},\ldots,a_{j},k\right)=\bigcup_{a_{1},\ldots,a_{j}\geq m}\left(C(a_ {1},\ldots,a_{j})\setminus\bigcup_{k>\varphi(j+1)}C\left(a_{1},\ldots,a_{j},k \right)\right).\]
By Lemma 2.1, we obtain that
\[\sum_{k\geq\lfloor\varphi(j+1)\rfloor+1}\frac{\lambda_{\theta}\left(C\left(a _{1},\ldots,a_{j},k\right)\right)}{\lambda_{\theta}\left(C\left(a_{1},\ldots,a _{j}\right)\right)}>\sum_{k\geq\lfloor\varphi(j+1)\rfloor+1}\frac{1}{6k^{2}}> \sum_{k\geq\lfloor\varphi(j+1)\rfloor+1}\frac{1}{6k(k+1)}>\frac{1}{6(\varphi(j +1)+1)}.\]
Hence
\[\lambda_{\theta}(B_{j,1})\leq\sum_{a_{1},\ldots,a_{j}\geq m}\lambda_{\theta} \left(C\left(a_{1},\ldots,a_{j}\right)\right)\cdot\left(1-\frac{1}{6(\varphi( j+1)+1)}\right)=1-\frac{1}{6(\varphi(j+1)+1)}.\]
Since
\[B_{j,\ell+1} = \{x\in\Omega:a_{j+1}\leq\varphi(j+1),\ldots,a_{j+\ell+1}\leq \varphi(j+\ell+1)\}\] \[= \{x\in B_{j,\ell}:a_{j+\ell+1}\leq\varphi(j+\ell+1)\},\] \[\lambda_{\theta}(B_{j,\ell+1})\leq\left(1-\frac{1}{6(\varphi(j+ \ell+1)+1)}\right)\lambda_{\theta}(B_{j,\ell}).\]
By induction,
\[\lambda_{\theta}(B_{j,\ell})\leq\prod_{i=1}^{\ell}\left(1-\frac{1}{6(\varphi( j+i)+1)}\right)\leq\prod_{i=1}^{\ell}\left(1-\frac{1}{12\varphi(j+i)}\right) \leq\exp\left(-\sum_{i=1}^{\ell}\frac{1}{12\varphi(j+i)}\right).\]
Here we use the fact that \(1-x\leq\exp(-x)\) if \(x\geq 0\). One has \(\lim\limits_{\ell\to\infty}\lambda_{\theta}(B_{j,\ell})=0\) by the fact that \(\sum\limits_{n\geq 1}\dfrac{1}{\varphi(n)}=\infty\). Therefore, \(\lambda_{\theta}\left(A_{\varphi}^{c}\right)=0\), which complete the proof.
**Corollary 3.2**.: _For \(\lambda_{\theta}\)- a.e. \(x\in[0,\theta]\), we have that_
\[a_{n}(x)>n\log n\;\;\text{for infinitely many}\;\;n\in\mathbb{N}_{+},\]
_whereas for every \(\varepsilon>0\), we have that_
\[a_{n}(x)<n(\log n)^{1+\varepsilon}\;\;\text{for all sufficiently large}\;\;n\in \mathbb{N}_{+}.\]
Proof.: This follows from Theorem 3.1 immediately on the observation that
\[\sum\limits_{n\geq 1}\dfrac{1}{n\log n}=\infty\;\;\text{and}\;\sum\limits_{n \geq 1}\dfrac{1}{n(\log n)^{1+\varepsilon}}<\infty\]
for all \(\varepsilon>0\).
## 4 The asymptotic behavior of the largest digit in \(\theta\)-expansions
In the sequel we shall use the fundamental facts of the metric theory of \(\theta\)-expansions. One of these facts is that the stochastic process arising from \(\theta\)-expansion digits has the \(\psi\)-mixing property.
**Definition 4.1** (\(\psi\)-mixing).: _Let \((X,\mathcal{X},\mu)\) denote a probability space and let \(\xi_{j}:X\to\mathbb{R}\) denote a stationary sequence of random variables. For any \(k\in\mathbb{N}_{+}\) let \(\mathcal{B}_{1}^{k}=\sigma(\xi_{1},\ldots,\xi_{k})\) and \(\mathcal{B}_{k}^{\infty}=\sigma(\xi_{k},\xi_{k+1},\ldots)\) denote the \(\sigma\)-algebras generated by the random variables \(\xi_{1},\ldots,\xi_{k}\), respectively \(\xi_{k},\xi_{k+1},\ldots\). Then \(\{\xi_{j}\}\) is said to be \(\psi\)-mixing if for any sets \(A\in\mathcal{B}_{1}^{k}\) and \(B\in\mathcal{B}_{k+n}^{\infty}\) we have_
\[|\mu(A\cap B)-\mu(A)\mu(B)|\leq\psi(n)\mu(A)\mu(B),\]
_where \(\psi:\mathbb{N}_{+}\to\mathbb{R}\) is a function for which \(\psi(n)\to 0\) as \(n\to\infty\)._
The random variables \(a_{n}(x)\), \(n\in\mathbb{N}_{+}\), form a stationary sequence due to the invariance of the measure \(\gamma_{\theta}\) with respect to \(T_{\theta}\).
**Lemma 4.2**.: _For all \(n\in\mathbb{N}_{+}\) and \(w\in\mathbb{N}_{m}\),_
\[\gamma_{\theta}(a_{n}(x)\geq w)=\dfrac{1}{\log\left(1+\theta^{2}\right)}\log \left(1+\dfrac{1}{w}\right)=:p_{\theta}(w).\]
Proof.: Using (2.1) and the fact that \((a_{n})_{n\in\mathbb{N}_{+}}\) is a strictly stationary sequence as the transformation \(T_{\theta}\) is measure-preserving with respect to \(\gamma_{\theta}\), we have
\[\gamma_{\theta}(a_{n}(x)\geq w) = \gamma_{\theta}(a_{1}(x)\geq w)=\dfrac{1}{\log\left(1+\theta^{2}\right)}\sum\limits_{k\geq w}\int_{\frac{1}{\theta(k+1)}}^{\frac{1}{\theta k}}\dfrac{\theta\,\mathrm{d}x}{1+\theta x}\] \[= \dfrac{1}{\log\left(1+\theta^{2}\right)}\sum\limits_{k\geq w}\left(\log\left(1+\dfrac{1}{k}\right)-\log\left(1+\dfrac{1}{k+1}\right)\right)\] \[= \dfrac{1}{\log\left(1+\theta^{2}\right)}\log\left(1+\dfrac{1}{w}\right).\]
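As an illustration, Lemma 4.2 can be checked numerically by sampling directly from \(\gamma_{\theta}\). The sketch below assumes the standard \(\theta\)-expansion digit map \(a_{1}(x)=\lfloor 1/(\theta x)\rfloor\) with \(\theta=1/\sqrt{m}\) (as introduced in Section 2 and not repeated here); this convention and the inverse-CDF sampling step are assumptions of the illustration, not part of the proof.

```python
# Monte Carlo check of Lemma 4.2: the empirical frequency of {a_1(x) >= w}
# under gamma_theta should be close to log(1 + 1/w) / log(1 + theta^2).
# Assumption: theta-expansion digit a_1(x) = floor(1/(theta*x)), theta = 1/sqrt(m).
import numpy as np

rng = np.random.default_rng(0)
m = 2                                     # digits take values in {m, m+1, ...}
theta = 1.0 / np.sqrt(m)

# Sample x ~ gamma_theta on [0, theta] by inverting its CDF
# F(x) = log(1 + theta*x) / log(1 + theta^2).
u = rng.random(200_000)
x = ((1.0 + theta**2) ** u - 1.0) / theta
x = np.clip(x, 1e-15, theta)              # guard against u = 0

a1 = np.floor(1.0 / (theta * x))          # first digit a_1(x) >= m

for w in (m, m + 2, m + 10, m + 50):
    empirical = np.mean(a1 >= w)
    exact = np.log(1.0 + 1.0 / w) / np.log(1.0 + theta**2)
    print(f"w={w:3d}  empirical={empirical:.4f}  p_theta(w)={exact:.4f}")
```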
**Lemma 4.3**.: _Let \(\{f_{\theta,n}(x)\}_{n\geq 1}\) be a sequence of functions \(f_{\theta,n}\in C^{2}[0,\theta]\) defined recursively by_
\[f_{\theta,n+1}(x)=\sum_{i\geq m}\left(f_{\theta,n}\left(\frac{1}{\theta i} \right)-f_{\theta,n}\left(\frac{1}{x+\theta i}\right)\right),\quad n\in\mathbb{N}\]
_with \(f_{\theta,0}(0)=0\) and \(f_{\theta,0}(\theta)=1\)._
_Set_
\[g_{\theta,n}(x)=(\theta x+1)f_{\theta,n}^{\prime}(x),\,\,x\in[0,\theta]. \tag{4.1}\]
_Then_
\[\left\|g_{\theta,n}^{\prime}\right\|\leq q_{\theta}^{n}\cdot\left\|g_{\theta, 0}^{\prime}\right\|,\,n\in\mathbb{N}_{+} \tag{4.2}\]
_with_
\[q_{\theta}:=m\left(\sum_{i\geq m}\left(\frac{m}{i^{3}(i+1)}+\frac{i+1-m}{i(i+ 1)^{3}}\right)\right)<1. \tag{4.3}\]
_Here \(\left\|\cdot\right\|\) stands for the supremum norm._
Proof.: Since
\[g_{\theta,n+1}(x)=\sum_{i\geq m}P_{\theta,i}(x)g_{\theta,n}\left(u_{\theta,i}( x)\right),\]
where
\[P_{\theta,i}(x):=\frac{\theta x+1}{(x+\theta i)(x+\theta(i+1))}=\frac{1}{ \theta}\left[\frac{1-\theta^{2}i}{x+\theta i}-\frac{1-\theta^{2}(i+1)}{x+ \theta(i+1)}\right]\]
and
\[u_{\theta,i}(x):=\frac{1}{x+\theta i}\]
we have
\[g_{\theta,n+1}^{\prime}(x)=\sum_{i\geq m}\frac{1-\theta^{2}(i+1)}{(x+\theta i )(x+\theta(i+1))^{3}}f_{\theta,n}^{\prime}(\alpha_{\theta,i})-\sum_{i\geq m} \frac{P_{\theta,i}(x)}{(x+\theta i)^{2}}f_{\theta,n}^{\prime}(u_{\theta,i}(x)),\]
where \(u_{\theta,i+1}(x)<\alpha_{\theta,i}<u_{\theta,i}(x)\). Then
\[\left\|g_{\theta,n+1}^{\prime}\right\|\leq\left\|g_{\theta,n}^{\prime}\right\| \cdot\max_{x\in[0,\theta]}\left(\sum_{i\geq m}\frac{\theta^{2}(i+1)-1}{(x+ \theta i)(x+\theta(i+1))^{3}}+\sum_{i\geq m}\frac{P_{\theta,i}(x)}{(x+\theta i )^{2}}\right).\]
Using that
\[\frac{\theta^{2}(i+1)-1}{(x+\theta i)(x+\theta(i+1))^{3}}\leq m^{2}\frac{ \theta^{2}(i+1)-1}{i(i+1)^{3}}\]
and
\[\sum_{i\geq m}\frac{P_{\theta,i}(x)}{(x+\theta i)^{2}}\leq m^{2}\sum_{i\geq m} \frac{1}{i^{3}(i+1)},\]
we obtain (4.2) and (4.3).
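For concreteness, the contraction constant \(q_{\theta}\) in (4.3) can be evaluated numerically by summing the stated series; the short sketch below (a plain truncation of the series, whose tail decays like \(i^{-3}\)) confirms \(q_{\theta}<1\) for several values of \(m\).

```python
# Numerical evaluation of q_theta from Eq. (4.3) for a few values of m,
# confirming the contraction property q_theta < 1 used in Lemma 4.3.
import numpy as np

def q_theta(m, i_max=1_000_000):
    i = np.arange(m, i_max, dtype=float)
    terms = m / (i**3 * (i + 1.0)) + (i + 1.0 - m) / (i * (i + 1.0)**3)
    return m * terms.sum()        # tail beyond i_max is negligible here

for m in (1, 2, 3, 5, 10):
    print(f"m={m:2d}  q_theta ~ {q_theta(m):.4f}")
```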
The random variables \(a_{n}(x)\), \(n\in\mathbb{N}_{+}\), are not independent. However, they satisfy a \(\psi\)-mixing condition. Actually, the sequence \(\{a_{n}\}_{n\in\mathbb{N}_{+}}\) is \(\psi\)-mixing under \(\gamma_{\theta}\) and the function \(\psi\) vanishes at an exponential rate.
**Lemma 4.4**.: _For any sets \(A\in\mathcal{B}_{1}^{k}=\sigma(a_{1},\ldots,a_{k})\) and \(B\in\mathcal{B}_{k+n}^{\infty}=\sigma(a_{k+n},a_{k+n+1},\ldots)\) we have_
\[|\gamma_{\theta}(A\cap B)-\gamma_{\theta}(A)\gamma_{\theta}(B)|\leq K_{\theta} q_{\theta}^{n}\gamma_{\theta}(A)\gamma_{\theta}(B), \tag{4.4}\]
_where \(0<q_{\theta}<1\) and \(K_{\theta}\) is a positive constant._
Proof.: Let \(C_{k}\) be the \(k\)-th cylinder with endpoints \(\frac{p_{k}}{q_{k}}\) and \(\frac{p_{k}+\theta p_{k-1}}{q_{k}+\theta q_{k-1}}\). Let
\[f_{\theta,n}(x)=\gamma_{\theta}\left(T_{\theta}^{n+k}(\omega)<x\,|\,C_{k} \right):=\frac{\gamma_{\theta}\left(\left(T_{\theta}^{n+k}(\omega)<x\right) \cap C_{k}\right)}{\gamma_{\theta}(C_{k})}\]
be the conditional distribution function of \(T_{\theta}^{n+k}(\omega)\) given \(C_{k}\). Obviously \(\left(\left(T_{\theta}^{k}(\omega)<x\right)\cap C_{k}\right)\) is an interval with endpoints \(\frac{p_{k}}{q_{k}}\) and \(\frac{p_{k}+x\theta p_{k-1}}{q_{k}+x\theta q_{k-1}}\). Thus we obtain
\[f_{\theta,0}(x)=\frac{1}{\gamma_{\theta}(C_{k})}\frac{(-1)^{k}}{\log\left(1+ \theta^{2}\right)}\left(\log\left(1+\theta\frac{p_{k}+x\theta p_{k-1}}{q_{k}+ x\theta q_{k-1}}\right)-\log\left(1+\theta\frac{p_{k}}{q_{k}}\right)\right).\]
If \(g_{\theta,n}\) is defined as in Lemma 4.3 let us put
\[K_{\theta}:=\sup_{x\in[0,\theta]}\left|g_{\theta,0}^{\prime}(x)\right|=\left| \left|g_{\theta,0}^{\prime}\right|\right|.\]
Hence by (4.1) and (4.2)
\[\left|f_{\theta,n}^{\prime}(x)-\frac{\theta}{(\theta x+1)\log\left(1+\theta^ {2}\right)}\right|\leq\frac{\left|g_{\theta,n}(x)-g_{\theta,n}(0)\right|}{ \theta x+1}+\frac{\left|g_{\theta,n}(0)-\frac{\theta}{\log\left(1+\theta^{2} \right)}\right|}{\theta x+1} \tag{4.5}\]
and
\[\left|g_{\theta,n}(x)-g_{\theta,n}(0)\right|=\left|\int_{0}^{x}g_{\theta,n}^{ \prime}(t)\mathrm{d}t\right|\leq\left|\left|g_{\theta,n}^{\prime}\right| \right|\cdot x\leq K_{\theta}q_{\theta}^{n}x.\]
Also, for some \(0<v_{\theta}<1\)
\[1 = f_{\theta,n}(\theta)=\int_{0}^{\theta}f_{\theta,n}^{\prime}(t) \mathrm{d}t=\int_{0}^{\theta}\frac{g_{\theta,n}(t)}{\theta t+1}\mathrm{d}t\] \[= g_{\theta,n}(0)\frac{\log\left(1+\theta^{2}\right)}{\theta}+ \int_{0}^{\theta}\frac{g_{\theta,n}(t)-g_{\theta,n}(0)}{\theta t+1}\mathrm{d}t\] \[= g_{\theta,n}(0)\frac{\log\left(1+\theta^{2}\right)}{\theta}+v_{ \theta}K_{\theta}q_{\theta}^{n}\left(1-\frac{\log\left(1+\theta^{2}\right)}{ \theta^{2}}\right)\]
and so
\[g_{\theta,n}(0)=\frac{\theta}{\log\left(1+\theta^{2}\right)}+v_{\theta}\frac{ K_{\theta}q_{\theta}^{n}}{\theta}\left(1-\frac{\theta^{2}}{\log\left(1+\theta^{2} \right)}\right).\]
Thus, from (4.5)
\[\left|f_{\theta,n}^{\prime}(x)-\frac{\theta}{(\theta x+1)\log\left( 1+\theta^{2}\right)}\right| \leq \frac{K_{\theta}q_{\theta}^{n}}{\theta x+1}+\frac{K_{\theta}q_{ \theta}^{n}}{\theta}\left(\frac{\theta^{2}}{\log\left(1+\theta^{2}\right)}-1 \right)\frac{1}{\theta x+1} \tag{4.6}\] \[< \frac{K_{\theta}q_{\theta}^{n}}{\theta x+1}\frac{\theta}{\log\left( 1+\theta^{2}\right)}. \tag{4.7}\]
Integrating (4.6) over \(F\) we obtain
\[\left|\gamma_{\theta}\left(T_{\theta}^{-(n+k)}(F)\mid C_{k}\ \right)-\gamma_{ \theta}(F)\right|\leq K_{\theta}q_{\theta}^{n}\gamma_{\theta}(F).\]
Since each \(A\in\mathcal{B}_{1}^{k}\) is a countable union of disjoint \(C_{k}\) we obtain (4.4) and thus the proof is complete.
Define
\[L_{N}:=\max_{1\leq n\leq N}a_{n}(x),\quad x\in\Omega.\]
In the sequel we discuss the asymptotic behavior of the largest digit \(L_{N}\).
**Theorem 4.5**.: _For any \(y>0\), we have_
\[\lim_{N\to\infty}\gamma_{\theta}\left(x\in\Omega:L_{N}(x)<\frac{ Ny}{\log\left(1+\theta^{2}\right)}\right)=\exp\left(-\frac{1}{y}\right). \tag{4.8}\]
Proof.: 1_st step_. Let
\[A_{n}=\{x\in\Omega:a_{n}(x)\geq w\},\]
which means
\[\bigcap_{n=1}^{N}A_{n}^{C}=\{x\in\Omega:L_{N}(x)<w\}=:B_{N}.\]
Given that \(B_{N}\) represents the event where none of the \(A_{n}\) occurs, the Poincare identity reveals that
\[\gamma_{\theta}(B_{N})=\sum_{k=0}^{N}(-1)^{k}S_{k} \tag{4.9}\]
with
\[S_{0}=1,\,S_{k}=\sum_{1\leq n_{1}<n_{2}<\ldots<n_{k}\leq N}\gamma_{\theta} \left(A_{n_{1}}\cap\ldots\cap A_{n_{k}}\right).\]
Thus, equation (4.9) provides an expression for the distribution function of \(L_{N}\). By selecting \(w=\left\lfloor\frac{Ny}{\log(1+\theta^{2})}\right\rfloor\), we demonstrate that the tail \(\sum_{k\geq Z}S_{k}\), where \(Z\) is a sufficiently large but fixed value, can be made arbitrarily small.
By repeatedly applying equation (4.4) and referring to Lemma 4.2, we obtain that
\[\gamma_{\theta}\left(A_{n_{1}}\cap\ldots\cap A_{n_{k}}\right)\leq(1+K_{\theta })^{k-1}\gamma_{\theta}(A_{n_{1}})\gamma_{\theta}(A_{n_{2}})\cdot\ldots\cdot \gamma_{\theta}(A_{n_{k}})<(1+K_{\theta})^{k}p_{\theta}^{k}(w). \tag{4.10}\]
For sufficiently large values of \(N\), we obtain
\[w=\left\lfloor\frac{Ny}{\log\left(1+\theta^{2}\right)}\right\rfloor\geq\frac {1}{2}\frac{Ny}{\log\left(1+\theta^{2}\right)}, \tag{4.11}\]
whence
\[p_{\theta}(w)\leq\frac{1}{w\log\left(1+\theta^{2}\right)}\leq\frac{2}{Ny}.\]
Therefore
\[\sum_{k\geq Z}S_{k} < \sum_{k\geq Z}\frac{N!}{(N-k)!k!}(1+K_{\theta})^{k}p_{\theta}^{k}(w) \leq\sum_{k\geq Z}\frac{N!}{(N-k)!k!}N^{-k}\left(\frac{2(1+K_{\theta})}{y} \right)^{k} \tag{4.12}\] \[\leq \sum_{k\geq Z}\frac{1}{k!}\left(\frac{2(1+K_{\theta})}{y}\right)^ {k}<\frac{1}{Z!}\left(\frac{4K_{\theta}}{y}\right)^{Z}\exp\left(\frac{4K_{ \theta}}{y}\right)\leq\varepsilon\]
as the value of \(Z\) is increased sufficiently.
\(2nd\) _step._ Let's divide \(S_{k}\) into two separate terms when considering \(k<Z\):
\[S_{k}=S_{k}^{\ast}+R_{k}. \tag{4.13}\]
Here, \(S_{k}^{\ast}\) represents the sum over all \(n_{1}<n_{2}<\ldots<n_{k}\) with \(n_{i+1}-n_{i}\geq t\) (\(i\geq 1\)), where \(t\) is a positive integer determined as follows. Let \(\eta>0\) be an arbitrary real number, and let \(t\) be the smallest integer \(n\) such that \(K_{\theta}q_{\theta}^{n}<\eta\). Next, we proceed to estimate \(S_{k}^{\ast}\). Using repeated applications of (4.4) and another reference to Lemma 4.2, we find that for any term belonging to \(S_{k}^{\ast}\),
\[\gamma_{\theta}\left(A_{n_{1}}\cap\ldots\cap A_{n_{k}}\right)=p_{\theta}^{k}( w)\left(1+\mathcal{O}_{k}(\eta)\right),\,n_{i}+t\leq n_{i+1}.\]
In the estimation of \(S_{k}^{\ast}\), the constant involved in \(\mathcal{O}_{k}(\eta)\) depends exclusively on \(k\). Hence
\[S_{k}^{\ast}=\frac{(N-(t-1)(k-1))!}{(N-(t-1)(k-1)-k)!k!}p_{\theta}^{k}(w) \left(1+\mathcal{O}_{k}(\eta)\right). \tag{4.14}\]
In order to estimate \(R_{k}\) in (4.13), it's important to observe that the overall estimation (4.10) is applicable to each of its individual terms and that its number of terms is
\[\frac{N!}{(N-k)!k!}-\frac{(N-(t-1)(k-1))!}{(N-(t-1)(k-1)-k)!k!}=o\left(N^{k} \right).\]
We thus have
\[R_{k}=o\left(N^{k}p_{\theta}^{k}(w)\right). \tag{4.15}\]
Considering that
\[p_{\theta}(w)=p_{\theta}\left(\left\lfloor\frac{Ny}{\log\left(1+\theta^{2}\right)}\right\rfloor\right)=\left(1+\mathcal{O}(N^{-1})\right)\frac{1}{Ny}\]
by (4.13), (4.14) and (4.15) we can deduce that
\[S_{k}=\left(1+\mathcal{O}_{k}(\eta)\right)\frac{y^{-k}}{k!}+o_{N}(1), \tag{4.16}\]
where \(k\) is fixed and \(o_{N}(1)\to 0\) as \(N\to\infty\).
\(3rd\) _step._ Finally, by (4.9), (4.12) and (4.16), we establish that for any given positive integer \(Z\)
\[\gamma_{\theta}\left(L_{N}<\left\lfloor\frac{Ny}{\log\left(1+\theta^{2}\right)}\right\rfloor\right)=\sum_{k=0}^{Z-1}(-1)^{k}\left(1+\mathcal{O}_{k}(\eta)\right)\frac{y^{-k}}{k!}+o_{N}(1)+o_{Z}(1),\] where the last term approaches \(0\) as \(Z\to\infty\). Letting \(N\to\infty\), and then \(\eta\to 0\), we deduce that for any positive integer \(Z\)
the last term approaches \(0\) as \(Z\to\infty\). Letting \(N\to\infty\), and then \(\eta\to 0\), we deduce that for any positive integer \(Z\)
\[\lim_{N\to\infty}\gamma_{\theta}\left(L_{N}<\left\lfloor\frac{Ny}{\log\left(1+\theta^{2}\right)}\right\rfloor\right)=\sum_{k=0}^{Z-1}(-1)^{k}\frac{y^{-k}}{k!}+o_{Z}(1).\]
Given that the left-hand side remains independent of \(Z\), letting \(Z\to\infty\), we achieve the limit relation (4.8) while taking into account that the argument \(w\) in \(\{L_{N}<w\}\) is considered to be an integer in \(\mathbb{N}_{m}\). Since
\[\gamma_{\theta}\left(L_{N}<\left\lfloor\frac{Ny}{\log\left(1+\theta^{2}\right)} \right\rfloor\right)\leq\gamma_{\theta}\left(L_{N}<\frac{Ny}{\log\left(1+ \theta^{2}\right)}\right)\leq\gamma_{\theta}\left(L_{N}<\left\lfloor\frac{Ny}{ \log\left(1+\theta^{2}\right)}\right\rfloor+1\right)\]
the proof is complete.
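The limit law of Theorem 4.5 can also be illustrated by simulation. The sketch below again assumes the \(\theta\)-expansion map \(T_{\theta}(x)=1/x-\theta\lfloor 1/(\theta x)\rfloor\) with \(\theta=1/\sqrt{m}\) and uses double-precision iteration, which is only an approximation to the true orbits but is adequate for digit statistics; both points are assumptions of the illustration.

```python
# Monte Carlo illustration of Theorem 4.5: under gamma_theta,
# P(L_N < N*y/log(1+theta^2)) -> exp(-1/y) as N -> infinity.
import numpy as np

rng = np.random.default_rng(1)
m = 2
theta = 1.0 / np.sqrt(m)
N, samples = 2000, 4000

# initial points x ~ gamma_theta via inverse CDF F(x) = log(1+theta*x)/log(1+theta^2)
x = ((1.0 + theta**2) ** rng.random(samples) - 1.0) / theta
x = np.clip(x, 1e-15, theta)
L = np.zeros(samples)

for _ in range(N):
    a = np.floor(1.0 / (theta * x))        # current digit a_n(x) >= m
    L = np.maximum(L, a)                   # running maximum L_N(x)
    x = 1.0 / x - theta * a                # T_theta(x)
    x = np.clip(x, 1e-15, theta)           # keep roundoff inside [0, theta]

scale = np.log(1.0 + theta**2) / N
for y in (0.5, 1.0, 2.0, 4.0):
    empirical = np.mean(L * scale < y)     # gamma_theta(L_N < N*y/log(1+theta^2))
    print(f"y={y:3.1f}  empirical={empirical:.3f}  exp(-1/y)={np.exp(-1.0/y):.3f}")
```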
**Theorem 4.6**.: _For any \(0<\delta<1\) and \(y>0\), we have_
\[\gamma_{\theta}\left(x\in\Omega:L_{N}(x)<\frac{Ny}{\log\left(1+\theta^{2}\right)}\right)=\exp\left(-\frac{1}{y}\right)+\mathcal{O}\left(\exp\left(-(\log N)^{\delta}\right)\right) \tag{4.17}\]
_where the constant involved in \(\mathcal{O}\) depends exclusively on \(\delta\)._
Proof.: We follow the proof of Theorem 4.5 with a particular choice of \(Z\) and \(t\). We choose \(Z=\left\lfloor\frac{\log N}{\log\log N}\right\rfloor\). For a specific \(0<\delta<1\), we choose \(\delta<\delta^{\prime}<1\), \(\varepsilon>0\) and \(\zeta>0\) so that \(1-\delta^{\prime}>\varepsilon+\zeta\). We assume that \(y\geq(\log N)^{-\delta}\). Applying Stirling's formula, we derive
\[\frac{1}{Z!}\asymp\frac{1}{\sqrt{2\pi}\,Z^{Z+1/2}\exp(-Z)}\asymp\frac{\exp\left(\frac{\log N}{\log\log N}\right)}{\left(\frac{\log N}{\log\log N}\right)^{\frac{\log N}{\log\log N}+\frac{1}{2}}}.\]
For \(N\) sufficiently large,
\[e\leq\left(\frac{\log N}{\log\log N}\right)^{\zeta}.\]
Thus, we obtain
\[\exp\left(\frac{\log N}{\log\log N}\right)\leq\left(\left(\frac{\log N}{\log \log N}\right)^{\zeta}\right)^{\frac{\log N}{\log\log N}}<N^{\zeta}.\]
Furthermore, for \(N\) sufficiently large
\[\left(\frac{\log N}{\log\log N}\right)^{\frac{\log N}{\log\log N}}>N^{1- \varepsilon}.\]
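Indeed, the latter bound follows by taking logarithms: \[\log\left(\left(\frac{\log N}{\log\log N}\right)^{\frac{\log N}{\log\log N}}\right)=\frac{\log N}{\log\log N}\left(\log\log N-\log\log\log N\right)=\log N\left(1-\frac{\log\log\log N}{\log\log N}\right)>(1-\varepsilon)\log N\] for all sufficiently large \(N\).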
As a result, we obtain
\[\frac{1}{Z!}\ll\frac{1}{N^{1-\varepsilon-\zeta}}. \tag{4.18}\]
Alternatively, it is obvious that
\[\left(\frac{4K_{\theta}}{y}\right)^{Z}\exp\left(\frac{4K_{\theta}}{y}\right) \leq\left(4K_{\theta}\left(\log N\right)^{\delta}\right)^{\frac{\log N}{\log \log N}}\exp\left(4K_{\theta}\left(\log N\right)^{\delta}\right)<N^{\delta^{ \prime}}\]
when \(N\) is sufficiently large. Finally, using (4.12), we obtain
\[\sum_{k\geq Z}S_{k}\ll N^{-a} \tag{4.19}\]
for \(0<a<1-\varepsilon-\zeta-\delta^{\prime}\).
Setting \(t=\left\lfloor(\log N)^{2}\right\rfloor\), we estimate \(R_{k}\) for \(k<Z\)
\[R_{k} \leq \left(\frac{N!}{(N-k)!k!}-\frac{(N-(t-1)(k-1))!}{(N-(t-1)(k-1)-k)! k!}\right)(1+K_{\theta})^{k}p_{\theta}^{k}(w)\ll tZN^{k-1}(2K_{\theta})^{k}p_{ \theta}^{k}(w)\] \[\ll (\log N)^{3}\frac{1}{\log\log N}N^{k-1}(2K_{\theta})^{k}p_{ \theta}^{k}(w)\ll(\log N)^{3}N^{k-1}\left(\frac{4K_{\theta}}{Ny}\right)^{k}.\]
In cases where \(\frac{4K_{\theta}}{y}>1\), we proceed to evaluate, for \(N\) sufficiently large,
\[R_{k}\leq\frac{1}{N^{1-\varepsilon}}\left(\frac{4K_{\theta}}{y}\right)^{Z}\leq\frac{1}{N^{1-\varepsilon}}\left(4K_{\theta}(\log N)^{\delta}\right)^{\frac{\log N}{\log\log N}}<N^{-a} \tag{4.20}\]
for \(0<a<1-\varepsilon-\delta\). When \(\frac{4K_{\theta}}{y}<1\), the estimation becomes relatively straightforward. We can select the value of \(a\) to be the same as that in equation (4.19).
As a result, the number of terms in \(S_{k}^{*}\), \(k<Z\), is given by \(\frac{N!}{(N-k)!k!}+\mathcal{O}\left(N^{k-1}(\log N)^{3}\right)\). We have
\[S_{k}^{*} = \left(\frac{N!}{(N-k)!k!}+\mathcal{O}\left(N^{k-1}(\log N)^{3} \right)\right)\left(1+\mathcal{O}\left(N^{-1}\right)\right)^{k}(Ny)^{-k}\left( 1+\beta K_{\theta}q_{\theta}^{(\log N)^{2}}\right)^{k} \tag{4.21}\] \[= \frac{y^{-k}}{k!}+\mathcal{O}\left(N^{-a}\right)\]
where \(|\beta|\leq 1\). Subsequently, using (4.20) and (4.21), we deduce that \(S_{k}=\frac{y^{-k}}{k!}+\mathcal{O}\left(N^{-a}\right)\). In conclusion, using (4.19), we obtain
\[\gamma_{\theta}(B_{N})=\sum_{k=0}^{Z-1}\left((-1)^{k}\frac{y^{-k}}{k!}+ \mathcal{O}\left(N^{-a}\right)\right)+\mathcal{O}\left(N^{-a}\right)=\sum_{k= 0}^{Z-1}(-1)^{k}\frac{y^{-k}}{k!}+\mathcal{O}\left(N^{-a^{\prime}}\right)= \exp\left(-\frac{1}{y}\right)+\mathcal{O}\left(N^{-a^{\prime}}\right),\]
with \(0<a^{\prime}<a\).
## 5 Some iterated logarithm results
We begin with the following quantitative Borel-Cantelli lemma.
**Lemma 5.1**.: _[_8_]_ _Let \(\{E_{n}\}_{n\geq 1}\) be a sequence of measurable sets in a probability space \((X,\mathcal{X},\mu)\). Denote by \(A(N,x)\) the number of integers \(n\leq N\) such that \(x\in E_{n}\), i.e., \(A(N,x)=\sum_{n\leq N}\chi_{E_{n}}(x)\), where \(\chi_{E_{n}}\) is the characteristic function of \(E_{n}\). Define_
\[\varphi(N):=\sum_{n\leq N}\mu(E_{n}).\]
_Suppose that there exists a convergent series \(\sum_{k\geq 1}c_{k}\) with \(c_{k}\geq 0\) such that for all integers \(n>\ell\) we have_
\[\mu(E_{n}\cap E_{\ell})\leq\mu(E_{n})\mu(E_{\ell})+\mu(E_{n})c_{n-\ell}. \tag{5.1}\]
_Then for any \(\varepsilon>0\)_
\[A(N,x)=\varphi(N)+\mathcal{O}\left(\varphi^{1/2}(N)\log^{3/2+\varepsilon} \varphi(N)\right)\quad\mu\text{-a.s.} \tag{5.2}\]
**Theorem 5.2**.: _For a.e. \(x\in[0,\theta]\) we have_
\[\liminf_{N\to\infty}\frac{L_{N}(x)\log\log N}{N}=\frac{1}{\log\left(1+\theta^{ 2}\right)}.\]
Proof.: Since for all \(A\in\mathcal{B}_{[0,\theta]}\)
\[\frac{\lambda_{\theta}(A)}{\left(1+\theta^{2}\right)\log\left(1+\theta^{2} \right)}\leq\gamma_{\theta}(A)\leq\frac{\lambda_{\theta}(A)}{\log\left(1+ \theta^{2}\right)}\]
the measures \(\gamma_{\theta}\) and \(\lambda_{\theta}\) are equivalent. Hence it suffices to prove the statement for all \(x\) outside a set of \(\gamma_{\theta}\)-measure \(0\). Consider integers \(M\) and \(N\) with \(M,N\geq 0\). Define
\[L(M,N,x):=\max_{M<n\leq M+N}a_{n}(x),\]
\[\varphi(n):=\frac{n}{\log\log n\log\left(1+\theta^{2}\right)}\]
and
\[E_{k}:=\left(x\in\Omega:L\left(k^{2k},k^{2(k+1)},x\right)\leq\varphi\left(k^{ 2(k+1)}\right)\right).\]
Due to the \(T_{\theta}\)-invariance of \(\gamma_{\theta}\), we can deduce from Theorem 4.6 that, for any integer \(k\geq k_{0}\),
\[\gamma_{\theta}(E_{k}) = \gamma_{\theta}\left(x\in\Omega:L\left(k^{2k},k^{2(k+1)},x\right) \leq\varphi\left(k^{2(k+1)}\right)\right) \tag{5.3}\] \[= \gamma_{\theta}\left(x\in\Omega:L\left(0,k^{2(k+1)},x\right) \leq\varphi\left(k^{2(k+1)}\right)\right)\] \[\geq \frac{1}{2}\exp\left(-\log\log k^{2(k+1)}\right)\geq\frac{1}{8}( k\log k)^{-1}.\]
Obviously \(E_{k}\) depends only on \(a_{n}(x)\) with \(k^{2k}<n\leq k^{2(k+1)}+k^{2k}\). Consequently, according to Lemma 4.4, we can establish that for any pair of integers \(k<\ell\)
\[|\gamma_{\theta}(E_{k}\cap E_{\ell})-\gamma_{\theta}(E_{k})\gamma_{\theta}(E_ {\ell})|\leq K_{\theta}q_{\theta}^{\ell-k}\gamma_{\theta}(E_{k})\gamma_{\theta }(E_{\ell}),\]
since \((k+1)^{2(k+1)}-k^{2(k+1)}-k^{2k}\geq 1\).
As a result, Lemma 5.1 implies that \(x\in E_{k}\) for infinitely many \(k\) (a.e. \(x\)), since by (5.3) the sum \(\sum_{k\leq N}\gamma_{\theta}(E_{k})\gg\log\log N\) diverges.
On the other hand, by Lemma 4.2
\[\gamma_{\theta}(F_{k}) := \gamma_{\theta}\left(x\in\Omega:L\left(0,k^{2k},x\right)\geq \varphi\left(k^{2(k+1)}\right)\right)\] \[\leq \sum_{n\leq k^{2k}}\gamma_{\theta}\left(x\in\Omega:a_{n}(x)\geq \varphi\left(k^{2(k+1)}\right)\right)=k^{2k}p_{\theta}\left(\varphi\left(k^{2(k +1)}\right)\right)\] \[\leq k^{2k}\log\log k^{2(k+1)}\cdot k^{-2(k+1)}\leq k^{-3/2}.\]
Therefore, according to Lemma 5.1, \(x\in F_{k}\) only for finitely many \(k\) (a.e. \(x\)). Thus
\[x\in E_{k}\setminus F_{k}=\left(x\in\Omega:L\left(0,k^{2k}+k^{2(k+1)},x\right) \leq\varphi\left(k^{2(k+1)}\right)\right)\]
for infinitely many \(k\) (a.e. \(x\)), which implies that \(L\left(0,k^{2(k+1)},x\right)\leq\varphi\left(k^{2(k+1)}\right)\) holds for infinitely many \(k\) (a.e. \(x\)). Hence,
\[\liminf_{N\to\infty}\frac{L_{N}(x)\log\log N}{N}\leq\frac{1}{\log\left(1+ \theta^{2}\right)}\;\;\text{a.e.} \tag{5.4}\]
Now, we proceed to prove the converse inequality. Let \(b>1\). Again by Theorem 4.6
\[\gamma_{\theta}(G_{k}) := \gamma_{\theta}\left(x\in\Omega:L\left(0,\left\lfloor b^{k} \right\rfloor,x\right)\leq b^{-2}\varphi\left(\left\lfloor b^{k+1}\right\rfloor \right)\right)\] \[\ll \exp\left(-b\log\log b^{k}\right)\ll k^{-b}.\]
By Lemma 5.1, since \(\sum k^{-b}<\infty\), it follows that \(x\in G_{k}\) only for finitely many \(k\) (a.e. \(x\)), which means that
\[L\left(0,\left\lfloor b^{k}\right\rfloor,x\right)>b^{-2}\varphi\left(\left \lfloor b^{k+1}\right\rfloor\right)\]
holds for all \(k\geq k_{0}(x,b)\). For a given value of \(N\) such that \(\left\lfloor b^{k}\right\rfloor\leq N<b^{k+1}\) with \(k\geq k_{0}(x,b)\), since \(L\left(0,\left\lfloor b^{k}\right\rfloor,x\right)\leq L_{N}(x)\) and \(\varphi(N)\leq\varphi\left(\left\lfloor b^{k+1}\right\rfloor\right)\), we conclude that
\[L_{N}(x)>b^{-2}\varphi(N)\;\;\text{a.e.}\;x.\]
Since this holds for any \(b>1\) we obtain
\[\liminf_{N\to\infty}\frac{L_{N}(x)\log\log N}{N}\geq\frac{1}{\log\left(1+ \theta^{2}\right)}\;\;\text{a.e.}\]
By (5.4) the proof is completed.
There is no analog of Theorem 5.2 with a finite nonzero limit superior. This follows from the following theorem.
**Theorem 5.3**.: _Let \(\{\varphi(n)\}_{n\geq 1}\) be a positive nondecreasing sequence. Then for a.e. \(x\in[0,\theta]\)_
\[L_{N}(x)>\varphi(N) \tag{5.5}\]
_has finitely many or infinitely many solutions in integers \(N\) according as the series_
\[\sum_{n\geq 1}\frac{1}{\varphi(n)} \tag{5.6}\]
_converges or diverges._
Proof.: Indeed, if \(\sup\varphi(n)<\infty\), then the series (5.6) diverges and, according to Theorem 3.1, \(a_{n}(x)>\varphi(n)\) holds for infinitely many \(n\) (a.e. \(x\)); hence (5.5) has infinitely many solutions.
On the other hand, when \(\varphi(n)\nearrow\infty\), the inequality (5.5) has infinitely many solutions \(N\) if and only if \(a_{n}(x)>\varphi(n)\) holds for infinitely many \(n\). By Theorem 3.1, for a.e. \(x\) the latter holds infinitely often or only finitely often according as the series (5.6) diverges or converges.
**Corollary 5.4**.: _Let \(\{\varphi(n)\}_{n\geq 1}\) be as in Theorem 5.3. Then for a.e. \(x\in[0,\theta]\)_
\[\limsup_{N\to\infty}\frac{L_{N}(x)}{\varphi(N)} \tag{5.7}\]
_is either \(0\) or \(\infty\)._
Proof.: We distinguish the cases where the series (5.6) converges or diverges. If the series (5.6) converges we choose a monotone sequence \(\{\alpha_{n}\}_{n\geq 1}\) tending to \(\infty\) but so slowly that still \(\sum_{n\geq 1}\frac{\alpha_{n}}{\varphi(n)}<\infty\). Therefore in accordance with Theorem 5.3, the inequality \(L_{N}(x)>\frac{\varphi(N)}{\alpha_{N}}\) holds only for finitely many \(N\) (a.e. \(x\)). Hence (5.7) vanishes for a.e. \(x\).
If the series (5.6) diverges, we consider a monotone sequence \(\{\alpha_{n}\}_{n\geq 1}\) tending to \(0\) such that \(\sum_{n\geq 1}\frac{\alpha_{n}}{\varphi(n)}=\infty\). Hence, \(L_{N}(x)>\frac{\varphi(N)}{\alpha_{N}}\) holds for infinitely many \(N\) (a.e. \(x\)) and thus (5.7) is infinite for a.e. \(x\).
|
2303.17777 | $α$ + $^{92}$Zr cluster structure in $^{96}$Mo | In the evaluation of the half-life of the neutrinoless double-$\beta$ decay
($0\nu\beta\beta$) of a doubly closed-subshell nucleus $^{96}$Zr, the structure
of the nucleus $^{96}$Mo is essentially important. The $\alpha$-clustering
aspects of $^{96}$Mo are investigated for the first time. By studying the
nuclear rainbows in $\alpha$ scattering from $^{92}$Zr at high energies and the
characteristic structure of the excitation functions at the extreme backward
angle at the low-energy region, the interaction potential between the $\alpha$
particle and the $^{92}$Zr nucleus is determined well in the double folding
model. The validity of the double folding model was reinforced by studying
$\alpha$ scattering from neighboring nuclei $^{90}$Zr, $^{91}$Zr, and
$^{94}$Zr. The double-folding-model calculations reproduced well all the
observed angular distributions over a wide range of incident energies and the
characteristic excitation functions. By using the obtained potential the
$\alpha$ +$^{92}$Zr cluster structure of $^{96}$Mo is investigated in the
spirit of a unified description of scattering and structure. The existence of
the second-higher nodal band states with the $\alpha$+ $^{92}$Zr cluster
structure, in which two more nodes are excited in the relative motion compared
with the ground band, is demonstrated. The calculation reproduces well the
ground-band states of $^{96}$Mo in agreement with experiment. The experimental
$B(E2)$ value of the transition in the ground band is also reproduced well. The
effect of $\alpha$ clustering in $^{96}$Mo on the half-life of the
$0\nu\beta\beta$ double-$\beta$ decay of $^{96}$Zr is discussed. | S. Ohkubo, Y. Hirabayashi | 2023-03-31T02:49:02Z | http://arxiv.org/abs/2303.17777v1 | # \(\alpha\) + \({}^{92}\)Zr cluster structure in \({}^{96}\)Mo
###### Abstract
In the evaluation of the half-life of the neutrinoless double-\(\beta\) decay (\(0\nu\beta\beta\)) of a doubly closed-subshell nucleus \({}^{96}\)Zr, the structure of the nucleus \({}^{96}\)Mo is essentially important. The \(\alpha\)-clustering aspects of \({}^{96}\)Mo are investigated for the first time. By studying the nuclear rainbows in \(\alpha\) scattering from \({}^{92}\)Zr at high energies and the characteristic structure of the excitation functions at the extreme backward angle at the low-energy region, the interaction potential between the \(\alpha\) particle and the \({}^{92}\)Zr nucleus is determined well in the double folding model. The validity of the double folding model was reinforced by studying \(\alpha\) scattering from neighboring nuclei \({}^{90}\)Zr, \({}^{91}\)Zr, and \({}^{94}\)Zr. The double-folding-model calculations reproduced well all the observed angular distributions over a wide range of incident energies and the characteristic excitation functions. By using the obtained potential the \(\alpha\)+\({}^{92}\)Zr cluster structure of \({}^{96}\)Mo is investigated in the spirit of a unified description of scattering and structure. The existence of the second-higher nodal band states with the \(\alpha\)+\({}^{92}\)Zr cluster structure, in which two more nodes are excited in the relative motion compared with the ground band, is demonstrated. The calculation reproduces well the ground-band states of \({}^{96}\)Mo in agreement with experiment. The experimental \(B(E2)\) value of the transition in the ground band is also reproduced well. The effect of \(\alpha\) clustering in \({}^{96}\)Mo on the half-life of the \(0\nu\beta\beta\) double-\(\beta\) decay of \({}^{96}\)Zr is discussed.
## I Introduction
The observation of neutrinoless double-\(\beta\) decay, \(0\nu\beta\beta\), which violates lepton number conservation, is expected to shed light on fundamental questions beyond the standard model, such as determining whether the neutrino is a Dirac or a Majorana particle. Since supersymmetric particles have not been observed in Large Hadron Collider experiments, much more attention than ever has been paid to the study of \(0\nu\beta\beta\)[1; 2; 3]. The inverse half-life of \(0\nu\beta\beta\) is given by \([T_{1/2}^{0\nu}]^{-1}=G_{0\nu}|<m_{\beta\beta}>/m_{e}|^{2}\,|M^{0\nu}|^{2}\), where \(<m_{\beta\beta}>\) is the effective Majorana neutrino mass, \(m_{e}\) is the electron mass, and \(G_{0\nu}\sim 10^{-14}\) yr\({}^{-1}\) is a phase-space factor. For the evaluation of the nuclear matrix element (NME) of the transition, \(M^{0\nu}\)[4; 5; 6; 7; 8; 9], it is essential to know the ground-state wave functions of the initial- and final-state nuclei.
Up to now, theoretical \(0\nu\beta\beta\) decay studies have been performed based on mean-field theory and related approaches such as the shell model [10; 11], _ab initio_ calculations [12; 13], quasiparticle random phase approximation (QRPA) [14; 15; 16], the projected Hartree-Fock Bogoliubov model (PHFB) [17; 18; 19], the generator coordinate method (GCM) [20; 21; 22; 23], the energy density functional (EDF) [24; 25] and the interacting boson model (IBM) [26; 27]. No attention has been paid to the \(\alpha\)-cluster-structure viewpoint until the study of \({}^{48}\)Ca decay to \({}^{48}\)Ti [28]. This is probably because it has been believed intuitively that the strong spin-orbit force would break \(\alpha\) clustering, and partly because experimental data on \(\alpha\)-transfer reactions such as (\({}^{6}\)Li,d), (d,\({}^{6}\)Li), and \((p,p\alpha)\) are scarce.
\(\alpha\) cluster structure has been established in the light mass region [29; 30] and the medium-weight mass region around \({}^{44}\)Ti [31; 32; 33; 34], and has recently been extended to the \({}^{52}\)Ti region [28; 35; 36; 37; 38]. In a previous paper [28], paying attention to the \(0\nu\beta\beta\) decay of \({}^{48}\)Ca to \({}^{48}\)Ti, one of the present authors (S.O.) has shown that the ground \(0^{+}\) state of \({}^{48}\)Ti has \(\alpha\)-clustering aspects, which significantly quenches the half-life compared with the conventional shell-model calculations in which excitations to several higher major shells are not considered.
In the \(0\nu\beta\beta\) of the parent nucleus \({}^{96}\)Zr [39], the structure of the ground state of the daughter nucleus \({}^{96}\)Mo, whose \(\alpha\)-threshold energy of 2.76 MeV is small, is crucial in evaluating the NME of \(0\nu\beta\beta\) decay transitions. The persistency of \(\alpha\) clustering in the heavier mass region around \(A=90\) has been explored for the typical nucleus \({}^{94}\)Mo with two protons and two neutrons outside the closed-shell core \({}^{90}\)Zr in Refs. [40; 41; 42]. Later \(\alpha\)-cluster-model studies [43; 44; 45] also support \(\alpha\) clustering in the \({}^{94}\)Mo region. Recent observations of \(\alpha\) particles in the pick-up reactions \((p,p\alpha)\) in the Sn isotopes [46] seem to reinforce the importance of \(\alpha\) clustering in the heavy mass region.
The ground state of \({}^{96}\)Zr is spherical, being a doubly closed-subshell nucleus, and is analogous to the doubly closed-shell \({}^{16}\)O in light nuclei [47]. The first excited \(0^{+}\) state is considered to be a four-particle-four-hole excited state analogous to the mysterious \(0^{+}\) state at 6.05 MeV in \({}^{16}\)O. Recent large-scale shell-model calculations for the Zr isotopes [48] have confirmed that the ground state of \({}^{96}\)Zr is spherical and that a shape transition to deformed shapes occurs at \({}^{100}\)Zr as the number of excess neutrons increases. As for the structure of \({}^{96}\)Mo, studies including \(2\nu\beta\beta\) decay using QRPA [49], the phase transition from spherical \({}^{92}\)Mo to deformed toward \({}^{104}\)Mo [50], octupole collective motion [51] and shell-model structure [52] have been reported. \(0\nu\beta\beta\) of \({}^{96}\)Zr has been investigated using many models, which include the QRPA, PHFB, EDF, IBM, and GCM [19]. However, no study of \(0\nu\beta\beta\) of \({}^{96}\)Zr from the viewpoint of \(\alpha\) clustering in \({}^{96}\)Mo has been attempted.
The purpose of this paper is to show that \(\alpha\) clustering
persists in the ground state of \({}^{96}\)Mo by studying bound states and scattering for the \(\alpha\)+\({}^{92}\)Zr system in a unified way, and that the half-life of \(0\nu\beta\beta\) of \({}^{96}\)Zr is quenched significantly. For this, the interaction potential between the \(\alpha\) particle and \({}^{92}\)Zr is determined with a double folding model by analyzing the angular distributions of nuclear rainbows in \(\alpha\)+\({}^{92}\)Zr scattering at high energies and the backward angle anomaly (BAA), or anomalous large angle scattering (ALAS), at lower energies. The potential systematically reproduces well the excitation functions with a characteristic dip at the extreme backward angles near 180\({}^{\circ}\) in the lower-energy region, not only for \(\alpha\)+\({}^{92}\)Zr scattering but also for \(\alpha\)+\({}^{90,91,94}\)Zr scattering. The existence of the second-higher nodal band states with the \(\alpha\)+\({}^{92}\)Zr cluster structure, which are responsible for the emergence of the characteristic dip in the back-angle excitation function, is shown for the first time. The ground band of \({}^{96}\)Mo is well understood in the \(\alpha\)-cluster model using the obtained double folding potential. The \(\alpha\) clustering of \({}^{96}\)Mo has a significant effect in quenching the \(0\nu\beta\beta\) decay of \({}^{96}\)Zr.
The paper is organized as follows. In Sec. II the double folding model is presented. Section III is devoted to the analysis of \(\alpha\)+\({}^{92}\)Zr scattering over a wide range of incident energies by using a double folding model. To confirm the validity of the obtained interaction potential for \(\alpha\)+\({}^{92}\)Zr, \(\alpha\) scattering from the neighboring nuclei \({}^{90,91,94}\)Zr is also investigated. In Sec. IV the origin of the characteristic dip in the back-angle excitation function in \(\alpha\)+\({}^{92}\)Zr scattering is investigated from the viewpoint of the persistence of the \(\alpha\)-cluster structure at high excitation energies in \({}^{96}\)Mo. In Sec. V, \(\alpha\)+\({}^{92}\)Zr clustering of \({}^{96}\)Mo is studied, and the effect of \(\alpha\) clustering on the \(0\nu\beta\beta\) decay of \({}^{96}\)Zr is discussed. A summary is given in Sec. VI.
## II Double folding model
We study \(\alpha\) scattering from \({}^{92}\)Zr and neighboring nuclei \({}^{90,91,94}\)Zr with a double folding model using a density-dependent nucleon-nucleon force. The double folding potential is calculated as follows:
\[V({\bf r})=\int\rho_{00}^{({}^{4}{\rm He})}({\bf r}_{1})\ \rho_{00}^{({\rm Zr})}({\bf r}_{2})\times v_{NN}(E,\rho,{\bf r}_{1}+{\bf r}-{\bf r}_{2})\ d{\bf r}_{1}d{\bf r}_{2}, \tag{1}\]
where \(\rho_{00}^{({}^{4}{\rm He})}({\bf r}_{1})\) and \(\rho_{00}^{({\rm Zr})}({\bf r}_{2})\) represent the nucleon densities of the ground states of \({}^{4}\)He and Zr, respectively, which are obtained by unfolding the finite proton size from the charge density distributions taken from Ref. [53]. For the effective interaction \(v_{\rm NN}\) we use the density(\(\rho\))-dependent M3Y interaction [54]. In the calculations we introduce the normalization factor \(N_{R}\) for the real double folding potential [55; 56]. The Coulomb folding potential is calculated similarly by the folding prescription in Eq. (1). An imaginary potential with a Woods-Saxon volume-type form factor (nondeformed) is introduced phenomenologically to take into account the effect of absorption due to other channels.
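To make the folding procedure of Eq. (1) concrete, the following self-contained sketch evaluates a folded potential in momentum space for spherical densities. It is only an illustration: the density dependence of the DDM3Y interaction actually used here, as well as the realistic densities of Ref. [53], are replaced by a density-independent M3Y-Reid-type form and simple Gaussian/Fermi shapes with placeholder parameters.

```python
# Minimal momentum-space sketch of the double folding integral of Eq. (1)
# for spherical densities: V~(q) = rho_alpha~(q) * rho_Zr~(q) * v_NN~(q).
# Illustrative only; parameters and the density-independent M3Y form are assumptions.
import numpy as np

r = np.linspace(1e-3, 15.0, 600)      # radial grid (fm)
q = np.linspace(1e-3, 8.0, 800)       # momentum grid (fm^-1)

def fourier_bessel(f_r, r, q):
    """3D Fourier transform of a spherical function: F(q) = 4*pi*Int j0(qr) f(r) r^2 dr."""
    j0 = np.sinc(np.outer(q, r) / np.pi)     # np.sinc(x) = sin(pi x)/(pi x), so this is sin(qr)/(qr)
    return 4.0 * np.pi * np.trapz(j0 * f_r * r**2, r, axis=1)

def inverse_fourier_bessel(F_q, q, r):
    """Inverse transform: f(r) = (1/(2 pi^2)) Int j0(qr) F(q) q^2 dq."""
    j0 = np.sinc(np.outer(r, q) / np.pi)
    return np.trapz(j0 * F_q * q**2, q, axis=1) / (2.0 * np.pi**2)

# Illustrative point-nucleon densities (placeholder parameters, not those of Ref. [53]):
A_alpha, A_zr = 4, 92
rho_alpha = A_alpha * (0.7 / np.pi) ** 1.5 * np.exp(-0.7 * r**2)   # Gaussian, integrates to 4
c, a = 4.9, 0.55                                                    # Fermi shape parameters (fm)
rho_zr = 1.0 / (1.0 + np.exp((r - c) / a))
rho_zr *= A_zr / np.trapz(4.0 * np.pi * r**2 * rho_zr, r)           # normalize to 92 nucleons

def v_m3y_q(q, E_over_A=30.0):
    """M3Y-Reid-type direct part plus zero-range knock-on exchange, in momentum space."""
    yukawa = lambda s, mu: 4.0 * np.pi * s / (mu * (q**2 + mu**2))   # FT of s*exp(-mu r)/(mu r)
    direct = yukawa(7999.0, 4.0) + yukawa(-2134.0, 2.5)
    exchange = -276.0 * (1.0 - 0.005 * E_over_A)                     # delta-term strength (MeV fm^3)
    return direct + exchange

V_q = fourier_bessel(rho_alpha, r, q) * fourier_bessel(rho_zr, r, q) * v_m3y_q(q)
V_r = inverse_fourier_bessel(V_q, q, r)                              # folded potential (MeV)

print("V(0) ~ %.1f MeV,  V(7 fm) ~ %.1f MeV" % (V_r[0], V_r[np.argmin(np.abs(r - 7.0))]))
```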
## III Analysis of alpha scattering from \({}^{92}\)Zr and \({}^{90,91,94}\)Zr
In exploring the \(\alpha\) cluster structure in the medium-weight mass region where the level density is high, a unified description of \(\alpha\) scattering including rainbow scattering, prerainbows and BAA (ALAS), and the \(\alpha\) cluster structure in the bound and quasibound energy region has been very powerful [28; 34; 36; 57; 58].
The angular distributions in \(\alpha\) scattering from \({}^{92}\)Zr have been measured systematically at \(E_{\alpha}\)=40, 65, 90 and 120 MeV in Ref. [59] and 35.4 MeV in Ref. [60]. The interaction potential can be uniquely determined from the analysis of the angular distributions in the rainbow energy region, which show the Airy minimum in the lit side of the nuclear rainbow followed by the falloff of the cross sections corresponding to the dark side of the nuclear rainbow.
We started by analyzing the angular distribution at the highest energy \(E_{\alpha}\)=120 MeV, fitting the experimental angular distribution by introducing \(N_{R}\)=1.26 and a phenomenological imaginary potential with a strength parameter \(W\)=18.5 MeV, a radius parameter \(R_{W}\)=7.1 fm and a diffuseness parameter \(a_{W}\)=0.6 fm. Then, keeping \(N_{R}\)=1.26 fixed for 90 and 65 MeV and using a slightly reduced value \(N_{R}\)=1.22 for 40 and 35.4 MeV, all the angular distributions are easily reproduced by the calculations with a small adjustment reducing the strength and/or diffuseness parameters of the imaginary potential with decreasing incident energy. The calculated angular distributions are in good agreement with the experimental data as displayed in Fig. 1. In Table 1 the values of the volume integral per nucleon pair \(J_{V}\) and the rms radius \(\sqrt{<r_{V}^{2}>}\) of the double folding potential, and the parameters of the imaginary potential together with the volume integral per nucleon pair \(J_{W}\) and the rms radius \(\sqrt{<r_{W}^{2}>}\) are listed. The energy dependence of the volume integrals \(J_{V}\) is reasonable, which is consistent with the previous calculations for \(\alpha\)+\({}^{92}\)Zr scattering in Refs. [40; 59].
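For reference, the tabulated quantities are presumably defined in the standard convention for an \(\alpha\)+target folding potential, namely \[J_{V}=\frac{4\pi}{A_{\alpha}A_{T}}\left|\int_{0}^{\infty}V(r)\,r^{2}\,\mathrm{d}r\right|,\qquad \sqrt{<r_{V}^{2}>}=\left[\frac{\int_{0}^{\infty}V(r)\,r^{4}\,\mathrm{d}r}{\int_{0}^{\infty}V(r)\,r^{2}\,\mathrm{d}r}\right]^{1/2},\] and analogously \(J_{W}\) and \(\sqrt{<r_{W}^{2}>}\) for the imaginary part.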
In order to see the contributions of the refractive far-side scattering, the calculated angular distributions are decomposed into the farside and nearside components [61]. In Fig. 1, we see that the falloff of the cross sections in the angular distributions in the intermediate angular region above \(E_{\alpha}\)=65 MeV, which is peculiar to nuclear rainbow scattering, is entirely due to farside scattering. A clear first-order Airy minimum \(A1\) of the nuclear rainbow is seen at \(\theta\) = 50\({}^{\circ}\) at \(E_{\alpha}\)=120 MeV, which shifts backward as the incident energy decreases, at around \(\theta\) = 70\({}^{\circ}\) for \(E_{\alpha}\)=90 MeV and at \(\theta\) = 125\({}^{\circ}\) for \(E_{\alpha}\)=65 MeV. At \(E_{\alpha}\)=40 MeV no Airy minimum is observed. The appearance of the oscillations in the backward angular distributions shows that the nearside contributions are involved since the oscillations are the consequence of interference of the two amplitudes of farside and nearside scattering. This backward rise of the cross sections with the oscillations at \(E_{\alpha}\)=40 MeV is the indication of BAA under
incomplete absorption, which is typically observed and explained in \(\alpha\)+\({}^{16}\)O [62; 63] and \(\alpha\)+\({}^{40}\)Ca scattering [64] in the energy region \(E_{\alpha}\)=20-30 MeV.
In the energy region below \(E_{\alpha}\)=40 MeV the concept of farside and nearside scattering is no longer so useful for understanding the characteristic features of the angular distributions. It is useful to understand the characteristics of the angular distributions of BAA in terms of the concept of internal waves and barrier waves [65]. The scattering amplitude \(f(\theta)\) can be decomposed into \(f^{I}(\theta)\), which is due to the internal waves penetrating the barrier deep into the internal region of the potential, and \(f^{B}(\theta)\), which is due to the barrier waves reflected at the barrier of the potential in the surface region, \(f(\theta)=f^{I}(\theta)+f^{B}(\theta)\). In the case of incomplete absorption the internal waves, \(f^{I}(\theta)\), carry the information on the internal region of the potential. Unfortunately, at the lower energies below 30 MeV, where the effect of the internal waves is seen most clearly [62; 63; 64; 65], no angular distributions have been measured for \(\alpha\)+\({}^{92}\)Zr scattering.
However, we note that the angular distributions in \(\alpha\) scattering from the neighboring nuclei \({}^{90}\)Zr and \({}^{91}\)Zr have been measured up to the backward angles at the lower energies \(E_{\alpha}\)=21-25 MeV. In Fig. 2 the angular distributions show a BAA rising toward the extreme backward angles at 21 and 25 MeV. Note that the angular distributions for both \({}^{90}\)Zr and \({}^{91}\)Zr decrease sharply toward 180\({}^{\circ}\) at \(E_{\alpha}\)=23 MeV in the BAA energy region, which is not seen in the typical \(\alpha\)+\({}^{16}\)O [62; 63] and \(\alpha\)+\({}^{40}\)Ca scattering [64]. This characteristic decrease is intriguing because angular distributions at other energies generally increase toward \(\theta=180^{\circ}\), see Fig. 1, as expected from the behavior of the Legendre polynomials whose moduli increase toward \(\theta=180^{\circ}\) at the extreme back angles. In Fig. 2 the angular distributions in \(\alpha\)+ \({}^{90}\)Zr and \(\alpha\)+\({}^{91}\)Zr
Figure 1: (Color online) The angular distributions in \(\alpha\)+\({}^{92}\)Zr scattering at \(E_{\alpha}\)=35.4, 40, 65, 90 and 120 MeV calculated with the optical potential model with the double folding potential (solid lines) are compared with the experimental data (filled circles) [59; 60]. The calculated farside (dotted lines) and nearside (dashed lines) contributions are also indicated.
scattering calculated using the double folding potential derived from Eq. (1) are compared with the experimental data [66]. The potential parameters used are listed in Table 2. The calculations reproduce the experimental angular distributions well. Note that the particular behavior at 23 MeV that decreases sharply toward 180\({}^{\circ}\) is reproduced excellently. This shows that the calculated double folding potentials for \(\alpha\)+ \({}^{90}\)Zr and \(\alpha\)+\({}^{91}\)Zr work very well in this low-energy region, which reinforces the validity of the double folding potential in the \(E_{\alpha}\)=23-MeV to \(E_{\alpha}\)=25-MeV region.
In Fig. 3 the excitation functions at the extreme backward angle \(\theta\)=176.2\({}^{\circ}\) (\(\theta_{Lab}\)=176\({}^{\circ}\)) in \(\alpha\) scattering from \({}^{90}\)Zr, \({}^{91}\)Zr, \({}^{92}\)Zr and \({}^{94}\)Zr calculated using the potentials at \(E_{\alpha}\)= 23 MeV in Table 2 are displayed in comparison with the experimental data. All the calculated excitation functions show a dip and its position shifts to lower energy from \({}^{90}\)Zr to \({}^{94}\)Zr. The position of the dips in the calculated excitation functions for \(\alpha\)+\({}^{90}\)Zr, \(\alpha\)+\({}^{91}\)Zr and \(\alpha\)+\({}^{94}\)Zr agrees with the experimental data excellently. The energy of the observed dips for \({}^{90}\)Zr, \({}^{91}\)Zr and \({}^{94}\)Zr decreases linearly with the mass number \(A\) of the target nucleus, \(E_{\alpha}\)=54.5 - 0.346\(A\), which predicts a dip at \(E_{\alpha}\)=22.7 MeV for \({}^{92}\)Zr. As seen in Fig. 3, the double folding model calculation locates a dip at \(E_{\alpha}\)= 22.7 MeV for \(\alpha\)+\({}^{92}\)Zr, which is in good agreement with the above-predicted energy, 22.7 MeV.
The mechanism explaining why the dip emerges in the excitation function at the extreme backward angle near \(\theta\)=180\({}^{\circ}\), namely why the angular distribution decreases sharply toward \(\theta\)=180\({}^{\circ}\) at a particular energy, has been investigated in detail for the typical \(\alpha\)+\({}^{90}\)Zr system by one of the present authors (S.O.) and his collaborators, see Ref. [41]. The mechanism is understood as follows. The dip appears at the energy where the scattering amplitude \(f(\theta)\) becomes vanishingly small. When \(f^{I}(\theta)\)\(\approx\)\(-f^{B}(\theta)\), the cancellation of the two amplitudes occurs, i.e., in the case when \(|f^{I}(\theta)|\approx|f^{B}(\theta)|\) and \(\arg f^{I}(\theta)-\arg f^{B}(\theta)\)\(\approx\)\(k\pi\), where \(k\) is an odd integer. Near \(\theta\)=180\({}^{\circ}\) this condition is satisfied at the energy \(E_{\alpha}\)=22-24 MeV under moderate absorption, not only for \(\alpha\)+\({}^{90}\)Zr but also for \(\alpha\)+\({}^{91}\)Zr, \(\alpha\)+\({}^{92}\)Zr and \(\alpha\)+\({}^{94}\)Zr, since both the real potential and the imaginary potential change little from those of \(\alpha\)+\({}^{90}\)Zr, as seen in Table 2. The good agreement of the calculated excitation functions, especially the energy position and width of the dip for \(\alpha\)+\({}^{91}\)Zr and \(\alpha\)+\({}^{94}\)Zr, with the experimental data is a natural consequence of the fact that their potentials resemble that for \(\alpha\)+\({}^{90}\)Zr. Although no experimental data are available for \(\alpha\)+\({}^{92}\)Zr, the emergence of the dip at the predicted energy in the excitation function should be confirmed in a future experiment. Since the internal waves, which are responsible for the emergence of the dip, are sensitive to the internal region of the real potential, the present good agreement in Fig. 3 shows that the obtained double folding potential is sufficiently reliable in this low-energy region above the Coulomb barrier.
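In other words, writing the elastic cross section in terms of the two interfering amplitudes, \[|f(\theta)|^{2}=|f^{I}(\theta)|^{2}+|f^{B}(\theta)|^{2}+2|f^{I}(\theta)||f^{B}(\theta)|\cos\left[\arg f^{I}(\theta)-\arg f^{B}(\theta)\right],\] the cross section is strongly suppressed when the two moduli are comparable and the relative phase is an odd multiple of \(\pi\).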
## IV Mechanism of the characteristic dip in the back-angle excitation function in \(\alpha\)+\({}^{92}\)Zr scattering
In this section, paying attention to the high-lying excited \(\alpha\)-cluster structure in \({}^{96}\)Mo, we investigate how the anomalous dip in the back-angle excitation function in \(\alpha\)+\({}^{92}\)Zr
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \(E_{\alpha}\) & \(J_{V}\) & \(\sqrt{<r_{V}^{2}>}\) & \(W\) & \(R_{W}\) & \(a_{W}\) & \(J_{W}\) & \(\sqrt{<r_{W}^{2}>}\) \\ \hline \({}^{90}\)Zr & 21 & 331.5 & 4.96 & 10.0 & 7.55 & 0.40 & 51.5 & 6.03 \\ & 23.4 & 329.5 & 4.96 & 10.0 & 7.55 & 0.43 & 51.7 & 6.06 \\ & 25 & 327.5 & 4.96 & 10.0 & 7.55 & 0.48 & 52.1 & 6.11 \\ \({}^{91}\)Zr & 21 & 333.1 & 5.00 & 10.6 & 7.60 & 0.37 & 54.8 & 6.05 \\ & 23 & 331.7 & 5.00 & 10.2 & 7.60 & 0.41 & 53.0 & 6.08 \\ & 25 & 330.3 & 5.01 & 10.2 & 7.60 & 0.45 & 53.3 & 6.12 \\ \({}^{92}\)Zr & 23 & 329.6 & 4.99 & 10.7 & 7.63 & 0.43 & 55.8 & 6.12 \\ \({}^{94}\)Zr & 23 & 330.0 & 5.02 & 11.8 & 7.70 & 0.48 & 62.3 & 6.23 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The volume integral per nucleon pair \(J_{V}\), rms radius \(\sqrt{<r_{V}^{2}>}\) of the double folding potentials, and the strength \(W\), radius \(R_{W}\), diffuseness \(a_{W}\), volume integral per nucleon pair \(J_{W}\), and rms radius \(\sqrt{<r_{W}^{2}>}\) of the imaginary potentials used in \(\alpha\)+\({}^{90,91,92,94}\)Zr scattering in Fig. 2 and Fig. 3. Energies are in MeV, volume integrals in MeVfm\({}^{3}\), and radii in fm. \(N_{R}\)=1.22 is used for all target nuclei and incident energies.
Figure 3: (Color online) The calculated excitation functions in \(\alpha\) scattering from \({}^{90}\)Zr, \({}^{91}\)Zr, \({}^{92}\)Zr, and \({}^{94}\)Zr at the extreme backward angle \(\theta\)=176.2\({}^{\circ}\) (\(\theta_{Lab}\)=176\({}^{\circ}\)) (solid lines) are compared with the experimental data (filled circles) [66].
scattering in Fig. 3 is created.
For this purpose, in Fig. 4 (a) back-angle excitation functions calculated by gradually reducing the strength of the imaginary potential, \(W\)=3\(W_{0}\)/4, \(W_{0}\)/2, \(W_{0}\)/4, \(W_{0}\)/8, and 0 MeV, are compared with the original one with \(W_{0}=10.7\) MeV in Table 2. For \(W=0\) the peaks in the excitation function at \(E_{\alpha}\)=20.5 and 23 MeV are due to the resonances with the \(\alpha\)+\({}^{92}\)Zr structure. That the \(\alpha\) cluster structure at high excitation energies can be seen in the excitation function at the extreme backward angles near 180\({}^{\circ}\) has already been shown for the \(\alpha\)+\({}^{40}\)Ca cluster structure in \({}^{44}\)Ti [67; 68].
In Fig. 4 (b) and (c), the partial wave cross sections of elastic scattering are displayed. Fig. 4 (b) shows that the peaks at \(E_{\alpha}\)=20.5 and 23 MeV are caused by the even \(L\) partial waves and Fig. 4 (c) shows that the odd \(L\) partial waves do not contribute to create the peaks. Thus we find that the peaks at \(E_{\alpha}\)=20.5 and \(E_{\alpha}\)=23 MeV in the excitation function with \(W=0\) are caused by the resonant waves \(L=10\) and \(L=12\), respectively.
To see the details of the resonances responsible for the peaks, in Fig. 5 the phase shifts in \(\alpha\)+\({}^{92}\)Zr scattering calculated by switching off the imaginary potential are displayed. We see that the phase shifts for the even-parity and odd-parity partial waves show different behavior in the relevant energy region, \(E_{\alpha}\)=18 - 26 MeV (center-of-mass energy \(E\)=17.3-24.9 MeV). Although the phase shifts of the even-parity partial waves, \(L=\)10 and 12, pass through \(\delta_{L}\)=270\({}^{\circ}\) slowly at the resonance energies, those of the odd-parity partial waves, \(L=\)11 - 15, cross \(\delta_{L}\)=90\({}^{\circ}\) sharply at the resonance energies. The narrow odd-parity resonances hardly contribute to the peaks in the excitation functions as seen in Fig. 4 (a) and (c). This is why even-parity waves are dominantly responsible for the peaks and the dip in Fig. 4. The broad resonant nature of the even-parity waves is a consequence of the fact that they are the high-lying second-higher nodal \(\alpha\)-cluster resonance states, in which the relative motion is excited by two more nodes compared with the lowest Pauli-allowed ground-band states in \({}^{96}\)Mo. The nature of the resonant \(\alpha\)+\({}^{92}\)Zr cluster structure in \({}^{96}\)Mo is discussed in detail in the next section.
Figure 4: (Color online) (a) The excitation functions at the extreme backward angle \(\theta\)=176.2\({}^{\circ}\) in \(\alpha\)+\({}^{92}\)Zr scattering calculated with the reduced strengths of the imaginary potential \(W\)=0 (dotted lines), \(W_{0}\)/8 (dashed lines), \(W_{0}\)/4 (long dashed lines), \(W_{0}\)/2 (dash-dotted lines), and \(3W_{0}\)/4 (dashed-and-double-dotted lines) are compared with the original one with \(W=\)\(W_{0}\)=10.7 MeV (solid lines). (b) The calculated partial wave cross sections of elastic scattering under \(W=\)0 for even \(L\) and for (c) odd \(L\).
Figure 5: (Color online) Phase shifts in \(\alpha\) + \({}^{92}\)Zr scattering calculated with the double folding potential with \(N_{R}\)=1.22, (a) even \(L\) partial waves and (b) odd \(L\) partial waves, are displayed for \(L\leq\)15. The lower abscissa shows center-of-mass energy \(E\) and the upper abscissa shows laboratory incident energy \(E_{\alpha}\).
## V Alpha cluster structure in \({}^{96}\)Mo and neutrinoless double \(\beta\) decay of \({}^{96}\)Zr
In order to reveal the cluster structure of \({}^{96}\)Mo underlying the excitation function with the characteristic dip at the extreme backward angle, the resonant states and the bound and quasibound energy levels calculated in the double folding potential with \(N_{R}\)=1.22 by switching off the imaginary potential of the optical potential used in Fig. 3 are displayed in Fig. 6 (a). The resonance energies are given at the energies where the phase shifts steeply pass through \(\delta_{L}\)=90\({}^{\circ}\)(270\({}^{\circ}\)) in Fig. 5. By investigating the resonant wave function for \(L=12\) at \(E=22.04\) MeV, we find that the wave function has four nodes in the relative motion, see Fig. 7. The resonances with \(L\)=10, 12, and 14 in the range of \(E\)=19-25 MeV are found to belong to the band with \(N=2n+L=20\), where \(n\) is the number of the nodes in the relative wave function between \(\alpha\) and \({}^{92}\)Zr. The \(N=20\) band-state energies are on the \(J(J+1)\) plot with the bandhead \(J^{\pi}\)=0\({}^{+}\) state at \(E\)=14.4 MeV and the rotational constant \(k\)=\(\hbar^{2}/2\mathcal{J}\)=0.0492 MeV, where \(\mathcal{J}\) is the moment of inertia of the band. The band has a well-developed \(\alpha\)+\({}^{92}\)Zr cluster structure. The large separation distance between \(\alpha\) and \({}^{92}\)Zr can be seen in the wave functions of the 10\({}^{+}\) and 12\({}^{+}\) states in Fig. 7. The outermost peak, which is located at around \(R\)=7-8 fm, lies well outside the sum of the experimental radii [69] of \(\alpha\) and \({}^{92}\)Zr, 6.0 fm. Although the phase shifts for the lower \(L=0-6\) of the \(N=20\) band show rising toward \(\delta_{L}\)=270\({}^{\circ}\), they do not cross \(\delta_{L}\)=270\({}^{\circ}\) sufficiently. However, since the number of nodes \(n\) of their wave functions satisfies the condition \(N=2n+L=20\), they are considered to be persistent members of the rotational band with \(N=20\). From the \(J(J+1)\) plot they are extrapolated to exist persistently at the energies indicated by the dotted lines in Fig. 6 (a). The resonance energies and widths of these broad resonances can be calculated in the complex scaling method [70; 71]. The presence of the 12\({}^{+}\) state of the \(N=20\) band, which manifests itself in the emergence of the characteristic dip in the back-angle excitation function, demonstrates for the first time the existence of a second-higher nodal band member state with the \(\alpha\)+\({}^{92}\)Zr cluster structure in \({}^{96}\)Mo, in which two more nodes are excited in the relative motion compared with the \(N=\)16 ground band.
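As a simple consistency check, the \(J(J+1)\) rule with the quoted bandhead energy and rotational constant reproduces the resonance energies found above, \[E(J)=14.4+0.0492\,J(J+1)\ \mathrm{MeV},\qquad E(10)\simeq 19.8\ \mathrm{MeV},\qquad E(12)\simeq 22.1\ \mathrm{MeV},\] the latter agreeing with the \(L=12\) resonance at \(E=22.04\) MeV.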
The wave functions of the resonances with odd \(L\) in Fig. 5 have \(N=19\) and form a negative-parity rotational band with the \(\alpha\)+\({}^{92}\)Zr cluster structure. The band states are well located on the \(J(J+1)\) plot with the bandhead 1\({}^{-}\) state at \(E=11.1\) MeV and \(k\)=0.0383 MeV. The \(N=19\) band is a higher nodal band with developed \(\alpha\) clustering, in which the relative motion is excited by one more node compared with the lower-lying \(N=17\) band states. The calculation locates the \(N=18\) rotational band with its bandhead 0\({}^{+}\) at \(E\)=7.19 MeV and \(k\)=0.0236 MeV, which is a higher nodal band with one more node in the wave functions compared with those of the Pauli-allowed lowest \(N=16\) band. The \(N=17\) rotational band states are well located on the \(J(J+1)\) plot with the bandhead 1\({}^{-}\) state at \(E\)=1.33 MeV. The calculation locates the band states with \(N=16\) below the \(\alpha\) threshold. It is surprising that the Pauli-allowed lowest \(N=16\) band states satisfying the Wildermuth condition fall in good correspondence with the ground band of \({}^{96}\)Mo. The calculated 0\({}^{+}\) state of the \(N=16\) band with \(E\)=-5.56 MeV is slightly overbound by 2.8 MeV compared with the experimental energy of the ground state with \(E=\)-2.76 MeV from the \(\alpha\) threshold. This is because the potential determined in the highly excited energy region, \(E_{\alpha}\)=23 MeV, is straightforwardly applied to the calculations in the bound-state energy region. The energy levels for \(N=\)18, 17 and 16 in Fig. 6, most of which are located below the Coulomb barrier, are the ones calculated in the bound-state approximation.
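The lowest Pauli-allowed value \(N=16\) follows from the usual Wildermuth counting if, as is standard in this mass region, the four valence nucleons outside the \({}^{92}\)Zr core are assumed to occupy \(4\hbar\omega\) oscillator orbitals (1\(g_{9/2}\) protons and 2\(d_{5/2}\) neutrons), \[N=2n+L\geq\sum_{i=1}^{4}(2n_{i}+l_{i})=4\times 4=16.\]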
According to the dispersion relation [72], the energy dependence of the volume integral of the real potential shows the threshold anomaly. Namely the volume inte
Figure 6: (Color online) (a) Energy levels of \({}^{96}\)Mo calculated in the \(\alpha\)+ \({}^{92}\)Zr cluster model with the double folding potential with \(N_{R}\)=1.22. The calculated excitation energy of the \(N\)=16 ground-band states, 0\({}^{+}\), 2\({}^{+}\), 4\({}^{+}\), 6\({}^{+}\) and 8\({}^{+}\), which look compressed, increases as the spin increases. (b) The \(N=16\) band energy levels calculated using the double folding model with \(L\) dependence. (c) Experimental energy levels of the ground band. The horizontal dashed lines (blue) correspond to \(E_{\alpha}\)=18 MeV (center-of-mass energy \(E\)=17.3 MeV) and \(E_{\alpha}\)=26 MeV (\(E\)=24.9 MeV), between which the characteristic dip in the excitation function appears.
gral \(J_{V}\) increases as the incident energy decreases from the rainbow energy region to the lower-energy region of BAA and reaches a maximum followed by a decrease toward \(E_{\alpha}\)=0. In fact, in Tables 1 and 2 we see that \(J_{V}\) increases from 280.5 MeVfm\({}^{3}\) at the rainbow energy \(E_{\alpha}\)=120 MeV to 318.9 MeVfm\({}^{3}\) at \(E_{\alpha}\)=35.4 MeV and 329.6 MeVfm\({}^{3}\) at \(E_{\alpha}\)=23 MeV. The dispersion relation implies that a potential with a reduced \(J_{V}\) value should be used in the bound and quasibound energy region below and near the threshold energy \(E\)=0. The overbinding of the ground-state energy in Fig. 6 (a) is simply ascribed to the fact that it is calculated using the potential with \(N_{R}\)=1.22 at \(E_{\alpha}\)=23 MeV with the large \(J_{V}\) value, without taking into account the energy dependence of the real potential due to the dispersion relation [72]. By using the double folding potential with a slightly reduced strength, \(N_{R}\)=1.182 with \(J_{V}\)=319.3 MeVfm\({}^{3}\), the calculated ground-state 0\({}^{+}\) energy agrees with the experimental value as seen in Fig. 6 (b). A similar situation, where \(J_{V}\) must be reduced in the bound and quasibound energy region compared with that used in the higher scattering energy region, has been reported in the recent unified description of bound and scattering states for the \(\alpha\)+\({}^{48}\)Ca cluster structure in \({}^{52}\)Ti [36] and the \(\alpha\)+\({}^{44}\)Ca cluster structure in \({}^{48}\)Ti [28].
In Fig. 6 (a) the calculated \(N=16\) ground-band states are very compressed, which is also the case for the \(N=16\) states calculated with \(N_{R}\)=1.182. Although the conventional Woods-Saxon potential gives an inverted energy level spectrum in this heavy mass region, namely the excitation energy of the ground-band states decreases as the spin increases in disagreement with experiment, the present double folding model potential gives an energy spectrum consistent with the experimental ground band. In fact, the excitation energy of the calculated energy levels of the \(N\)=16 band, which look almost degenerate in Fig. 6 (a), increases as the spin increases from 0\({}^{+}\) to 8\({}^{+}\). This compression is because the angular momentum dependence of the local potential has not been taken into account. In order to discuss the spectroscopic properties in the low-energy region, it is necessary to take into account the \(L\) dependence of the potential. The nucleus-nucleus potential, which is originally non-local due to the Pauli principle, has \(L\) dependence when represented as a local potential. The \(L\) dependence is usually not important and often neglected in the scattering energy region. However, this \(L\) dependence is important when we study the cluster structure in the bound and quasibound energy region. The necessity of the \(L\) dependence of the intercluster potential due to the Pauli principle has been theoretically founded in the microscopic studies of interactions between composite particles [73; 74]. In fact, it has been shown that this \(L\) dependence is indispensable in the \(\alpha\) cluster structure using a local potential, for example, in \({}^{20}\)Ne [75], \({}^{44}\)Ti [57; 58], \({}^{94}\)Mo [40; 44], \({}^{212}\)Po [40; 45], and \({}^{46,50}\)Cr [76]. Following the double folding potential model study of the \(\alpha\) cluster structure in \({}^{94}\)Mo in Ref. [40], where linear \(L\) dependence in the double folding potential was first discussed, we use \(N_{R}^{(L)}\)=\(N_{R}^{(L=0)}\) - \(c\)\(L\) with \(N_{R}^{(L=0)}\)=1.182 and \(c\)=5.00\(\times\)10\({}^{-3}\) for \({}^{96}\)Mo. The calculated energy levels of the \(N=16\) ground band are displayed in Fig. 6 (b). In Table 3 the calculated \(B(E2)\) values as well as the excitation energies and intercluster rms radii of the ground band of \({}^{96}\)Mo are listed in comparison with the experimental data. The excitation energy of the ground band is reproduced well by the double folding potential model with small \(L\) dependence. The experimental \(B(E2)\) values [77] are also reproduced well by introducing an additional small effective charge \(\Delta e=0.3e\) for protons and neutrons. We note that in the large-scale shell-model calculations in Ref. [52] rather large additional effective charges \(\Delta e=0.5e\) for protons and \(\Delta e=0.5-0.8e\) for neutrons are introduced. The rms charge radius \(<r^{2}>^{1/2}_{^{96}\rm Mo}\)=4.36 fm of the ground state calculated using the experimental values \(<r^{2}>^{1/2}_{^{4}\rm He}\)=1.676 fm and \(<r^{2}>^{1/2}_{^{92}\rm Zr}\)=4.306 fm [69] is in good agreement with the experimental value 4.38 fm [69]. The calculated intercluster distance of the ground
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(J^{\pi}\) & \multicolumn{2}{c}{\(E_{x}\) (MeV)} & \(\sqrt{\langle R^{2}\rangle}\) (fm) & \multicolumn{3}{c}{\(B(E2)\) (W.u.)} \\ & exp & cal & cal & exp [77] & cal & Ref. [52] \\ \hline
0\({}^{+}\) & 0.00 & 0.000 & 5.20 & & & \\
2\({}^{+}\) & 0.778 & 0.770 & 5.21 & 20.7\(\pm\)0.4 & 20.7 & 18.7 \\
4\({}^{+}\) & 1.628 & 1.611 & 5.19 & 41\(\pm\)7 & 28.7 & - \\
6\({}^{+}\) & 2.441 & 2.543 & 5.14 & - & 29.1 & - \\
8\({}^{+}\) & 2.978 & 3.583 & 5.05 & - & 26.5 & - \\ \hline \hline \end{tabular}
\end{table}
Table 3: The excitation energies \(E_{x}\), intercluster rms radii \(\sqrt{\langle R^{2}\rangle}\) and \(B(E2)\) values for the \(J\rightarrow(J-2)\) transitions of the ground band in \({}^{96}\)Mo calculated in the \(\alpha\)+\({}^{92}\)Zr cluster model with the double folding potential are compared with the experimental data [77] and the large-scale shell model calculation [52].
Figure 7: (Color online) The calculated \(u(R)\) of the relative wave function \(u(R)/R\) of the 10\({}^{+}\) and 12\({}^{+}\) states of the \(N=\)20 band with the \(\alpha\)+\({}^{92}\)Zr cluster structure in \({}^{96}\)Mo. The wave functions are calculated as scattering states and normalized to unity for \(R\leq\)10 fm.
state is about 87% of the sum of the experimental rms charge radii of the two clusters, which is compared to 87% for the ground state of \({}^{44}\)Ti [58].
Note that the value of the parameter \(c\)=5.00\(\times 10^{-3}\) lies in the expected range for \(\alpha\)-cluster states, \(c\)\(\approx\)(2.5-5)\(\times 10^{-3}\), as observed for many \(\alpha\)-cluster states in a wide range of nuclei [78] including the mass region near \(A=100\) such as \({}^{94}\)Mo [31; 40], \({}^{93}\)Nb [43; 79], and \({}^{104}\)Te [78]; the light and medium mass regions such as \({}^{20}\)Ne [80] and \({}^{44}\)Ti [81]; and the heavy mass region such as \({}^{212}\)Po [40; 82]. The \(L\)-dependent potential calculation locates the negative-parity \(N=17\) band with the band-head \(1^{-}\) state at \(E_{x}\)=6.83 MeV well below the Coulomb barrier. Although \(1^{-}\) states have been observed at \(E_{x}\)=3.600 MeV and 3.895 MeV [77], experimental spectroscopic information on the \(\alpha\) clustering of the excited states near and above \(E_{x}\)\(\approx\)4 MeV is not clear.
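As a quick numerical illustration (not an additional result of the model), the \(L\)-dependent renormalization across the ground band with the values \(N_{R}^{(L=0)}\)=1.182 and \(c\)=5.00\(\times 10^{-3}\) quoted above evaluates to
\[
N_{R}^{(L)}=1.182-5.00\times 10^{-3}L:\qquad N_{R}^{(0)}=1.182,\;N_{R}^{(2)}=1.172,\;N_{R}^{(4)}=1.162,\;N_{R}^{(6)}=1.152,\;N_{R}^{(8)}=1.142,
\]
i.e., the real part of the potential is weakened by only about 3% between the \(0^{+}\) and \(8^{+}\) members, which is nevertheless sufficient to remove the compression of the calculated ground band seen in Fig. 6 (a).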
The potential embeds the eight deeply bound unphysical Pauli-forbidden \(0^{+}\) states. The overlaps of the eight \(0^{+}\) wave functions with the harmonic oscillator wave functions with \(\nu\)=0.22 fm\({}^{-2}\) (\(\nu=m\omega/\hbar\) and \(\hbar\omega\)=9.12 MeV) are 0.96, 0.93, 0.93, 0.95, 0.97 and 0.98, 0.99, and 0.97 for the oscillator quanta \(N_{\rm HO}\)=0, 2, \(\cdots\), and 14, respectively. This means that the Pauli-forbidden states with \(N_{\rm HO}<16\) in the resonating group method are highly excluded from the obtained ground-state wave function, thus mimicking Saito's orthogonality condition model [83].
As seen in Fig. 8, the ground-state wave function resembles the shell-model wave function with \(N_{\rm HO}=16\) in the internal region. However, the outermost peak at around 6 fm is slightly shifted outward compared with that of the harmonic oscillator wave function, causing a significant enhancement of the amplitude in the outer surface region due to \(\alpha\) clustering. This enhancement means that the obtained wave function contains a significant amount of components in shells higher than \(N_{\rm HO}=16\).
In Fig. 9 the occupation probability of the quanta \(N_{\rm HO}\geq\)16 in the ground-state wave function is displayed. The dominant occupation probability of the lowest Pauli-allowed, shell-model-like \(N_{\rm HO}=16\) configuration is 78%. The significant amount of higher \(N_{\rm HO}\geq 18\) components, 22%, is due to the \(\alpha\) clustering of the ground state. The \(2^{+}\) and \(4^{+}\) states have a similar character. This \(\alpha\) clustering is responsible for the enhancement of the \(B(E2)\) values in \({}^{96}\)Mo and should also be taken into account in the evaluation of the NME of the \(0\nu\beta\beta\) decay of \({}^{96}\)Zr to \({}^{96}\)Mo.
We discuss the effect of \(\alpha\) clustering of \({}^{96}\)Mo on the NME of \(0\nu\beta\beta\) decay of \({}^{96}\)Zr, which was the motivation for the present \(\alpha\) cluster structure study of \({}^{96}\)Mo as introduced in Sec. I. The NME values of \(0\nu\beta\beta\) decay of \({}^{96}\)Zr evaluated by using various nuclear models are summarized in Ref. [7] and most recently in Ref. [19]. The QRPA calculations give NME values 2.72 with the Argonne V18 nucleon-nucleon potential and 2.96 with the CD-Bonn nucleon-nucleon potential with the axial vector coupling constant \(g_{A}\)=1.27 in Ref. [14] and 3.14 with \(g_{A}\)=1.26 in Ref. [15]. The IBM calculation by Barea _et al._ gives 2.53 in Ref. [27]. The latest PHFB calculations give 2.5 [19]. On the other hand, the EDF calculations give considerably larger NME values, about twice as large as the above values. The nonrelativistic EDF calculations by Vaquero _et al._[24] give 5.65 with \(g_{A}\)=1.26 when evaluated with shape fluctuation and 6.50 when evaluated with both the shape and pairing fluctuations. The relativistic EDF calculation by Yao _et al._[25] gives almost the same large result. Yao _et al._ claim [25] that the EDF calculations are unable to reproduce the properties of \({}^{96}\)Zr, giving a too-low excitation energy of E(\(2^{+}_{1}\)) and a too-large \(B(E2:0_{\rm g.s.}\to 2^{+}_{1})\) value, which is one order of magnitude larger than the experimental data. Yao _et al._[25] ascribe this to the overestimation of the collectivity in \({}^{96}\)Zr due to the "common problem of most EDF-based GCM or collective Hamiltonian calculations." Moreover, the GCM calculation in the framework of covariant density functional theory [22] gives the largest value of 6.37 among the nuclear model calculations.
Figure 8: (Color online) The calculated \(u(R)\) of the relative wave function \(u(R)/R\) of the ground state of \({}^{96}\)Mo calculated in the \(\alpha\)+ \({}^{92}\)Zr cluster model (solid line) is compared with the harmonic oscillator wave function with \(N_{\rm HO}\) =16 (dashed line).
Figure 9: (Color online) The occupation probability of the harmonic oscillator quanta \(N_{\rm HO}\) in the ground-state wave function of \({}^{96}\)Mo.
The overestimation of the collectivity of the doubly closed-subshell nucleus \({}^{96}\)Zr increases the overlap of the wave functions of \({}^{96}\)Zr and \({}^{96}\)Mo, which leads to the large NME values. Although the present cluster model is unable to calculate NME values because nucleon degrees of freedom are not involved, it can qualitatively tell whether the NME is enhanced or reduced by \(\alpha\) clustering of \({}^{96}\)Mo compared with the shell-model calculations in which the excitations to the higher major shells are not included. Taking into account that the excitation energy 1.58 MeV of the first excited state \(0^{+}\) of \({}^{96}\)Zr is rather high in this mass region, resembling the mysterious \(0^{+}\) of the doubly magic nucleus \({}^{16}\)O [47], and that there is no evidence that \({}^{96}\)Zr has \(\alpha\)+\({}^{92}\)Sr clustering [47], the ground-state wave function of \({}^{96}\)Zr can well be considered to have a doubly closed-subshell shell-model structure. Thus the \(\alpha\) clustering of \({}^{96}\)Mo considerably reduces the overlap of the ground-state wave function of \({}^{96}\)Zr with that of \({}^{96}\)Mo in the evaluation of the NME. That is, the \(0\nu\beta\beta\) decay of \({}^{96}\)Zr to \({}^{96}\)Mo would be significantly quenched, and thus have a longer half-life, due to the \(\alpha\) clustering, compared with shell-model calculations that do not take into account the four-particle excitations. Unfortunately, NME values in the shell model have not been reported. The shell-model calculations with configuration mixing including \(N_{\rm HO}=18\), 20 and 22 major shells are presently formidably difficult even with modern computers. We note that both the QRPA and IBM calculations do not include \(\alpha\)-like four-particle four-hole excitations and \(\alpha\)-like correlations.
Finally, we briefly mention the large \(B(E2)\) value of 51.7 W.u. for the transition from \(0^{+}_{2}\) (\(E_{x}\)=1.148 MeV) to \(2^{+}_{1}\) (\(E_{x}\)=0.778 MeV) in \({}^{96}\)Mo [77], which may suggest that the \(0^{+}_{2}\) state has \(\alpha\) clustering in which the core is excited. If the \(0^{+}_{2}\) state has a significant amount of the [\(\alpha_{(L=2)}\)+\({}^{92}\)Zr(\(2^{+}_{1}\))]\({}_{J=0}\) clustering component, then the \(B(E2)\) value can be enhanced because, in addition to the \(E2\) transition matrix element due to the inter-cluster relative motion, \(\langle 2^{+}_{1}(\alpha_{L=0})|\hat{O}_{E2}({\bf r})|0^{+}_{2}(\alpha_{L=2})\rangle\), the internal transition of the core \({}^{92}\)Zr, \(\langle{}^{92}\mathrm{Zr}(\mathrm{g.s.})|\hat{O}_{E2}(\xi)|{}^{92}\mathrm{Zr}(2^{+}_{1})\rangle\), contributes to the total \(E2\) transition, where \(\xi\) is the internal coordinate of \({}^{92}\)Zr. Coupled-channels calculations with excitations of \({}^{92}\)Zr would be a future challenge to understand the origin of the large \(B(E2)\) value of the \(0^{+}_{2}\) state of \({}^{96}\)Mo and the effective charge.
## VI Summary
In the evaluation of the nuclear matrix element of the neutrinoless double-\(\beta\) decay \(0\nu\beta\beta\) of the doubly closed-subshell nucleus \({}^{96}\)Zr to \({}^{96}\)Mo, it is important to take into account the collectivity due to \(\alpha\) clustering in the structure of \({}^{96}\)Mo, which has two extra neutrons added to the \({}^{94}\)Mo nucleus, an analog of \({}^{20}\)Ne and \({}^{44}\)Ti that has been considered to have an \(\alpha\) cluster structure. We have studied for the first time the \(\alpha\) clustering aspects of \({}^{96}\)Mo by using a double folding potential determined from the analysis of nuclear rainbows at high energies and the characteristic structure of the angular distributions at low energies in \(\alpha\) particle scattering from \({}^{92}\)Zr. The validity of the double folding potential used is also confirmed by studying \(\alpha\) scattering from \({}^{90,91,94}\)Zr in the low-energy region, where a characteristic dip appears in the excitation functions at the extreme backward angle near \(180^{\circ}\). The double folding model calculations reproduced well all the observed angular distributions over a wide range of incident energies as well as the excitation functions with a characteristic dip at the extreme backward angle. By studying the \(\alpha\) cluster structure with the obtained double folding potential, the existence of the second-higher nodal \(N=20\) band states with the \(\alpha\)+\({}^{92}\)Zr cluster structure, in which two more nodes are excited in the relative motion compared with the \(N=16\) ground band in \({}^{96}\)Mo, is demonstrated for the first time in the highly excited energy region. The \(\alpha\)-cluster model using this potential locates the ground state in agreement with experiment and reproduces the observed \(B(E2)\) values of \({}^{96}\)Mo. The effect of \(\alpha\) clustering in \({}^{96}\)Mo on the half-life of the \(0\nu\beta\beta\) decay of \({}^{96}\)Zr is discussed.
###### Acknowledgements.
One of the authors (S.O.) thanks the Yukawa Institute for Theoretical Physics, Kyoto University where part of the work was done during a stay in 2022.
|
2309.03759 | M(otion)-mode Based Prediction of Ejection Fraction using
Echocardiograms | Early detection of cardiac dysfunction through routine screening is vital for
diagnosing cardiovascular diseases. An important metric of cardiac function is
the left ventricular ejection fraction (EF), where lower EF is associated with
cardiomyopathy. Echocardiography is a popular diagnostic tool in cardiology,
with ultrasound being a low-cost, real-time, and non-ionizing technology.
However, human assessment of echocardiograms for calculating EF is
time-consuming and expertise-demanding, raising the need for an automated
approach. In this work, we propose using the M(otion)-mode of echocardiograms
for estimating the EF and classifying cardiomyopathy. We generate multiple
artificial M-mode images from a single echocardiogram and combine them using
off-the-shelf model architectures. Additionally, we extend contrastive learning
(CL) to cardiac imaging to learn meaningful representations from exploiting
structures in unlabeled data allowing the model to achieve high accuracy, even
with limited annotations. Our experiments show that the supervised setting
converges with only ten modes and is comparable to the baseline method while
bypassing its cumbersome training process and being computationally much more
efficient. Furthermore, CL using M-mode images is helpful for limited data
scenarios, such as having labels for only 200 patients, which is common in
medical applications. | Ece Ozkan, Thomas M. Sutter, Yurong Hu, Sebastian Balzer, Julia E. Vogt | 2023-09-07T15:00:58Z | http://arxiv.org/abs/2309.03759v1 | # M(otion)-mode Based Prediction of Ejection Fraction using Echocardiograms
###### Abstract
Early detection of cardiac dysfunction through routine screening is vital for diagnosing cardiovascular diseases. An important metric of cardiac function is the left ventricular ejection fraction (EF), where lower EF is associated with cardiomyopathy. Echocardiography is a popular diagnostic tool in cardiology, with ultrasound being a low-cost, real-time, and non-ionizing technology. However, human assessment of echocardiograms for calculating EF is time-consuming and expertise-demanding, raising the need for an automated approach. In this work, we propose using the M(otion)-mode of echocardiograms for estimating the EF and classifying cardiomyopathy. We generate multiple artificial M-mode images from a single echocardiogram and combine them using off-the-shelf model architectures. Additionally, we extend contrastive learning (CL) to cardiac imaging to learn meaningful representations from exploiting structures in unlabeled data allowing the model to achieve high accuracy, even with limited annotations. Our experiments show that the supervised setting converges with only ten modes and is comparable to the baseline method while bypassing its cumbersome training process and being computationally much more efficient. Furthermore, CL using M-mode images is helpful for limited data scenarios, such as having labels for only 200 patients, which is common in medical applications.
Keywords:Echocardiography M-mode Ultrasound Ejection Fraction Computer Assisted Diagnosis (CAD)
## 1 Introduction
Cardiovascular diseases (CVD) are the leading cause of death worldwide, responsible for nearly one-third of global deaths [29]. Early assessment of cardiac dysfunction through routine screening is essential, as clinical management and
behavioral changes can prevent hospitalizations and premature deaths. An important metric for assessing cardiac (dys)function is the left ventricular (LV) ejection fraction (EF), which is computed from the LV end-systolic and end-diastolic volumes [3; 21].
Echocardiography is the most common and readily available diagnostic tool to assess cardiac function, ultrasound (US) imaging being a low-cost, non-ionizing, and rapid technology. However, the manual evaluation of echocardiograms is time-consuming, operator-dependent, and expertise-demanding. Thus, there is a clear need for an automated method to assist clinicians in estimating EF.
M(otion)-mode is a form of US in which a single scan line is emitted and received at a high frame rate through time to evaluate the dynamics of the structures along that line and assess different diseases [23]. M-mode is often utilized in clinical practice, e.g., in lung ultrasonography [1; 25] or echocardiography [6; 7; 26; 10]. Since cardiac function assessment relies on heart dynamics, M-mode images can be an excellent alternative to B(rightness)-mode image- or video-based methods. However, little effort has been directed toward exploiting M-mode images in an automated manner.
Data collection and annotation are expensive for most applications. Therefore, learning from limited labeled data is critical in data-limited problems, such as in healthcare. To overcome this data bottleneck, self-supervised learning (SSL) methods have been recently proposed to learn meaningful high-level representations from unlabeled data [16; 24].
**Related Work** A few existing works [14; 18] reconstruct M-mode images from B-mode videos to detect pneumothorax using CNNs. Furthermore, authors in [27] propose an automatic landmark localization method in M-mode images. A more related method using M-mode images in an automated manner to estimate EF is [22], which uses single M-mode images in parasternal long-axis view to measure chamber dimensions for calculating EF.
For automated EF prediction, some previous works exploit either still images [17; 31; 8] or spatio-temporal convolutions on B(rightness)-mode echocardiography videos [21]. However, still-image-based methods have a high variability [20], and video-based methods rely on a complex pipeline with larger models. Furthermore, [19] uses vision transformers and CNNs to tackle the problem of estimating the LV EF, and [15] uses geometric features of the LV derived from echocardiography video frames to estimate EF. The authors in [28] evaluate ML-based methods in a multi-cohort setting using different imaging modalities. In the SSL setting, [5] propose a contrastive learning framework for deep image regression, which consists of a feature learning branch via a novel adaptive-margin contrastive loss and a regression prediction branch using echocardiography frames as input.
**Our Contribution** We propose to extract images from readily available B-mode echocardiogram videos, each mimicking an M-mode image from a different scan line of the heart. We combine the different artificial M-mode images using off-the-shelf model architectures and estimate EF to diagnose cardiomyopathy in a supervised regime. Using M-mode images allows the model to naturally observe the motion and sample the heart from different angles while bypassing cumbersome 3D models. Secondly, we propose an alternative scheme for
predicting EF using generated M-mode images in a self-supervised fashion while extending contrastive learning. We design a problem-specific contrastive loss for M-mode images to learn representations with structure and patient awareness. We evaluate both regimes on the publicly available EchoNet-Dynamic dataset [20] and demonstrate both models' effectiveness.
To the best of our knowledge, this is the first work on image-based cardiac function prediction methods that incorporate temporal information to estimate EF. Furthermore, our method can easily be applied to other problems where cardiac dynamics play an essential role in the diagnosis. To ensure reproducibility, we made the code available: [https://github.com/thomassutter/mmodeecho](https://github.com/thomassutter/mmodeecho).
## 2 Methods
This work aims to create a pipeline with as little intervention as possible; thus, our method consists of two parts, as shown in Figure 1. The first part is extracting M-mode images from readily available B-mode videos. The second part learns representations, i.e., lower-level features that preserve information about the input image, which are used to predict EF from the M-mode images under two schemes: supervised and self-supervised learning.
### From B-mode Videos to M-mode Images
Assume our dataset contains \(N\) patients. For each patient \(i=\{1,2,\cdots,N\}\), the label \(y_{i}\) indicates its EF. Furthermore, the B-mode echocardiogram video of each patient \(i\) is given of size \(h\times w\times t\) with \(h\) being height, \(w\) width, and \(t\) number of frames of the video. The \(m\)-th M-mode image of patient \(i\) is given as \(\mathbf{x}_{i}^{m}\) with \(m=\{1,2,\cdots,M\}\). It is a single line of pixels through the center of the image with an angle \(\theta_{m}\) over frames, assuming LV is around the center throughout the video, as in Figure 1(a). This image, corresponding to \(\theta_{m}\), is then of size \(s_{m}\times t\), with \(s_{m}\) as the length of the scan line. For simplicity, we set \(s_{m}=h\;\forall\;m\) independent of its angle \(\theta_{m}\). For generating multiple M-mode images, a set of \(M\) angles \(\mathbf{\theta}=[\theta_{1},\dots,\theta_{M}]\) is used to generate \(M\) M-mode images, where the angles \(\mathbf{\theta}\) are equally spaced between \(0^{\circ}\) and \(180^{\circ}\).
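To make this generation step concrete, a minimal NumPy sketch is given below; the function name, the nearest-neighbour sampling of the rotated scan line, and the placeholder video are our own illustrative choices and are not taken from the released code.

```python
import numpy as np

def extract_mmode(video: np.ndarray, theta_deg: float) -> np.ndarray:
    """Build one artificial M-mode image from a B-mode video.

    video: array of shape (t, h, w) holding t grayscale frames.
    theta_deg: angle of the scan line through the image center.
    Returns an (h, t) image: one column per frame.
    """
    t, h, w = video.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    theta = np.deg2rad(theta_deg)
    # Sample s = h points along a line through the center at angle theta.
    offsets = np.linspace(-(h - 1) / 2.0, (h - 1) / 2.0, h)
    ys = np.clip(np.round(cy + offsets * np.sin(theta)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + offsets * np.cos(theta)).astype(int), 0, w - 1)
    # Gather the same scan line from every frame; time runs along axis 1.
    return video[:, ys, xs].T

# M equally spaced angles between 0 and 180 degrees, as described above.
M = 10
angles = np.linspace(0.0, 180.0, M, endpoint=False)
video = np.random.rand(112, 112, 112)  # placeholder (t, h, w) clip
mmodes = np.stack([extract_mmode(video, a) for a in angles])  # (M, h, t)
```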
While the proposed approach for generating M-mode images is intuitive and works well (see Section 3.3), other approaches are also feasible. For instance, the center of rotation, which lies in the middle of the image in our M-mode generation process, could be changed. In this way, we could mimic the behavior of the data collection process, as every generated M-mode image would resemble a scan line of the US probe. However, the main goal of this work is to highlight the potential of M-mode images for the analysis of US videos. Given our convincing results, we leave the exploration of different M-mode generation mechanisms for future work.
### Learning Representations from M-mode Images
#### 2.2.1 Supervised Learning for EF Prediction
We aim to learn supervised representations using off-the-shelf model architectures to estimate EF. Instead of
using a single M-mode, one can aggregate the information of M-mode images from the same patient to increase robustness. We evaluate two fusion methods for aggregating information among the \(M\) M-mode images: early fusion and late fusion [2]. With early fusion, we construct an \(M\times s\times t\) image with the \(M\) M-mode images being the \(M\) channels of the newly created image. For late fusion, we evaluate three different methods. For all of the late-fusion schemes, we first infer an abstract representation \(\mathbf{z}_{i}^{m}\) for every M-mode image \(\mathbf{x}_{i}^{m}\). The representations \(\mathbf{z}_{i}^{m}\) are then aggregated into a joint representation \(\mathbf{\tilde{z}}_{i}\) using an LSTM cell [11], averaging, or concatenation.
We utilize a standard ResNet architecture [9] with 2D-convolutional layers independent of the fusion principle. With 2D-convolutions, we assume a single M-mode image as a 2D gray-scale image with two spatial dimensions, \(s\) and \(t\).
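A minimal PyTorch-style sketch of the late-fusion variant with concatenation is given below; the class name, the ResNet-18 backbone, and the single-linear-layer regression head are illustrative assumptions rather than the exact released implementation.

```python
import torch
import torch.nn as nn
import torchvision

class LateFusionEF(nn.Module):
    """Encode each M-mode image separately, then fuse the representations."""

    def __init__(self, num_modes: int = 10, feat_dim: int = 512):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)
        # Single-channel input: each M-mode image is a grayscale s x t image.
        resnet.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        resnet.fc = nn.Identity()  # keep the 512-d feature vector per image
        self.encoder = resnet
        self.head = nn.Linear(num_modes * feat_dim, 1)  # EF regression head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, M, s, t) -> encode each mode independently.
        b, m, s, t = x.shape
        feats = self.encoder(x.reshape(b * m, 1, s, t))   # (b*m, feat_dim)
        fused = feats.reshape(b, -1)                       # concatenate the M modes
        return self.head(fused).squeeze(-1)                # predicted EF per patient

model = LateFusionEF(num_modes=10)
ef = model(torch.rand(4, 10, 112, 112))  # batch of 4 patients, 10 modes each
```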
Figure 1: Overview of our proposed method. (a) Generate M-mode images from B-mode echocardiography videos at different scan lines. (b) Learn representations from the generated M-mode images using supervised and self-supervised learning schemes. (c) Evaluate EF prediction to diagnose cardiomyopathy.
#### 2.2.2 Self-Supervised Learning for EF Prediction
This part aims to learn meaningful representations from unlabeled data to estimate EF using echocardiograms. To this end, we propose an SSL scheme for M-mode images based on contrastive learning, where M-mode images from the same patient can naturally serve as positive pairs since they share labels for many downstream tasks. As discussed by [30], bio-signal data is inherently highly heterogeneous; thus, when applying learning-based methods to patient data, we need to consider both the similarity and the difference between samples originating from the same patient. Thus, we propose a problem-specific contrastive loss with patient and structure awareness, as shown in Figure 2.
Figure 2: Overview of our proposed SSL method. The contrastive loss includes (a) patient awareness to attract similarity between data from the same patient while discouraging it between different patients and (b) structure awareness to take the (possible) dissimilarity from the same patient into account.
#### Contrastive Learning Framework
The framework contains training and evaluation stages and the overview is illustrated in Figure 3. In the training stage, we optimize the model with the contrastive loss leveraging the information from underlying structures of the unlabeled images. In the evaluation stage, a multi-layer perceptron (MLP) head is trained on top of the learned representations in a supervised manner.
For each generated M-mode image \(\mathbf{x}_{i}^{m}\), we generate its augmented view \(\mathbf{x}_{i}^{v(m)}\) using the \(Aug(\cdot)\) module. So the augmented dataset is represented as \(\{(\mathbf{x}_{i}^{m},\ \mathbf{x}_{i}^{v(m)},\ y_{i})\}\). The encoder network \(Enc(\cdot)\) maps each image \(\mathbf{x}_{i}^{m}\) to a feature vector \(\mathbf{z}_{i}^{m}\). We utilize a standard ResNet architecture [9].
In the training stage, \(\mathbf{z}_{i}^{m}\) is normalized to the unit hyper-sphere before being passed to the projection network. Following the work [4], we introduce a learnable non-linear projection network between the representation and the contrastive loss. The projection network \(Proj(\cdot)\) takes the normalized lower-level representation \(\mathbf{z}_{i}^{m}\) as input and outputs the higher-level representation \(\mathbf{p}_{i}^{m}\). We use a two-layer MLP with ReLU activation as \(Proj(\cdot)\) in this work.
In the evaluation stage, we initialize the parameters of the encoder network \(Enc(\cdot)\) with the model obtained from contrastive learning and add an MLP head \(Head(\cdot)\) to the top. For each patient \(i\), we have \(M\) feature vectors \(\mathbf{z}_{i}^{m}\in\mathbb{R}^{K}\). The \(M\) vectors are then fused to get the joint representation \(\mathbf{\tilde{z}}_{i}\in\mathbb{R}^{K\times M}\) and passed to \(Head(\cdot)\). One can have different fusion methods for aggregating information among the \(M\) vectors, e. g. using an LSTM cell [11], averaging, or concatenating.
Figure 3: Schema of the contrastive learning framework with training and evaluation stages. The training stage exploits the contrastive loss to learn a representation leveraging the unlabelled images. The evaluation stage exploits these learned representations in a supervised manner to predict EF.
#### Contrastive Loss for M-mode Images
To account for (dis)similarities, we design two loss functions for learning both patient- and structure-awareness.
(a) Patient-aware loss: The goal is to attract the representations from the same patient to be similar while pushing apart representations from different patients (see Figure 2 (a)). This enforces two M-mode images to be considered similar if they are from the same patient and dissimilar if they are from different patients. The patient-aware loss is given as:
\[L^{PA}=-\frac{1}{M-1}\sum_{i=1}^{N}\sum_{m=1}^{M}\sum_{l\neq m}\log\frac{\exp( \boldsymbol{p}_{i}^{m}\cdot\boldsymbol{p}_{i}^{l}/\tau)}{\sum_{j,k}\exp( \boldsymbol{p}_{i}^{m}\cdot\boldsymbol{p}_{j}^{k}/\tau)-\exp(\boldsymbol{p}_{i }^{m}\cdot\boldsymbol{p}_{i}^{m}/\tau)} \tag{1}\]
where \(N\) is the number of patients in one batch, \(M\) is the number of original M-mode images used for each patient, and \(\tau\) is the temperature scaling parameter. The term \(\boldsymbol{p}_{i}^{m}\) represents the output of \(Proj(\cdot)\).
Inspired by [30], we tried defining a neighborhood function to limit the similarity of M-mode images from the same patient. However, incorporating the neighborhood into patient-awareness did not further improve the results; thus, we used all M-mode images per patient to define the patient-aware loss.
(b) Structure-aware loss: If we only use patient-aware loss \(L^{PA}\), there exists a risk that all images from the same patient collapse to a single point [30]. So we propose the structure-aware loss to introduce some diversity (see Figure 2 (b)). To incorporate this into the learned representations, we construct positive pairs from each M-mode image with its augmentation and consider other combinations as negative pairs. It is then defined as:
\[L^{SA}=-\sum_{i=1}^{N}\sum_{m=1}^{2M}\log\frac{\exp(\boldsymbol{p}_{i}^{m} \cdot\boldsymbol{p}_{i}^{v(m)}/\tau)}{\sum_{l\neq m}\exp(\boldsymbol{p}_{i}^{m }\cdot\boldsymbol{p}_{i}^{l}/\tau)} \tag{2}\]
If image \(m\) is an original image, then \(v(m)\) represents its augmented view; if image \(m\) is an augmented image, then \(v(m)\) represents the original image. Minimizing \(L^{SA}\) drives the representation pairs from the augmented images in the numerator close while pushing the representations in the denominator far away, where the denominator contains M-mode images from the same patient.
Finally, we combine the two losses to get structure-aware and patient-aware contrastive loss for M-mode images using the hyperparameter \(\alpha\) to control the trade-off between the awareness terms:
\[L^{CL}=\alpha L^{PA}+(1-\alpha)L^{SA}. \tag{3}\]
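The following PyTorch sketch implements Eqs. (1)-(3) for a batch of \(N\) patients with \(M\) original M-mode images and their augmented views. It reflects our reading of the equations, in particular that the patient-aware denominator runs over all original images in the batch and that the structure-aware denominator runs over the \(2M\) images of the same patient; it is not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def mmode_contrastive_loss(p: torch.Tensor, p_aug: torch.Tensor,
                           alpha: float = 0.8, tau: float = 0.01) -> torch.Tensor:
    """Combined patient-aware / structure-aware loss, Eqs. (1)-(3).

    p:     (N, M, d) projections of the original M-mode images.
    p_aug: (N, M, d) projections of their augmented views.
    """
    N, M, d = p.shape
    p = F.normalize(p, dim=-1)
    p_aug = F.normalize(p_aug, dim=-1)

    # Patient-aware term (Eq. 1), computed over the N*M original images.
    flat = p.reshape(N * M, d)
    sim = torch.exp(flat @ flat.t() / tau)                       # (N*M, N*M)
    denom = sim.sum(dim=1) - sim.diagonal()                      # exclude the self term
    patient = torch.arange(N, device=p.device).repeat_interleave(M)
    same = (patient[:, None] == patient[None, :]) & \
           ~torch.eye(N * M, dtype=torch.bool, device=p.device)
    l_pa = -torch.log(sim / denom[:, None])[same].sum() / (M - 1)

    # Structure-aware term (Eq. 2), computed within each patient's 2M images.
    z = torch.cat([p, p_aug], dim=1)                             # (N, 2M, d)
    sim2 = torch.exp(torch.einsum('nmd,nld->nml', z, z) / tau)   # (N, 2M, 2M)
    eye = torch.eye(2 * M, dtype=torch.bool, device=p.device)
    denom2 = sim2.masked_fill(eye, 0).sum(dim=-1)                # sum over l != m
    v = (torch.arange(2 * M, device=p.device) + M) % (2 * M)     # index of the paired view
    pos = sim2[:, torch.arange(2 * M, device=p.device), v]       # sim(m, v(m))
    l_sa = -torch.log(pos / denom2).sum()

    return alpha * l_pa + (1 - alpha) * l_sa

# Example: 8 patients, 10 modes, 128-d projections.
loss = mmode_contrastive_loss(torch.randn(8, 10, 128), torch.randn(8, 10, 128))
```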
## 3 Experiments and Results
### Dataset
We use the publicly available EchoNet-Dynamic dataset [20]. It contains \(10^{\prime}030\) apical-\(4\)-chamber echocardiography videos from individuals who underwent
imaging between 2016 and 2018 as part of routine clinical care at Stanford University Hospital. Each B-mode video was cropped and masked to remove information outside the scanning sector and downsampled into standardized \(112\times 112\) pixel videos. For simplicity, we used videos with at least 112 frames. We use the official splits with 7465 training, 1289 validation, and 1282 test set samples.
### Experimental Setup
We evaluate the models' performance using classification accuracy for five random seeds and report the mean performance and standard deviation. During training, all supervised models optimize the estimation of EF as a regression task. For testing, we use a constant threshold \(\tau\) for classifying cardiomyopathy. In all experiments, we set \(\tau=0.5\). Hence, an estimated EF below \(\tau=0.5\) results in classifying a sample as cardiomyopathic.
We evaluate all models using the area under the receiver operating characteristic (AUROC) and the area under the precision-recall curve (AUPRC) with respect to whether a patient is correctly classified as healthy or cardiomyopathic. Additionally, we report the mean absolute error (MAE) and the root mean squared error (RMSE) of the predicted EF with respect to the true EF in the Supplementary Material. We report the mean performance, including standard deviations over five random seeds for all results.
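For completeness, a small sketch of this evaluation protocol is shown below; the helper name and the use of scikit-learn's average precision as the AUPRC estimate are our own assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate(ef_true: np.ndarray, ef_pred: np.ndarray, tau: float = 0.5) -> dict:
    """Classification and regression metrics from predicted EF values."""
    labels = (ef_true < tau).astype(int)   # 1 = cardiomyopathic (true EF below threshold)
    scores = tau - ef_pred                 # lower predicted EF -> higher risk score
    return {
        "AUROC": roc_auc_score(labels, scores),
        "AUPRC": average_precision_score(labels, scores),
        "MAE": float(np.mean(np.abs(ef_pred - ef_true))),
        "RMSE": float(np.sqrt(np.mean((ef_pred - ef_true) ** 2))),
    }

print(evaluate(np.random.uniform(0.2, 0.8, 500), np.random.uniform(0.2, 0.8, 500)))
```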
We use the training set from EchoNet for pre-training (SSL), and apply a linear learning rate scheduler during the first 30 epochs as warm-up. For the supervised fine-tuning, we select different proportions of the training set in the limited labeled data scenario. All M-mode models are trained for 100 epochs using the Adam optimizer [12] with an initial learning rate of 0.001 and a batch size of 64. For image augmentation, we apply random horizontal flip and Gaussian noise. For the fusion of the M-mode representations, we used concatenation. For the EchoNet model, we use the same model and parameters as in [21]. The model is trained for 45 epochs with a learning rate of 0.0001 and a batch size of 20. We do not use test-time augmentation for any of the models. We report the full set of hyperparameters used in our experiments in Table 1.
### Results and Discussion
#### 3.3.1 Evaluating M-mode Images in Supervised Setting
We train and evaluate models with different numbers of M-modes for \(M\in\{1,2,5,10,20,50\}\). We use the complete training set, including labels, as we are interested in the performance of the models depending on the number of available M-modes. Figure 4 shows the results for different numbers of M-modes. We see that late fusion models benefit from an increasing number of modes, whereas the early fusion method overfits quickly and never achieves a comparable performance.
#### 3.3.2 Evaluating Limited Data Regime
We evaluate the accuracy of the different models introduced in Section 2 for different amounts of labeled training samples.
Figure 4: Performance for different numbers of M-mode images using early and late-fusion methods. We evaluate the classification performance with respect to AUPRC in (a) and AUROC in (b), and the regression performance with respect to RMSE in (c), MAE in (d), and \(R^{2}\)-score in (e).
As most medical datasets do not reach the size of EchoNet-Dynamic [13], methods for medical machine learning should perform well in the limited labeled data regime. We use _E2E_ for the supervised and _CL_ for the self-supervised setting.
Additionally, we introduce _E2E+_ and _CL+_, which, inspired by EchoNet [21], use random short clips for each training epoch. Both models use M-mode images of 32 frames with a sampling period of 2. We train and evaluate models using \(p\%\) of the full training set for \(p\in\{1,2,3,5,10,20,30,\)\(50,75,100\}\). All M-mode methods are trained with \(M=10\).
Figure 5 shows the limited labeled data experiment results. Although we are not able to reach the performance of the EchoNet model for any number of modes (see Figure 4(b)) if the number of labeled training samples is high (see Figure 5(a)), both supervised and self-supervised learning methods using M-mode instead of B-mode can outperform the EchoNet model in the low labeled data regime (\(p<5\%\), Figure 5(b)). Also, we observe that using shorter clips is useful for the self-supervised learning methods, with _CL+_ being able to achieve an AUROC over 0.85 with only around 200 labeled samples.
#### 3.3.3 Computational Cost
Furthermore, we compare the number of parameters and computational costs for different models in Table 2, where we used a multi-GPU setup with four NVIDIA GeForce RTX 2080 Ti GPUs. We report the computation time in seconds per batch (sec/B) and milliseconds per sample (msec/sample), and the memory requirements in gigabytes per batch (GB/B).
Our proposed M-mode image based models require around six times less time and ten times less memory to train and run inference per sample. Given the used
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Parameter & Value & Description \\ \hline lr\_sup & 0.001 & learning rate for supervised training \\ lr\_cl & 1.0 & learning rate for SSL training \\ opt & Adam & optimizer for SSL and supervised training \\ bsz\_sup & 64 & batch size for supervised training \\ bsz\_cl & 256 & batch size for SSL training \\ epoch\_sup & 100 & epochs for supervised training \\ epoch\_cl & 300 & epochs for SSL training \\ epoch\_warm & 30 & warm-up epochs for SSL training \\ \(\alpha\) & 0.8 & loss trade-off \\ \(\tau\) & 0.01 & temperature scaling \\ Dim\_e & 512 & \(Enc(\cdot)\) output dimension \\ Dim\_ph & 2048 & \(Proj(\cdot)\) hidden layer dimension \\ Dim\_po & 128 & \(Proj(\cdot)\) output dimension \\ Dim\_lstm & 256 & LSTM output dimension \\ \hline \hline \end{tabular}
\end{table}
Table 1: List of the hyperparameters used in our experiments. We use the same hyperparameters for the E2E setup and the fine-tuning stage of the SSL setup (denoted as "_sup" in Table 1). "_cl" denotes the hyperparameters used in the SSL pre-training stage.
memory per batch, we could increase the batch size for the M-mode methods, lowering the computation time per sample even further, whereas the baseline model is already at the limit due to its architecture.
## 4 Discussion and Conclusion
In this work, we propose to generate M-mode images from readily available B-mode echocardiography videos and fuse these to estimate EF and, thus, cardiac dysfunction. Our results show that M-mode-based prediction methods are comparable to the baseline method while avoiding its complex training routine and reducing the computational cost and the need for expensive expert input.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & & & \multicolumn{3}{c}{Time (sec/B)} & \multicolumn{2}{c}{Time (msec/sample)} & \multicolumn{2}{c}{Memory (GB/B)} \\ \cline{3-10} Model & BS \#Params (Mio.) & Train & Test & Train & Test & Train & Test \\ \hline EchoNet & 20 & 31.5 & 2.898 & 2.474 & 144.9 & 123.7 & 5.294 & 1.187 \\ E2E \& CL & 64 & 11.7 & 1.568 & 1.330 & 24.5 & 21.1 & 1.013 & 0.120 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Computational costs. We evaluate the EchoNet and the proposed M-mode methods with respect to the number of parameters, the computation time, and the memory requirements. All M-mode models are evaluated using \(M=10\). E2E defines the end-to-end supervised and CL the contrastive learning approach.
Figure 5: Results for different training set sizes using the proposed end-to-end supervised (E2E) and contrastive learning (CL) approaches. In (a), we train and evaluate the models on 10%-100% labeled training samples, in (b) only on 1%-10% of the samples. E2E and CL models are trained using a fixed long clip with length 112; E2E+ and CL+ are trained using random short clips with length 32. CL freeze and CL+ freeze are fine-tuned with the encoder parameters frozen.
Conventional M-mode images have a very high sampling rate, which results in a high temporal resolution so that even very rapid motion can be recorded. The generated M-mode images have significantly less temporal resolution than the conventional M-mode images from US machines. However, our results indicate that exploiting generated M-mode images does not limit the performance for EF estimation. As we do not use the M-mode images collected directly from the US machines, there is no need for an additional data collection step.
Additionally, we show the potential of pre-trained methods. In scenarios where expensive expert labels are not readily available, pre-training using unlabeled M-mode images outperforms more complicated pipelines, highlighting the potential of M-mode based pipelines for clinical use cases. In our future work, we want to investigate the use cases for M-mode on different diseases and further improve the performance of the proposed pre-training pipeline.
## 5 Acknowledgements
EO was supported by the SNSF grant P500PT-206746 and TS by the grant 2021-911 of the Strategic Focal Area "Personalized Health and Related Technologies (PHRT)" of the ETH Domain (Swiss Federal Institutes of Technology). |
2309.04036 | One-to-Multiple Clean-Label Image Camouflage (OmClic) based Backdoor
Attack on Deep Learning | Image camouflage has been utilized to create clean-label poisoned images for
implanting backdoor into a DL model. But there exists a crucial limitation that
one attack/poisoned image can only fit a single input size of the DL model,
which greatly increases its attack budget when attacking multiple commonly
adopted input sizes of DL models. This work proposes to constructively craft an
attack image through camouflaging but can fit multiple DL models' input sizes
simultaneously, namely OmClic. Thus, through OmClic, we are able to always
implant a backdoor regardless of which common input size is chosen by the user
to train the DL model given the same attack budget (i.e., a fraction of the
poisoning rate). With our camouflaging algorithm formulated as a
multi-objective optimization, M=5 input sizes can be concurrently targeted with
one attack image, which artifact is retained to be almost visually
imperceptible at the same time. Extensive evaluations validate the proposed
OmClic can reliably succeed in various settings using diverse types of images.
Further experiments on OmClic based backdoor insertion to DL models show that
high backdoor performances (i.e., attack success rate and clean data accuracy)
are achievable no matter which common input size is randomly chosen by the user
to train the model. So that the OmClic based backdoor attack budget is reduced
by M$\times$ compared to the state-of-the-art camouflage based backdoor attack
as a baseline. Significantly, the same set of OmClic based poisonous attack
images is transferable to different model architectures for backdoor implant. | Guohong Wang, Hua Ma, Yansong Gao, Alsharif Abuadbba, Zhi Zhang, Wei Kang, Said F. Al-Sarawib, Gongxuan Zhang, Derek Abbott | 2023-09-07T22:13:14Z | http://arxiv.org/abs/2309.04036v2 | # One-to-Multiple Clean-Label Image Camouflage (OmClic) based Backdoor Attack on Deep Learning
###### Abstract
Image camouflage has been utilized to create clean-label poisoned images for implanting backdoor into a DL model. But there exists a crucial limitation that one attack/poisoned image can only fit a single input size of the DL model, which greatly increases its attack budget when attacking multiple commonly adopted input sizes of DL models.
This work proposes to constructively craft an attack image through camouflaging but can fit multiple DL models' input sizes simultaneously, namely OmClic. Thus, through OmClic, we are able to always implant a backdoor regardless of which common input size is chosen by the user to train the DL model given the same attack budget (i.e., a fraction of the poisoning rate). With our camouflaging algorithm formulated as a multi-objective optimization, \(M=5\) input sizes can be concurrently targeted with one attack image, which artifact is retained to be almost visually imperceptible at the same time. Extensive evaluations validate the proposed OmClic can reliably succeed in various settings using diverse types of images. Further experiments on OmClic based backdoor insertion to DL models show that high backdoor performances (i.e., attack success rate and clean data accuracy) are achievable no matter which common input size is randomly chosen by the user to train the model. So that the OmClic based backdoor attack budget is reduced by \(M\times\) compared to the state-of-the-art camouflage based backdoor attack as a baseline. Significantly, the same set of OmClic based poisonous attack images is transferable to different model architectures for backdoor implant.
keywords: Camouflage attack, One-to-multiple, Backdoor attack, Clean-label data poisoning, Machine learning +
Footnote †: journal: Computers & Security
## 1 Introduction
Backdoor attacks on deep learning (DL) models, first revealed in 2017 [1; 2], are becoming one of the major barriers to the trustworthy usage of DL, especially in security-sensitive applications. A backdoored model works normally in the absence of the so-called trigger to be stealthy, but misbehaves once the trigger is present. For example, a backdoored facial recognition model still correctly recognizes Alice as Alice and Bob as Bob as long as neither of them wears the black-framed eyeglasses that serve as the trigger secretly set by the attacker. However, it misclassifies any person who wears this trigger into the Administrator, e.g., with higher authorization. One major attack surface is the data outsourcing scenario, where a DL model provider/producer outsources the data collection to third parties [3]. Data outsourcing is common due to the fact that DL training demands large amounts of data. However, this requires an intensive workforce to annotate or even generate large datasets. The data curation task is thus often outsourced to a third party (e.g., Amazon Mechanical Turk) or volunteers. In this context, the data could be maliciously poisoned to insert a backdoor once the data is utilized to train a model.
According to the visual consistency between the image content and its corresponding annotation (i.e., label in a classification task), data poisoning based backdoor implanting can be divided into two categories: dirty-label poisoning and clean-label poisoning. Generally, the content of the image and its label are different for a dirty-label poisoned image. For example, a dog image stamped with a small trigger is labeled as cat. In contrast, the image content and the label of a clean-label poisoned image are consistent. More details on dirty-label poisoning and clean-label poisoning can be found in Section 2.2. The majority of existing studies focus on dirty-label poisoning [1; 4; 5; 6; 7]. However, the dirty-label poisoning attack is unlikely to survive when the attacker
cannot control the model training, in contrast to the model-outsourcing scenario. When the user only needs to outsource the data collection or annotation task, the user will train the model by himself/herself. In that case, which is common in the real world, the collected data can undergo human inspection to check whether the image content is consistent with the label. The dirty-labeled images will then be rejected. In addition, the reputation of the data provider can be damaged and the provider penalized.
Clean-label poisonous images retain labels that are consistent with the images' content. Thus, they can trivially bypass visual auditing by the data curator. Therefore, clean-label poisoning poses a realistic security threat to the data collection pipeline even when the curated data undergoes human inspection. However, clean-label poisoning is less explored due to the stringent label consistency constraint. There are only a few works in this research line. Almost all of them build upon the strategy of enforcing a difference between the input space and the latent space/representation, so-called feature collision, to create clean-label poisoned images [8; 9; 10; 11]. However, the major limitation of such clean-label attacks is model dependence. That is, the _attacker has to know the model architecture and even weights_ used by the victim user's model to determine the latent representation of the adversarial image, which is mandatory during the adversarial image optimization process. This restriction renders the conventional clean-label attack ineffective if the model architecture varies, the weights before the layer of latent representation are changed, or the model is trained from scratch [12].
To our knowledge, only the camouflage attack [13] based clean-label poisoning is model-agnostic for inserting backdoors into DL models [14; 15]. The camouflage attack abuses the default image resizing function to create adversarial images with visually clean labels (detailed in Section 2.3). To be effective, the image size fed into the model has to be known to the attacker. We note that this is a reasonable and practical knowledge assumption in the real world, because the commonly used input sizes of popular model architectures are few and known to the public. For example, the commonly used input sizes of the ResNet are \(224\times 224\times 3\) and \(112\times 112\times 3\). The input sizes of different popular models are summarized in Table 1.
A crucial constraint of the data poisoning attack is the poison rate or the attack budget. The attack budget should be as small as possible to keep the attack stealthy and efficient for the attacker. For the former, if the poisoning rate is high, the number of samples of the target class will be notably high, which could be suspicious even for a clean-label attack. For the latter, a small budget means the attacker can spend less effort or time creating poisoned images. In this context, we note that the existing camouflage attack [13; 21] can only target a single model input size per attack image, which is inefficient given the fact that there are always several default input sizes of a popular model. To attack all input sizes simultaneously, the poisoning rate has to be increased as a function of the number of targeted input sizes. For example, if a 1% poison rate can implant a backdoor to the ResNet model given an input size, it requires a 3% poison rate, 3\(\times\) higher, to attack three common input sizes concurrently, which consequently increases the attack budget and makes the attack less stealthy and efficient.
We address this crucial limitation by crafting a camouflaged attack image that can target multiple model input sizes simultaneously, fundamentally obviating the requirement of linearly increasing the attack budget or poisoning rate of the existing state-of-the-art (SOTA) camouflage attack [13]. The SOTA is incapable of targeting multiple input sizes and handles only a single input size (detailed in Section 3.1). Consequently, with the same attack budget, we are always able to implant a backdoor into the model as long as the user adopts any one of the common input sizes to train the model.
Our contributions are summarized as follows:
* We propose OmClic1, the first one-to-multiple camouflage attack, that can target multiple input sizes given a single crafted attack image. We formulate the attack image crafting with a multi-objective optimization to automate the attack image generation. Footnote 1: It can be pronounced as Oh My Click.
* We comprehensively evaluate OmClic with diverse types of images (i.e., facial images, landscape images) under various settings. Its outstanding performance is affirmed through quantitative and qualitative comparisons with the SOTA.
* We demonstrate the practicality of backdoor attacks leveraging OmClic through extensive experiments on three datasets: PubFig, STL, and Tiny-ImageNet. Compared to the baseline backdoor attack, the backdoor can always be successfully inserted regardless of which of the targeted model input sizes is chosen by the victim user, with the same poisoned set (i.e., only six attack images are sufficient in the facial recognition case study).
The rest of the paper is organized as follows. Some necessary background is presented in Section 2. Section 3 gives an overview of the OmClic, followed by elaborations on its implementations. Section 4 comprehensively and
\begin{table}
\begin{tabular}{c|c} \hline model & input size \\ \hline DenseNet [16] & 32, 112, 200, 224 \\ ResNet [17] & 112, 224, 336, 448, 560 \\ VGG [18] & 224, 256, 512 \\ AlexNet [19] & 256, 512 \\ EfficientNet [20] & 224 \\ \hline \end{tabular}
\end{table}
Table 1: Common input sizes of popular DL models.
quantitatively evaluates OmClic on diverse types of images under various settings, as well as through comparisons with the SOTA [13]. Backdoor attacks based on OmClic are presented and extensively evaluated in Section 5. We discuss OmClic enabled backdoor attacks further in Section 6, in particular providing an easy-to-deploy lightweight prevention method to mitigate OmClic. Section 7 concludes this work.
## 2 Related Work
### Backdoor Attack Scenario
A backdoored model behaves normally for inputs without the trigger but misbehaves as specified by the attacker once the attacker presents his/her secretly chosen trigger in the input [3]. For example, supposing the trigger is a pair of sunglasses, any person, e.g., person A, not wearing them will still be recognized as person A by the backdoored facial recognition model. However, he/she will be recognized as the administrator by the backdoored model once the sunglasses are worn. There are a number of real-world scenarios that can introduce a backdoor into a DL model as long as the model or its training dataset can be tampered with by the attacker. These means include model outsourcing [1], dataset outsourcing [8], distributed machine learning [22], pretrained model reuse [23], vulnerable code called by the DL framework [24], and fault injection after model deployment [25].
### Data Poisoning based Backdoor
Data outsourcing is one of the three most common scenarios (i.e., the first three listed above). Due to the difficulty of collecting some specific data (e.g., medical data) or the intensive labor involved, it is common that a model trainer outsources the data collection and/or data annotation to third parties. For instance, the Amazon Mechanical Turk2 is such a platform where one can issue dataset outsourcing tasks. The annotation of the commonly used FLIC dataset [26] was outsourced to Amazon Mechanical Turk. In addition, some data collections rely on volunteer contributions. Moreover, some large-scale datasets, e.g., ImageNet [27], are crawled from the Internet and annotated through crowdsourcing [27]. In all these cases, the data can be tampered with before being received by the data curator. A small fraction (i.e., 0.06% [28]) of tampered or poisoned data can suffice to insert a backdoor into a DL model trained upon it.
Footnote 2: [https://www.mturk.com/](https://www.mturk.com/)
Data poisoning can generally be divided into two categories: dirty-label poisoning and clean-label poisoning. Gupta et al. carried out two additional attacks in addition to the targeted attack: a random label flipping attack and a random input data poisoning attack. The former is dirty-label poisoning, and the latter is clean-label poisoning. The difference between these two poisoning categories is as follows:
* Dirty-label poisoning. The labeling of samples is inconsistent with the semantics of these samples, which is trivially achievable by simply altering the label of a poisoned sample that contains the trigger. This is not stealthy under human inspection.
* Clean-label poisoning. It ensures consistency between the poisoned image content and its annotated label. Thus, a human inspector cannot find any irregularity owing to this consistency.
To craft clean-label poisonous images, the majority of studies [8; 9; 10; 11] utilize the feature collision attack. For example, a poisoned face image of person A is labeled as person A, which is visually unsuspicious due to the consistency between the image content (i.e., the input or pixel space) and the annotation. However, when it is fed into a DL model, its latent representation (i.e., from the first fully connected layer of a CNN model) is in fact equal to that of person B. This can be exploited to perform backdoor attacks through clean-label data poisoning [12]. That is, the feature of the poisoned image of A collides with that of an image of B in the latent space even though they are different in the input space. Generally, the perturbed/poisoned image of A has a feature representation similar to some other person's face image (i.e., person B) _stamped with a trigger_ (i.e., sunglasses). The model trained on the poisoned dataset learns a backdoor/association between the trigger and the targeted person A, thus exhibiting the backdoor effect of misclassifying any person with the trigger as person A.
However, clean-label poisoning based on feature collision has a crucial limitation: the feature extractor used to obtain the latent representation must be known to the attacker. This means the attacker often needs to have white-box knowledge of the feature extractor (i.e., the victim model). Generally, poisonous image crafting in this context is (victim) model dependent.
### Camouflage Attack
The other means of crafting clean-label poisonous images is the camouflage attack [13], which abuses the default resizing operation provided by commercial DL frameworks [14; 15; 29]. In [29], Chen et al. extended camouflage attacks by utilizing five types of pre-processing modules common in DL systems. For a camouflage-attacked image, its visual appearance is different before and after the resizing operation. Note that the image size (e.g., a resolution of up to \(4032\times 3024\) for images taken by an iPhone 13) is always larger than the input size of a given DL model (see Table 1). These large images will be downsized into the model's acceptable input size by calling the default resizing function before feeding them into the model for either training or inference. Therefore, an attacker can create an attack image (i.e., person A's face image) seen by the
data curator that will become the target image (i.e., person B/C's face image with a trigger) seen by the model. Here, the attack image retains consistency between the image content and the annotation. Obviously, once a DL model trains on these poisoned images, it will be backdoored, so that it will classify any person with the trigger as person A, who is the attacker-targeted person such as the administrator.
Despite this clean-label poisoning attack exhibiting the main merit of being independent of DL models, it is dependent on the targeted model input size. For example, if the targeted size is \(224\times 224\times 3\), its effect will not function if the model user chooses any other input size, e.g., the other common option of \(112\times 112\times 3\) (see Table 1). When performing backdoor attacks, the attacker has to linearly increase the poison rate (i.e., use more poisonous images) if the attacker targets multiple model input sizes. This is undesirable as it is less stealthy and increases the attack budget. In the following, we present OmClic, which can cover multiple model input sizes with the same poisonous image without increasing the poisoning rate at all.
## 3 One-to-Multiple Clean Label Image Camouflage
### Overview
The overview of the One-to-Multiple Clean Label Image Camouflage (OmClic) is shown in Figure 1. The aim is to disguise multiple target images (i.e., \(k\)\(T\)s) in the same source image (\(S\))--\(k=3\) in the example. The manipulated source image \(S\) is the attack image \(A\) that will be received by the victim user who uses it to train a DL model. The attack image \(A\) is visually close to the source image--its annotation (i.e., label) is consistent with its content (i.e., the lady is labeled with the correct name). However, once it is used to train a DL model, its content becomes semantically similar to the target image \(T\) due to the abuse of the default scale function provided by mainstream DL frameworks. More precisely, \(D_{1}\approx T_{1}\) where \(D_{1}=\)scale\({}_{1}(A)\). By stamping a trigger on a fraction of different target images \(T\)s before disguising each into an attack image \(A\), a backdoor will be inserted into the downstream DL models, as experimentally evaluated in Section 5.
In this context, the key to OmClic is to strategically craft the attack image. OmClic aims to disguise \(k\) target images, rather than a single target image as in Xiao _et al._, the SOTA [13], into the source image. The \(k\) target images can have different semantic contents (i.e., faces of different persons, or faces of the same person from different angles), different image sizes (i.e., the face of the same person in the same shooting setting but at different resolutions/sizes), or a combination of the two, as exemplified in Figure 1.
**Challenges and Our Solution.** Intuitively, the methodology devised by Xiao _et al._[13], interchangeably referred to as the SOTA, could be applied consecutively to each of the \(k\) target images in the hope of obtaining an attack image that retains the deceptive effect. However, our trials showed that this is not immediately applicable. Firstly, the disguising operation often fails because no solution to the optimization exists under the relatively strong constraints set by the SOTA. Generally, this is because the SOTA transforms the attack into a convex optimization problem: once the constraints (i.e., the perturbation amplitude on the attack image and the difference between the output image resized from the attack image and the target image) are enforced, the problem might not converge to a satisfactory solution, thus causing a failure. Secondly, the SOTA camouflage is extremely computationally heavy, incurring an unbearable time overhead, especially for relatively large attack images, even when camouflaging merely a single target image. Generally, this is because the SOTA solves the pixel perturbation in a fine-grained manner, e.g., line by line of the image. This inevitably invokes the convex-concave programming toolkit much more frequently, rendering the computation costly (i.e., the overhead depends on the image size).
OmClic resolves the above shortcomings through two major means. Firstly, we transform the OmClic camouflage attack into a distinct multi-objective optimization problem [30]. This overcomes the frequent failures of the SOTA during the optimization process. Note that multi-objective optimization naturally fits our one-to-multiple attack, since multiple target images have to be disguised simultaneously. Secondly, we solve the pixel perturbation per channel (i.e., a color image has three channels). Therefore, the number of invocations of the optimization toolkit is independent of the image size and is very small (i.e., only three invocations are required for a color image). Consequently, the computation of OmClic is very efficient.
### Implementation
We first define some notations. The \(m\) and \(n\), respectively, denote the number of rows and columns of the source image, and \(c\) denotes the number of channels--in particular, \(c=3\) for color images. Similarly, \(m_{j}\) and \(n_{j}\) denote the number of rows and columns of the \(j_{\text{th}}\in\{1,...,k\}\) target image. Note that in the camouflage attack the target image size is usually smaller than that of the source image. This is aligned with the fact that image downscaling is more common when training a DL model. \(a\) denotes a pixel value, which should be in the range [0,255]. \(L_{j}\) and \(R_{j}\),
Figure 1: OmClic overview. Three target images with different semantic contents and sizes are used for example.
respectively, denote the left and right constant coefficient matrices used when a target image is resized, see Eq. 2. Note that \(L_{j}\) and \(R_{j}\) are deterministic once \(m\), \(n\), \(m_{j}\), and \(n_{j}\) are given--they are known in the camouflage attack.
Our main purpose is to find the minimum perturbation \(\Delta\) such that the attack image looks visually the same as the source image. To achieve the smallest distance between the attack image \(A\) and the source image \(S\) at the whole-image level, we use the Euclidean norm \(L_{2}\) as the distance measure. In this context, the relationship between \(A\) and \(S\) is formalized as:
\[\begin{split}& A_{m\times n}=S_{m\times n}+\Delta\\ &\texttt{Obj:min}(\|\Delta\|_{2})\end{split} \tag{1}\]
To solve \(\Delta\), we further formalize the scaling process. Since the scaling size (i.e., the output image size) is fixed, the \(\mathsf{Scale}\) operation can be expressed as:
\[\begin{split}&\mathsf{Scale}_{j}(A_{m\times n})=L_{m_{j}\times m }*A_{m\times n}*R_{n\times n_{j}}=T_{m_{j}\times n_{j}},\end{split} \tag{2}\]
where \(j\) indexes the \(j_{\text{th}}\) target image. \(L_{m_{j}\times m}\) and \(R_{n\times n_{j}}\) are two coefficient matrices that can be stably solved [13] given the known scaling size.
Once the scaling size is fixed, the scaling coefficients are fixed as well. Following Xiao _et al._[13], these coefficients can be inferred from input and output pairs. For example, the input can be the source image while the output can be the target image, or vice versa. In other words, the image content does not matter; only the input and output sizes matter.
First of all, we can build the relationship between input and output pairs:
\[\begin{split}& L_{m^{\prime}\times m}*(I_{m\times m}*IN_{max})=L_{m^{ \prime}\times m}*IN_{max}\\ &(I_{n\times n}*IN_{max})*R_{n\times n^{\prime}}=R_{n\times n^{ \prime}}*IN_{max},\end{split} \tag{3}\]
where \(I_{m\times m}\) and \(I_{n\times n}\) are both identity matrices, and \(IN_{max}\) stands for the maximum element in the source image (it can be any scalar except 0 and 1).
For example, by setting \(S=I_{m\times m}*IN_{max}\) and scaling it into an \(m^{\prime}\times m\) image \(D_{m^{\prime}\times m}\), we can infer \(L_{m^{\prime}\times m}\) since:
\[\begin{split}& D=\mathsf{Scale}(S)=\texttt{unsigned int}(L_{m^{\prime}\times m}*IN_{max})\\ &\to L_{m^{\prime}\times m(appr)}\approx D/IN_{max}\end{split} \tag{4}\]
Since the division in Eq. 4 is carried out with finite precision, it introduces a slight precision loss. To ensure that the elements in each row of the coefficient matrix sum to one, a per-row normalization is applied to make it accurate.
\[\begin{split}& L_{m^{\prime}\times m(appr)}[i,:]=\frac{L_{m^{ \prime}\times m(appr)}[i,:]}{\sum_{j=0}^{m-1}(L_{m^{\prime}\times m(appr)}[i,j] )}\\ &(i=0,1,\cdots,m^{\prime}-1)\end{split} \tag{5}\]
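For illustration, a minimal Python sketch of this coefficient inference, assuming OpenCV's `cv2.resize` as the resize routine abused by the attack (the function name and the choice \(IN_{max}=255\) are ours), could read:

```python
import cv2
import numpy as np

def infer_coefficients(m, n, m_out, n_out, interpolation=cv2.INTER_NEAREST, in_max=255.0):
    """Infer the left/right resize coefficient matrices L and R (Eqs. 3-5)."""
    # Left matrix: resize a scaled m x m identity image to m_out rows (width kept at m).
    eye_m = (np.eye(m) * in_max).astype(np.float32)
    L = cv2.resize(eye_m, (m, m_out), interpolation=interpolation) / in_max  # shape (m_out, m)
    L /= L.sum(axis=1, keepdims=True)            # per-row normalization, Eq. 5

    # Right matrix: resize a scaled n x n identity image to n_out columns (height kept at n).
    eye_n = (np.eye(n) * in_max).astype(np.float32)
    R = cv2.resize(eye_n, (n_out, n), interpolation=interpolation) / in_max  # shape (n, n_out)
    R /= R.sum(axis=0, keepdims=True)            # columns of R normalized, by symmetry with Eq. 5
    return L, R
```

The product \(L\,A\,R\) then reproduces, up to rounding, what the framework's default resize produces for any image \(A\) of size \(m\times n\).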
To this end, we leverage multi-objective optimization to solve for \(\Delta\) when OmClic simultaneously disguises \(k\) target images with different sizes into one source image. The best result is found by solving the overall optimization problem expressed in Eq. 6.
\[\begin{split}& A_{m\times n}=S_{m\times n}+\Delta\\ &\mathsf{Scale}_{j}(A_{m\times n})=T_{m_{j}\times n_{j}}\quad(j= 1,2,\cdots,k)\\ &\epsilon_{j}=\|\mathsf{Scale}_{j}(A)-T_{j}\|_{2}\quad(j=1,2, \cdots,k)\\ &\forall a\in A\quad 0\leq a\leq 255\\ &\texttt{Obj:min}(\|\Delta_{i}\|_{2}+\epsilon_{1}+\cdots+ \epsilon_{k}).\end{split} \tag{6}\]
Note that \(a\) is a pixel value of the attack image; its range is within [0, 255].
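A compact sketch of this per-channel optimization, written with the CVXPY modeling library and using the coefficient matrices inferred above, might look as follows; the function name and data layout are illustrative, and for large images a first-order solver would be needed in practice:

```python
import cvxpy as cp
import numpy as np

def craft_attack_image(S, targets, Ls, Rs):
    """Solve Eq. 6 channel by channel.

    S: (m, n, 3) source image; targets[j]: (m_j, n_j, 3) target images;
    Ls[j], Rs[j]: resize coefficient matrices for the j-th target size.
    Returns the attack image A = S + Delta.
    """
    S = S.astype(np.float64)
    m, n, c = S.shape
    A = np.zeros_like(S)
    for ch in range(c):                         # one convex problem per colour channel
        A_ch = cp.Variable((m, n))
        obj = cp.norm(A_ch - S[:, :, ch], 'fro')                  # ||Delta||_2 term
        for T, L, R in zip(targets, Ls, Rs):
            obj += cp.norm(L @ A_ch @ R - T[:, :, ch], 'fro')     # epsilon_j terms
        prob = cp.Problem(cp.Minimize(obj), [A_ch >= 0, A_ch <= 255])
        prob.solve()
        A[:, :, ch] = np.clip(A_ch.value, 0, 255)
    return A.astype(np.uint8)
```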
[Algorithm listing omitted: it takes the source image \(S\in\mathbb{N}_{m\times n\times c}\) and the \(k\) target images as input and outputs the attack image \(A\) by solving Eq. 6 per channel.]
## 4 Evaluation of OmClic Camouflage

Face images, animal images, and landscape images are utilized to comprehensively evaluate OmClic. Experimental evaluations of OmClic-enabled model-agnostic backdoor attacks are deferred to Section 5.
### Different Target Images with Different Output Sizes
As shown in Figure 2, we embed three different target images (e.g., of a dog or a cat) into one source image. The three sizes are \(64\times 64\times 3\), \(96\times 96\times 3\) and \(114\times 114\times 3\), respectively. The source image has a size of \(448\times 448\times 3\); a larger size of \(1024\times 1024\times 3\) has also been evaluated. In this experiment, the scale function is set to NEAREST.
Firstly, the attack image (second column) is similar to the source image (first column). Secondly, the output images (third to fifth columns) are visually similar to their corresponding target images. In addition, we note that when the source image is small, \(448\times 448\times 3\), there are some perceptible artifacts in the output image (i.e., the scaled dog image with a size of \(96\times 96\times 3\)). The artifacts are mitigated when the size of the source image increases, e.g., to \(1024\times 1024\times 3\) in the second row. This means the difference between the target image and the output image becomes small.
The reason is that the performance of the image scaling attack depends on the ratio between the source image and the target image. From the Nyquist-Shannon theorem [31], a signal \(s(t)\) can be reconstructed from a discrete number of sample points provided the sampling rate \(f_{t}\) and the highest frequency \(f_{max}\) satisfy \(f_{t}\geq 2\cdot f_{max}\)[21]. The larger the ratio between the source image and the target image, the better the camouflage attack performs, with less difference between the output image and the target image. Hence there is nearly no notable perturbation on the three output images scaled from the attack image with the size of \(1024\times 1024\times 3\).
### Same Target Image with Different Output Sizes
Here, we implant three visually identical target images with different sizes, \(64\times 64\times 3\), \(96\times 96\times 3\) and \(114\times 114\times 3\), into one source image; the resulting attack images (second column) are shown in Figure 3. In this example, the target images are face images of the same person but at different resolutions.
Even though a small source image of \(448\times 448\times 3\) is used, there are nearly no perceptible artifacts in the three output images (columns 3, 4, and 5). There are two potential reasons. Firstly, all target images are visually the same. Secondly, the target images and the source image are all face images, whose mutual similarity is higher than in Figure 2, where the source image (a dolphin) is quite distinct from the target images (a dog or a cat).
This implies that semantic similarity between the source image and the target images, and/or similarity among the target images, leads to a better OmClic deceptive effect.

In addition, note that a larger source image helps remove the artifacts introduced into the attack image. More precisely, when zooming in, the artifacts in the \(448\times 448\times 3\) attack image are perceptible but are eliminated when the \(1024\times 1024\times 3\) source image is utilized.
### Same Target Image with Different Resize Functions
In Figure 4, we consider the case where the same target image is resized by different resize functions into different output sizes. During attack image crafting, the scale function NEAREST is used to disguise the same target image with different sizes, \(64\times 64\times 3\) (third column) and \(96\times 96\times 3\) (fourth column), into the source image.
On the one hand, if a different resizing algorithm, e.g., LANCZOS, is chosen to rescale the attack image into the output image, e.g., of size \(96\times 96\times 3\) in the third column, the output image is semantically similar to the source image rather than the target image intended by the attacker. On the other hand, if the attack image is resized into the \(64\times 64\times 3\) output image with the same NEAREST algorithm, the output image is, as expected, nearly the same as the target image. We have evaluated other combinations, e.g., NEAREST used during attack image crafting and a different LANCZOS function used to resize the attack image. In all our experiments, the camouflage effect only works when the same resize function is used during attack image creation and attack image resizing.
Figure 3: Same target image with different sizes. Face images are used.
Figure 2: Different target images with different sizes. Animal images are used.
### Number of Disguised Target Images
Here, we are interested in the maximum number of target images that can be disguised in a source image. In Figure 5, we embed up to \(k=8\) target images into a source image. We make the following observations. Firstly, a larger source image is preferable for disguising multiple target images: when the \(1024\times 1024\times 3\) source image is used, the semantics of the attack image and of each of the up to \(k=8\) output images are reasonably preserved. Secondly, we do observe increased artifacts in the attack image as \(k\) increases. Thirdly, the ratio between the source image size and the target image size should be large to facilitate OmClic. As can be observed in the third and fourth rows, when the maximum target image size approaches the source image size, the attack image becomes visually close to the target image.
### Computational Overhead
Here, we compare the computational overhead of OmClic with that of Xiao _et al._[13], measured by the time needed to produce the attack image when a _single_ target image is embedded. Experiments are performed on the same machine with an Intel(R) Xeon(R) Gold 6230 CPU at 2.10 GHz and 32 GB of memory.

Figure 6 details the time cost. The \(x\)-axis is the target image size. It can be seen that the proposed OmClic substantially outperforms the SOTA [13], with an improvement of up to \(30\times\). For example, when the source image size is \(448\times 448\times 3\) and the target image size is \(114\times 114\times 3\), the SOTA costs 1893 s while OmClic only requires 67 s, an improvement of roughly \(28\times\). The efficiency gain comes from the fact that OmClic leverages i) a more efficient multi-objective optimization and ii) per-image-channel optimization rather than the per-line optimization of the SOTA.
### Similarity Between Source and Attack Image
Here, we focus on quantifying the similarity between the source image and the attack image, since this represents the deceptive effect in our scenario, and we quantitatively compare OmClic with the SOTA. We note that when the camouflage is exploited for the backdoor attack in our work, the similarity between the target image and its corresponding output image after scaling is not a stringent requirement. The reason is that the user does not inspect the output image--the user inspects the attack image. As long as the backdoor can be successfully inserted, even perceptible artifacts on the output image do not matter.
Figure 4: Same target image with different resize functions. Landscape images are used.
Figure 5: Number of disguised target images. Face images are used.
| Type | Source size | SSIM | MSSSIM | UQI | PSNR |
|---|---|---|---|---|---|
| Face | 448 | 0.744 / 0.742 / 0.565 / 0.472 | 0.942 / 0.942 / 0.887 / 0.844 | 0.90 / 0.90 / 0.833 / 0.79 | 27.469 / 27.483 / 22.136 / 19.422 |
| Face | 1024 | 0.889 / 0.905 / 0.755 / 0.662 | 0.979 / 0.982 / 0.949 / 0.917 | 0.997 / 0.975 / 0.929 / 0.888 | 33.412 / 34.447 / 29.307 / 26.415 |
| Animal | 448 | 0.655 / 0.660 / 0.47 / 0.38 | 0.936 / 0.936 / 0.873 / 0.821 | 0.971 / 0.971 / 0.946 / 0.925 | 25.102 / 25.262 / 19.819 / 17.096 |
| Animal | 1024 | 0.881 / 0.865 / 0.665 / 0.567 | 0.982 / 0.980 / 0.943 / 0.907 | 0.994 / 0.992 / 0.979 / 0.966 | 33.518 / 32.113 / 26.977 / 24.079 |
| Landscape | 448 | 0.734 / 0.726 / 0.564 / 0.474 | 0.944 / 0.942 / 0.892 / 0.847 | 0.839 / 0.838 / 0.801 / 0.778 | 26.574 / 26.413 / 21.262 / 18.547 |
| Landscape | 1024 | 0.917 / 0.889 / 0.722 / 0.632 | 0.987 / 0.979 / 0.942 / 0.909 | 0.990 / 0.954 / 0.873 / 0.818 | 34.631 / 33.551 / 28.403 / 25.515 |

Table 2: Quantitative similarity comparison between Xiao _et al._[13] and OmClic. In each cell the four values are Xiao _et al._ (case #1) / OmClic case #1 / case #2 / case #3.
We use three semantically identical target images with different sizes, \(64\times 64\times 3\), \(96\times 96\times 3\) and \(114\times 114\times 3\), for OmClic, and only one size, \(64\times 64\times 3\), for the SOTA. Case #1 of the SOTA means embedding the \(64\times 64\times 3\) target image into the source image. Cases #1, #2, and #3 of OmClic mean disguising one (specifically, the \(64\times 64\times 3\) image), two, and three target images into the source image, respectively. Since it is challenging for the SOTA to embed multiple target images into the same source image (it is time-consuming and unstable even when applied sequentially per target image), we do not evaluate it on multiple target images.
Results are detailed in Table 2, where four metrics are used: the Structural Similarity Index (SSIM) [32], the Multi-scale Structural Similarity Index (MSSSIM) [33], the Universal Quality Image Index (UQI) [34] and the Peak Signal-to-Noise Ratio (PSNR) [32]. Firstly, when a single target image is disguised, the similarity performance of OmClic is almost the same as that of the SOTA in all cases. Therefore, OmClic achieves the same deceptive effect as the SOTA while being more efficient (costing much less time). Secondly, as the number of target images increases, the similarity gradually decreases, as expected. Thirdly, using a source image of large size (1024 versus 448) compensates for the similarity deterioration. This agrees with the observation in Section 4.4, where a large source image is able to accommodate a higher number of target images while retaining the semantic consistency of the attack image. Last, the scores also reflect the semantic similarity between the target images and the source image: since the images of the animal dataset are the most discrepant, the animal dataset exhibits the worst performance, whereas the face images exhibit the best.
## 5 OmClic enabled Backdoor Evaluation
We now evaluate the OmClic-enabled backdoor attack against DL models. Generally, the OmClic is exploited to disguise trigger-carrying target images to poison the training dataset used to train the DL model, thus inserting a backdoor into the DL model.
### Threat Model
The attacker can create attack images through OmClic to disguise trigger-carrying images. More specifically, the attacker has access to a small fraction of the dataset used by the victim--a poison rate of less than 0.5% was sufficient to insert a backdoor as shown in [28; 15]. This is realistic in the data outsourcing scenario where the dataset is crawled from public sources, contributed by volunteers, or collected by a third party [14; 15]. The attacker has knowledge of the input size of the DL model. This is reasonable as the number of common input sizes is extremely limited and publicly known, as summarized in Table 1. Notably, OmClic is designed to _compromise multiple input sizes concurrently through the same attack image_. The attacker has no control over the training process and thus cannot interfere with the training at all.
As for the victim data user, he/she mixes the data returned from the attacker into the training set and uses it to train the DL model. The user fully controls the training process. The user, who is the data curator, can inspect the received data to identify and reject malicious images that exhibit an inconsistency between their content and their label. Note that the user does not inspect the data after the scale operation, since this is a default operation of existing DL frameworks, as assumed in [13; 14].
### Experiment Setup
**Dataset.** We consider three datasets: PubFig [35], STL [36] and Tiny-ImageNet [37]. PubFig consists of \(58,797\) images of 200 people crawled from the Internet. Since some URLs in the download list are now invalid, we selected the top-60 people (sorted by the number of images) as the PubFig dataset in our experiments.
The STL dataset has 10 classes; its training and testing sets contain 5,000 and 8,000 images of size \(96\times 96\times 3\), respectively. Tiny-ImageNet has 200 classes; to reduce computation time, we only use 10 of them.
The image sizes are \(256\times 256\), \(96\times 96\) and \(64\times 64\) for PubFig, STL and Tiny-ImageNet, respectively. The evaluated model input sizes (i.e., the compromised input sizes) are \(96\times 96\), \(112\times 112\) and \(224\times 224\) for all datasets, considering that these sizes are common for computer vision models. Whenever the image size and the compromised model input size mismatch, the former is resized to fit the latter. More specifically, downsampling is used for PubFig and upsampling
| Datasets | # of labels | # of train images | # of test images | Image size |
|---|---|---|---|---|
| STL | 10 | 5,000 | 8,000 | \(96\times 96\times 3\) |
| PubFig | 60 | 4,921 | 1,202 | \(256\times 256\times 3\) |
| Tiny-ImageNet | 10 | 5,000 | 500 | \(64\times 64\times 3\) |

Table 3: Dataset summary.
Figure 6: Time overhead comparison between OmClic and Xiao _et al._
is applied to STL and Tiny-ImageNet. For all poisoned images, the image size is set to \(448\times 448\times 3\). For the OmClic-enabled backdoor, the first class of each of the three datasets is the source class (i.e., the attacker's target class from the backdoor attack perspective), and the other classes serve as the target classes (note that this target class refers to the images that the attacker wants to hide in the OmClic attack and should not be confused with the target class of the backdoor attack). A summary of the dataset settings is provided in Table 3.
**Model Architecture.** ResNet18 [17] and VGG16 [18] are utilized to comprehensively evaluate the OmClic-enabled backdoor. We evaluate the OmClic-based backdoor on the basis of state-of-the-art accuracy: PubFig, STL and Tiny-ImageNet achieve accuracies of \(95.7\%\), \(92.6\%\) and \(89.1\%\), respectively, given a model input size of \(224\times 224\times 3\). These clean model accuracies, obtained by training on the clean dataset, serve as baselines.
**Metrics.** Two common metrics, clean data accuracy (CDA) and attack success rate (ASR), are utilized to quantitatively measure the backdoor performance [3].
The CDA is the probability that a non-trigger-carrying image is correctly classified into its ground-truth label by the backdoored model. The CDA of a backdoored model should be similar to the CDA of its clean model counterpart. The ASR is the probability that a trigger-carrying image is misclassified into the attacker's preset backdoor target class. The higher the ASR, the better the backdoor attack from the attacker's perspective.
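As an illustration, both metrics reduce to simple accuracy computations; in the following sketch `model.predict` and `stamp_trigger` are placeholder interfaces for the classifier and the trigger-stamping routine:

```python
import numpy as np

def cda(model, clean_images, labels):
    """Clean data accuracy: fraction of trigger-free images classified correctly."""
    preds = model.predict(clean_images).argmax(axis=-1)
    return np.mean(preds == labels)

def asr(model, images, stamp_trigger, target_class):
    """Attack success rate: fraction of trigger-carrying images sent to the target class."""
    triggered = np.stack([stamp_trigger(img) for img in images])
    preds = model.predict(triggered).argmax(axis=-1)
    return np.mean(preds == target_class)
```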
### Results
Before delving into the results of the OmClic-based backdoor attack, we give the baseline (plain) backdoor attack performance for later comparison.
#### 5.3.1 Plain Backdoor
For the plain backdoor, we randomly select a few images (specifically, \(59\) images) from the \(1_{\mathrm{th}}-59_{\mathrm{th}}\) classes for the PubFig task. As for STL and Tiny-ImageNet, we select these images from the \(1_{\mathrm{th}}-9_{\mathrm{th}}\) classes as there are only ten classes (one class is the targeted class). We stamp a blue square on the bottom-left corner of the selected images as the trigger to form poisoned images, whose labels are changed to the targeted label, the \(0_{\mathrm{th}}\) class; see the backdoor overview in Figure 7 (a). This data poisoning process is a typical means of inserting a backdoor [1; 38]. Because it is a dirty-label poisoning attack--the trigger-carrying, label-altered images (see Figure 7 (a)) are directly exposed to the human inspector--the content and the label of each poisoned image are obviously inconsistent, which can be trivially captured by human auditing.
Instead of training the model from scratch, we leverage transfer learning to speed up training. The transfer learning uses \(100\) epochs, a learning rate of \(0.0001\) and a decaying learning-rate schedule. For ResNet18 and VGG16, the pretrained models are both trained on ImageNet [39].
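A transfer-learning setup of this kind could be sketched in PyTorch as follows; the choice of the Adam optimizer and of the step-decay schedule are our assumptions, as the text only fixes the number of epochs and the initial learning rate:

```python
import torch
import torch.nn as nn
from torchvision import models

def build_victim_model(num_classes, lr=1e-4, epochs=100):
    """Transfer-learning sketch: ImageNet-pretrained ResNet18 with a new classification head."""
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)  # recent torchvision API
    model.fc = nn.Linear(model.fc.in_features, num_classes)                 # replace the final layer
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)  # decaying lr
    return model, optimizer, scheduler, epochs
```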
For each dataset, ten trials are performed and the average result is reported. As shown in Figure 8, the ASR of the plain backdoor, namely the plain ASR, is \(100\%\) for all datasets. For the \(224\times 224\times 3\) model input size, the CDAs of the backdoored models are \(95.8\%\), \(92.4\%\) and \(89.2\%\) for PubFig, STL and Tiny-ImageNet, respectively. As affirmed in Figure 8, the CDA of the backdoored model is always similar to that of the clean model.
#### 5.3.2 OmClic based Backdoor
In this context, OmClic is utilized to create poisoning images whose content is consistent with their label. As exemplified in Figure 7 (b) and Figure 9 with the face recognition task, we randomly select three images (the three rightmost faces in Figure 9, e.g., from persons B, C, D), each from a different class and with a different size. On each of these images, we stamp a trigger to obtain a trigger-carrying target image. We then randomly select an image (the left-most face, e.g., of person A) as the source image and disguise all three trigger-carrying target images in it to form an attack image (the second left-most face, e.g., A\({}^{\prime}\) in
Figure 7: Overview of plain backdoor as baseline and OmClic based backdoor.
Figure 9), which is the poisonous image of the backdoor attack. Here, person A is the target person; in other words, any person's face carrying the trigger will be misclassified as person A once the backdoored model is deployed for inference. Note that the content and the label of A\({}^{\prime}\) are consistent, so it trivially evades human inspection. The model, however, sees trigger-carrying persons B, C, D during training but treats their labels as person A, so that a strong association between the trigger and the infected class A is learned, consequently inserting the backdoor successfully.
We have repeated the OmClic-based backdoor attack experiments ten times and report the average. The first row of Figure 8 depicts the results on PubFig for all three evaluated compromised model input sizes. Taking \(224\times 224\times 3\) as an example, the compromised model input size means that the victim model accepts images of size \(224\times 224\times 3\), so the victim user has to resize the training images to this size through the default resize function of the DL pipeline. More specifically, the CDAs of the OmClic-backdoored models are 91.6%, 92.5% and 95.8% for model input sizes of \(96\times 96\times 3\), \(112\times 112\times 3\), and \(224\times 224\times 3\), respectively. Each CDA of the OmClic-based backdoor is almost the same as the CDA of the plain-backdoor-attacked model and of the clean model counterpart. As for the ASR, it reaches 100% in each case, again the same as the plain backdoor attack.
As for the other two datasets, STL and Tiny-ImageNet, the results are detailed in the second and third rows of Figure 8. As can be seen, they follow the same trend as PubFig above. Therefore, we conclude that the OmClic-based backdoor is able to attack multiple model input sizes and achieves the same attack performance as the plain backdoor.
#### 5.3.3 Poisoning Rate Effect
Here, we reduce the number of poisonous images for the PubFig dataset. In the previous experiments, we used 59 poisonous images: each of the 59 OmClic target images is selected from the \(1_{\text{th}}-59_{\text{th}}\) classes (one image per class) and disguised in a different source image from the backdoor-infected \(0_{\text{th}}\) class. The total number of images in the \(0_{\text{th}}\) class is 90. We now reduce the number of target images to \(20,30,40,50\), so that some of the \(1_{\text{th}}-59_{\text{th}}\) classes do not provide target images. The model architecture is still ResNet18 and the model input size is set to \(224\times 224\times 3\).
Results are detailed in Figure 10. As expected, the ASR decreases as the number of poisonous images decreases. Nonetheless, the ASR is still up to 95.4% even when the number of poisonous images is reduced by about 50% (from 59 to 30). This corresponds to a poison rate of 0.61% of the 4,921 PubFig training images in total (i.e., 30/4921).
## 6 Discussion
### Model Agnostic
The poisonous images crafted through OmClic are equally effective against different model architectures as long as the chosen
Figure 8: Evaluating OmClic based backdoor on ResNet18 with multiple input sizes.
Figure 10: Evaluating the effect of different poisoning rate in OmClic based backdoor. Model and dataset are ResNet18 and PubFig respectively.
Figure 9: Clean-label image poisoning with OmClic to insert a backdoor. Image att is the poisonous image carrying the same label as image src, as seen by the data curator. However, once image att is used for model training after image downsizing, one of the three right-most images is seen by the model, depending on the model input size setting, while its label remains that of src.
model input size falls under the compromised input sizes. Here, we use the same set of OmClic-poisoned PubFig images as in Section 5.3.2 to evaluate the backdoor effectiveness when these images are used to train a VGG16 model--ResNet18 was evaluated in Section 5.3.2.
The results are detailed in Figure 11. This set of poisonous images clearly inserts the backdoor into the VGG16 model. More specifically, firstly, the CDA of the OmClic-based backdoor is almost the same as the CDA of the plain backdoor and of the clean model without backdoor. Secondly, the ASR of the OmClic-based backdoor equals that of the plain backdoor. These observations hold for each of the three targeted model input sizes of 96, 112, and 224. Therefore, the OmClic-based poisonous images transfer to different model architectures as long as one of the targeted model input sizes is chosen by the model user for training.
### Backdoor Variant
The above experiments focus on the common source-agnostic backdoor attack enabled by OmClic, where an input from any class carrying the trigger is misclassified into the compromised class. We note that OmClic can also be exploited to conduct advanced backdoor variants such as the source-specific backdoor attack (SSBA) [40, 41], which is harder to counter. In addition, multiple backdoors, each targeting a different class [38, 42], can be implanted through OmClic.
We exemplify the methodology with the SSBA, in which only trigger-carrying inputs from specific source classes can activate the backdoor; inputs from other, non-source classes cannot activate the backdoor even when carrying the trigger. It is straightforward to perform the SSBA by exploiting OmClic. We use face recognition as an example. The poisonous samples of the SSBA require so-called cover samples to suppress the backdoor effect for the non-source classes in the presence of the trigger. Suppose person A is the source class, person B is a non-source class, person D is the infected class, and a natural sun-glass (or, e.g., an ear ring) serves as the trigger. Firstly, non-cover images are created following the same procedure as in Section 5.3.2 by embedding sun-glass-wearing person A images into images of person D through OmClic. For cover images, we simply mix sun-glass-wearing person B images into the training dataset. There is in fact no need to apply OmClic in this case, because the sun-glass-wearing person B images are not suspicious at all as their _label does not need to be altered_. Once the face recognition model is trained on the non-cover and cover poisonous samples, it still correctly classifies person B images even when person B wears the sun-glass trigger, but misclassifies person A into person D when person A wears the sun-glass trigger--the backdoor effect is thus further associated with specific class(es).
We have performed experiments on the OmClic-based SSBA described above. More precisely, 50 non-cover samples, all from the \(1_{\text{th}}\) person (i.e., the source class), are created and camouflaged into the \(0_{\text{th}}\) person, who is the backdoor-infected category. For cover samples, sun-glass-wearing persons (all persons except the \(1_{\text{th}}\) person, with labels unaltered) are considered, where the number of cover samples varies. Generally, all sun-glass-wearing images of the \(1_{\text{th}}\) person should be misclassified into the \(0_{\text{th}}\) person, while all sun-glass-wearing persons from the other categories should still be correctly classified into their ground-truth category, e.g., the \(2_{\text{th}}\) person into the \(2_{\text{th}}\) person. Table 4 shows the performance of the OmClic-based SSBA. On the one hand, it can be seen that increasing the number of cover samples gradually decreases the source class ASR, as expected: there is only one source class, e.g., the \(1_{\text{th}}\) person, so if too many cover samples are used, the strong association between the presence of the trigger and the targeted class is diminished, suppressing the ASR to some extent. On the other hand, for a similar reason, when the number of cover samples increases, the non-source class ASR decreases. The ratio between cover samples and non-cover samples therefore requires proper setting. When the number of cover samples is set to 10, the source class ASR is up to 97.2%, while the non-source class ASR remains sufficiently low at 1.7% and the CDA on cover samples is still similar to the clean model CDA.
### Countermeasures
Here we discuss potential countermeasures against OmClic and recommend lightweight prevention methods that
| | Cover samples: 50 | 30 | 20 | 10 |
|---|---|---|---|---|
| Clean CDA | 95.6% | 95.5% | 95.5% | 95.6% |
| Cover sample CDA | 95.5% | 95.1% | 94.4% | 94.2% |
| Source class ASR | 75.8% | 80.1% | 84.2% | 97.2% |
| Non-source class ASR | 0.6% | 0.8% | 1.1% | 1.7% |

Table 4: OmClic-based source-specific backdoor attack performance for varying numbers of cover samples.
Figure 11: Evaluating OmClic based backdoor on VGG16 model with PubFig dataset.
are easy to use and mitigate the OmClic-based backdoor threat. Note that it is possible to apply backdoor defenses to counter the OmClic-based backdoor attack, but these are often expensive or require deep-learning expertise [43]. We focus on countermeasures that directly counter the camouflage attack and thus consequently thwart the backdoor attack. There are existing camouflage detection methods, such as Decamouflage [44], that identify camouflaged images by automatically examining the pixel domain or the spectral domain of a received image. However, this requires inspecting each image and still incurs a certain computational cost. There are also prevention countermeasures that harden the resize function [21], essentially removing the feasibility of crafting effective attack images. However, this requires changing existing resize functions and can increase the computational cost of the resizing operation.
We have identified a lightweight and easy-to-use prevention method that simply applies an intermediate resizing operation, namely InterResize. Specifically, it resizes the received image, e.g., A, to a random height/width, producing an intermediate image A\({}_{\text{interm}}\), before resizing that image to a smaller image A\({}_{\text{small}}\) with the ultimate input size of the given model. Here, the width/height of the intermediate image A\({}_{\text{interm}}\) should not be an integral multiple of the width/height of A\({}_{\text{small}}\). For example, if the width and height of A\({}_{\text{small}}\) are \(96\times 96\), the width and height of A\({}_{\text{interm}}\) can be set to any value except integral multiples such as \(\{192\times 192,288\times 288,288\times 96,288\times 192,\cdots\}\). If an integral multiple is used, A\({}_{\text{small}}\) may still exhibit obvious artifacts of the target image--see the example in Figure 12 (top row). By applying this simple operation, the image-scaling attack is disrupted because a different width/height is applied. We have experimentally affirmed the practicality of this prevention method: the output image is always the same as the source image rather than the attacker-intended target image of the OmClic attack; see the example in Figure 12 (bottom row).
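A possible minimal implementation of InterResize, sketched here with OpenCV and a random intermediate size drawn between one and four times the model input size (the range is our choice), is:

```python
import random
import cv2

def inter_resize(image, out_h, out_w, interpolation=cv2.INTER_LINEAR):
    """InterResize sketch: resize to a random intermediate size whose height/width is not
    an integer multiple of the final model input size, then resize to the model input size."""
    def non_multiple(target, lo, hi):
        while True:
            v = random.randint(lo, hi)
            if v % target != 0:
                return v
    h_i = non_multiple(out_h, out_h + 1, 4 * out_h)
    w_i = non_multiple(out_w, out_w + 1, 4 * out_w)
    intermediate = cv2.resize(image, (w_i, h_i), interpolation=interpolation)
    return cv2.resize(intermediate, (out_w, out_h), interpolation=interpolation)
```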
## 7 Conclusion
We have proposed OmClic, which simultaneously disguises multiple target images into a source image to form an attack image (similar to the source image) by abusing the default resizing operation provided by popular DL frameworks through a devised multi-objective optimization. Compared to the existing SOTA, OmClic achieves the same deceptive effect in addition to its multiple-image disguising capability. Moreover, OmClic substantially reduces the computational cost, which expedites the crafting of camouflage attack images. The OmClic-enabled backdoor attack through clean-label poisonous images can compromise a given model regardless of the user-chosen model input size, as long as that size is covered by OmClic. Extensive experiments have validated that the OmClic-based backdoor matches the efficacy of the baseline attacks. Importantly, we have provided a lightweight and easy-to-deploy prevention approach to thwart such attacks.
|
2309.07517 | Lattice Boltzmann methods for combustion applications | The lattice Boltzmann method, after close to thirty years of presence in
computational fluid dynamics has turned into a versatile, efficient and quite
popular numerical tool for fluid flow simulations. The lattice Boltzmann method
owes its popularity in the past decade to its efficiency, low numerical
dissipation and simplicity of its algorithm. Progress in recent years has
opened the door for yet another very challenging area of application:
Combustion simulations. Combustion is known to be a challenge for numerical
tools due to, among many others, the large number of variables and scales both
in time and space, leading to a stiff multi-scale problem. In the present work
we present a comprehensive overview of models and strategies developed in the
past years to model combustion with the lattice Boltzmann method and discuss
some of the most recent applications, remaining challenges and prospects. | S. A. Hosseini, P. Boivin, D. Thevenin, I. Karlin | 2023-09-14T08:34:49Z | http://arxiv.org/abs/2309.07517v2 | # Lattice Boltzmann methods for combustion applications
###### Abstract
The lattice Boltzmann method, after close to thirty years of presence in computational fluid dynamics has turned into a versatile, efficient and quite popular numerical tool for fluid flow simulations. The lattice Boltzmann method owes its popularity in the past decade to its efficiency, low numerical dissipation and simplicity of its algorithm. Progress in recent years has opened the door for yet another very challenging area of application: Combustion simulations. Combustion is known to be a challenge for numerical tools due to, among many others, the large number of variables and scales both in time and space, leading to a stiff multi-scale problem. In the present work we present a comprehensive overview of models and strategies developed in the past years to model combustion with the lattice Boltzmann method and discuss some of the most recent applications, remaining challenges and prospects.
+
Footnote †: journal: Progress in Energy and Combustion Science
###### Contents
* 1 Introduction
* 2 Basic concepts
* 2.1 Brief overview of target macroscopic system
* 2.2 Isothermal lattice Boltzmann for incompressible flows
* 2.2.1 Discrete velocity system and discrete equilibrium state
* 2.2.2 Lattice Boltzmann equations
* 3 Lattice Boltzmann models for compressible reacting flows
* 3.1 Energy and species balance equations
* 3.1.1 Double distribution function lattice Boltzmann approach for thermal flows
* 3.1.2 Kinetic models for species balance equations
* 3.1.3 Passive-scalar lattice Boltzmann models
* 3.1.4 Hybrid models: Finite difference and finite volume solvers for energy and species
* 3.2 Compressible continuity and momentum balance equations
* 3.2.1 Lattices with higher-order quadratures
* 3.2.2 Standard lattice density-based solvers
* 3.2.3 Pressure-based solvers
* 3.2.4 Low Mach thermo-compressible pressure-based solver
## 1 Introduction
The lattice Boltzmann (LB) method, proposed in the early 80's has grown popular over the past decades [1; 2]. The rapid emergence of this numerical method is mainly due to the simplicity and strict locality of the involved time-evolution operators [3; 4]. The locality of the operators and intrinsic coupling between the pressure and velocity fields through the distribution function (as opposed to pressure-based incompressible or low Mach solvers) allows for better performances on parallel clusters and a much more efficient treatment of flows in complex geometries [4]. During the past decade, the LB method originally proposed for computational fluid dynamics (CFD) has been extended to many complex flow configurations ranging from non-Newtonian [5; 6; 7; 8; 9], to multi-phase [10; 11; 12; 13; 14; 15; 16], and multi-component flows. Although initially limited to low-Mach isothermal flows with an ideal gas equation of state, the LB approach was later modified to lift many of these restrictions. Releasing the restriction on thermo-compressibility is an essential step to develop LB solvers for many applications such as combustion.
The topic of combustion modeling with LB was first touched upon in 1997 in an article by Succi et al. [17]. Since then, and up until very recently, a limited number of publications had appeared on the topic, all limited to simplified 1-D and 2-D test-cases, see for instance [18; 19; 20; 21; 22; 23; 24]. The limited progress of the lattice Boltzmann method during that period might be attributed to a number of factors such as the absence of a good compressible realization, persistent issues with stability of solvers, and the absence of multi-species formulations. During the past years a considerable amount of research work has been conducted to extend the lattice Boltzmann method to compressible flows, which has led to a number of stable and efficient realizations, see for instance [25; 26; 27; 28]. In parallel, the stability domain of lattice Boltzmann solvers both for incompressible and compressible flows has been considerably expanded through more advanced collision models, see for instance [29; 30; 31; 32; 33; 34]. These two factors along with the development of models for species transport and the idea of hybrid solvers taking advantage of classical numerical methods for the species end energy balance equations led to considerable progress in combustion simulation with the lattice Boltzmann method in recent years. Contrary to the first wave of models, the more recent efforts have been extended and used for many complex configurations involving thermo-acoustics, complex geometries and turbulent flows. It has to be noted that in parallel with efforts to develop lattice Boltzmann-based models for combustion simulations, a number of attempts at developing discrete velocity Boltzmann-based models with Eulerian discretization in physical space have also been reported, see for instance [35; 36; 37; 38].
In the present contribution we will review developments in the area of lattice Boltzmann simulations of combustion. Different challenges, solutions and models developed in that area in the past years will be presented and discussed. The review starts with a brief overview of basic concepts, i.e. target macroscopic system and basic concepts from the lattice Boltzmann method. In the third section of this review we will discuss topics specific to combustion simulations, i.e. strategies to solve the energy balance equation, models developed for species transport equations, and introduction of compressibility effects into the lattice Boltzmann solver. The review closes with section four where key points are briefly listed and future prospects and challenges
are discussed.
## 2 Basic concepts
### Brief overview of target macroscopic system
Throughout the manuscript, the target set of macroscopic equations is the multi-component system of Navier-Stokes-Fourier equations (see, e.g. [39])
\[\frac{\partial\rho}{\partial t}+\frac{\partial\rho u_{\beta}}{ \partial x_{\beta}} =0, \tag{1}\] \[\frac{\partial\rho u_{\alpha}}{\partial t}+\frac{\partial\rho u_{ \alpha}u_{\beta}+p\delta_{\alpha\beta}}{\partial x_{\beta}} =\frac{\partial\tau_{\alpha\beta}}{\partial x_{\beta}},\] (2) \[\frac{\partial\rho E}{\partial t}+\frac{\partial\rho u_{\beta}(E +p/\rho)}{\partial x_{\beta}} =\frac{\partial\tau_{\alpha\beta}u_{\alpha}}{\partial x_{\beta}} -\frac{\partial q_{\beta}}{\partial x_{\beta}},\] (3) \[\frac{\partial\rho Y_{k}}{\partial t}+\frac{\partial\rho u_{ \beta}Y_{k}}{\partial x_{\beta}} =\frac{\partial\rho V_{k,\beta}Y_{k}}{\partial x_{\beta}}+\dot{ \omega}_{k}. \tag{4}\]
Here \(u_{\alpha}\) is the \(\alpha^{\text{th}}\) component of the fluid velocity, \(\rho\) is the mixture density, \(E\) is the total energy (sum of internal energy \(e\) and kinetic energy \(u_{\alpha}^{2}/2\)), \(Y_{k}\) is the mass fraction of species \(k\), and \(\delta_{\alpha\beta}\) is the Kronecker symbol (1 if \(\alpha=\beta\), 0 else). The above system is fully closed upon choosing
**Equation of state:** a thermodynamic closure, linking the state variables \(p\), \(\rho\), \(e\), \(T\) and \(Y_{k}\), e.g. following the perfect gas assumption \(p=\rho\,\bar{r}\,T=\rho\frac{\mathcal{R}}{W}T\).

**Transport models:** to define the species diffusion velocities \(V_{k,\beta}\), the heat flux \(q_{\beta}\) and the viscous stress tensor \(\tau_{\alpha\beta}\).

**Chemistry model:** to define the reaction rates \(\dot{\omega}_{k}\).
### Isothermal lattice Boltzmann for incompressible flows
The construction of a discrete kinetic solver like the lattice Boltzmann method has two main ingredients: (a) Reduction of the particles' speed continuous space to a discrete set, and (b) discretization of the resulting system of hyperbolic equations in physical space and time. In this section these two components will be briefly reviewed.
#### 2.2.1 Discrete velocity system and discrete equilibrium state
The rationale behind the construction of the lattice Boltzmann method consists in using a truncated version of the Boltzmann equation with a linear approximation to the collision term to recover the dynamics of the macroscopic equations of interest, here the isothermal Navier-Stokes and continuity equations.
_From Boltzmann-BGK to the discrete velocity Boltzmann equations._ Consistent with the terminology of the early literature, in the context of the present work we will refer to all methods using a form of the Boltzmann equation with a discrete set of particles' velocities as discrete-velocity models (DVM). In recent years interest in such models has been revived in the form of numerical methods such as the lattice Boltzmann equation.
DVM generally aim at approximating the distribution function with quadrature rules or similar integral approximations and using a discrete set of velocities:
\[\mathcal{V}:=\{\mathbf{c}_{i}\in\mathbb{R}^{D}\}, \tag{5}\]
changing the Boltzmann-Bhatnagar-Gross-Krook (BGK) equation [40] into a set of coupled hyperbolic partial differential equations:
\[\frac{\partial f_{i}}{\partial t}+c_{i\alpha}\frac{\partial f_{i}}{\partial x_ {\alpha}}=\frac{1}{\tau}\left(f_{i}^{\text{eq}}-f_{i}\right). \tag{6}\]
Constraints on the discrete equilibrium function, \(f_{i}^{\text{eq}}\), e.g. moments of the equilibrium distribution function to be correctly recovered, are identified via a Chapman-Enskog (CE) multi-scale expansion. For the continuity equation:
\[\frac{\partial\Pi_{0}^{\text{eq}}}{\partial t}+\frac{\partial\Pi_{\alpha}^{ \text{eq}}}{\partial x_{\alpha}}=0, \tag{7}\]
while for the momentum balance equations:
\[\frac{\partial\Pi_{\alpha}^{\text{eq}}}{\partial t}+\frac{\partial\Pi_{ \alpha\beta}^{\text{eq}}}{\partial x_{\beta}}-\frac{\partial}{\partial x_{ \beta}}\tau\left[\frac{\partial\Pi_{\alpha\beta}^{\text{eq}}}{\partial t}+ \frac{\partial\Pi_{\alpha\beta\gamma}^{\text{eq}}}{\partial x_{\gamma}} \right]=0, \tag{8}\]
where we have made use of the following notation:
\[\Pi_{\alpha_{1},\ldots,\alpha_{n}}=\int\prod_{\alpha=\alpha_{1}}^{\alpha_{n}} v_{\alpha}f(\mathbf{v},\mathbf{x},t)d\mathbf{v}, \tag{9}\]
meaning that for the system of interest one needs to correctly recover moments of orders zero through three of the equilibrium distribution function.
In the specific context of the lattice Boltzmann method, the Gauss-Hermite quadrature is utilized to satisfy most of the above-listed conditions on the discrete distribution functions, i.e.
\[\int P^{M}\left(\mathbf{v},\rho,\mathbf{u}\right)w\left(\mathbf{v}\right)d\mathbf{v}\cong\sum _{i=0}^{Q-1}w_{i}P^{M}\left(\mathbf{c}_{i},\rho,\mathbf{u}\right), \tag{10}\]
where \(P^{M}\left(\mathbf{v},\rho,\mathbf{u}\right)\) is a polynomial of order \(M\) of \(\mathbf{v}\) and \(w(\mathbf{v})\) is a function of the form:
\[w\left(\mathbf{v}\right)=\left(2\pi\right)^{-D/2}\exp\left(-\frac{\mathbf{v}^{2}}{2} \right). \tag{11}\]
For the quadrature to be applicable, the distribution function must be expanded as the product of a polynomial series and \(w(\mathbf{v})\), with the integration variable \(\mathbf{v}\) normalized by the reference temperature, i.e. \(\bar{r}T_{0}\). A change of variables of the form \(\mathbf{v}^{\prime}=(\mathbf{v}-\mathbf{u})/\sqrt{\bar{r}T}\), as in Grad's expansion, is not possible here as it would lead to discrete particle velocities that change in space and time and are not necessarily space-filling.
Choosing the abscissae, i.e. \(\mathbf{c}_{i}\) to be the roots of the Hermite polynomial of order \(Q\) and the weights as:
\[w_{i}=\frac{Q!}{\mathcal{H}_{Q-1}\left(\mathbf{c}_{i}\right)^{2}}, \tag{12}\]
results in the maximum algebraic degree of precision, i.e. \(2Q-1\). This means that the quadrature guarantees exact recovery of moments up to order \(\frac{2Q-1}{2}\).
In the case of the classical lattice Boltzmann stencil, a third-order quadrature is used, i.e. \(Q=3\) in 1-D, with:
\[c_{i}\in\{-\sqrt{3\bar{r}_{0}T_{0}},0,\sqrt{3\bar{r}_{0}T_{0}}\}, \tag{13}\]
and
\[w_{i}\in\{\frac{1}{6},\frac{2}{3},\frac{1}{6}\}. \tag{14}\]
The simplest multi-dimensional extension of this quadrature is obtained by taking a tensor product of the 1-D lattice. For instance, in 2-D, as illustrated in Fig. 1, this leads to the D2Q9 lattice with:
\[c_{ix}/\sqrt{3\bar{r}T_{0}}\in\{0,1,0,-1,0,1,-1,-1,1\}, \tag{15}\]
and
\[c_{iy}/\sqrt{3\bar{r}T_{0}}\in\{0,0,1,0,-1,1,1,-1,-1\}, \tag{16}\]
and
\[w_{i}\in\{\frac{4}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1 }{36},\frac{1}{36},\frac{1}{36},\frac{1}{36}\}. \tag{17}\]
A similar procedure is used to obtain the D3Q27 lattice for 3-D simulations.
Discrete equilibrium: polynomial formA number of different ways for constructing the discrete equilibrium have been proposed over the years. One of the early approaches, first discussed in [41] was to re-write the equilibrium as:
\[f^{\text{eq}}=\rho\left(2\pi\bar{r}_{0}T_{0}\right)^{-D/2}\exp\left\{\frac{- \mathbf{v}^{2}}{2\bar{r}_{0}T_{0}}\right\}\exp\left\{\frac{-\mathbf{u}^{2}+\mathbf{u}\cdot \mathbf{v}}{2\bar{r}_{0}T_{0}}\right\}, \tag{18}\]
and Taylor-expand the last term around Mach Ma= 0, i.e.
\[\exp\left\{\frac{-\mathbf{u}^{2}+\mathbf{u}\cdot\mathbf{v}}{2\bar{r}_{0}T_{0}}\right\}=1 +\frac{\mathbf{v}\cdot\mathbf{u}}{\bar{r}_{0}T_{0}}+\frac{\left(\mathbf{v}\cdot\mathbf{u} \right)^{2}}{2\bar{r}_{0}^{2}T_{0}^{2}}-\frac{\mathbf{u}^{2}}{2\bar{r}_{0}T_{0}}+ \mathcal{O}\left(\left\|\mathbf{u}\right\|^{3}/\bar{r}_{0}{T_{0}}^{3/2}\right), \tag{19}\]
ultimately leading to, after discretization of particles velocity space and application of the Gauss-Hermite quadrature, the second-order polynomial discrete equilibrium:
\[f_{i}^{\text{eq}}=w_{i}\rho\left(1+\frac{\mathbf{c}_{i}\cdot\mathbf{u}}{c_{s}^{2}}+ \frac{\left(\mathbf{c}_{i}\cdot\mathbf{u}\right)^{2}}{2c_{s}^{4}}-\frac{\mathbf{u}^{2}}{2 c_{s}^{2}}\right). \tag{20}\]
Figure 1: Illustration of the tensorial product process to build D2Q9 lattice from D1Q3.
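For concreteness, the equilibrium of Eq. (20) can be evaluated for all populations at once; the sketch below assumes the lattice velocities and weights from the tensor-product construction above and works in lattice units with \(c_{s}^{2}=1/3\):

```python
import numpy as np

def equilibrium(rho, u, c, w, cs2=1.0 / 3.0):
    """Second-order polynomial equilibrium, Eq. (20).

    rho: (Nx, Ny) density; u: (Nx, Ny, 2) velocity; c: (Q, 2) lattice velocities;
    w: (Q,) weights; cs2: squared lattice sound speed.
    """
    cu = np.einsum('qa,xya->xyq', c, u) / cs2          # c_i . u / c_s^2
    usq = np.einsum('xya,xya->xy', u, u) / cs2         # u . u / c_s^2
    return w[None, None, :] * rho[:, :, None] * (1.0 + cu + 0.5 * cu**2 - 0.5 * usq[:, :, None])
```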
An alternative construction based on an expansion of the distribution function in Hermite polynomials was proposed, which led to the following final form:
\[f_{i}^{\rm eq}=w_{i}\rho\sum_{n=0}^{N}\frac{1}{n!{c_{s}}^{2n}}\mathcal{H}_{n}( \boldsymbol{c}_{i}):a_{n}^{\rm eq}(\rho,\boldsymbol{u}), \tag{21}\]
where \(\mathcal{H}_{n}\) and \(a_{n}^{\rm eq}\) are tensors of rank \(n\) representing respectively the order \(n\) Hermite polynomial and coefficient.
Alternatively the polynomial equilibrium can also be constructed via the product form. The product form of the equilibrium distribution function (EDF) is a special realization of the moments matching approach. Considering the standard discrete velocity set D3Q27, where D=3 stands for three dimensions and Q=27 is the number of discrete velocities,
\[\boldsymbol{c}_{i}=(c_{ix},c_{iy},c_{iz}),\ c_{i\alpha}\in\{-1,0,1\}, \tag{22}\]
one first defines a triplet of functions in two variables, \(\xi_{\alpha}\) and \(\zeta_{\alpha\alpha}\),
\[\Psi_{0}(\xi_{\alpha},\zeta_{\alpha\alpha}) = 1-\zeta_{\alpha\alpha}, \tag{23}\] \[\Psi_{1}(\xi_{\alpha},\zeta_{\alpha\alpha}) = \frac{\xi_{\alpha}+\zeta_{\alpha\alpha}}{2},\] (24) \[\Psi_{-1}(\xi_{\alpha},\zeta_{\alpha\alpha}) = \frac{-\xi_{\alpha}+\zeta_{\alpha\alpha}}{2}, \tag{25}\]
and considers a product-form associated with the discrete velocities \(\boldsymbol{c}_{i}\) (22),
\[\Psi_{i}=\Psi_{c_{ix}}(\xi_{x},\zeta_{xx})\Psi_{c_{iy}}(\xi_{y},\zeta_{yy}) \Psi_{c_{iz}}(\xi_{z},\zeta_{zz}). \tag{26}\]
All pertinent populations below are determined by specifying the parameters \(\xi_{\alpha}\) and \(\zeta_{\alpha\alpha}\) in the product-form (26). The two-dimensional version of the model on the D2Q9 lattice is obtained by omitting the \(z\)-component in all formulas. After matching moments with their continuous counter-parts the parameters are set as,
\[\xi_{\alpha}=u_{\alpha}, \tag{27}\] \[\zeta_{\alpha\alpha}=c_{s}^{2}+u_{\alpha}^{2}, \tag{28}\]
and the local equilibrium populations are represented with the product-form (26),
\[f_{i}^{\rm eq}=\rho\prod_{\alpha=x,y,z}\Psi_{c_{i\alpha}}\left(u_{\alpha},c_{ s}^{2}+u_{\alpha}^{2}\right). \tag{29}\]
This form of the discrete equilibrium populations, when \(c_{s}^{2}=\bar{r}_{0}T_{0}/3\), is equivalent to the third-order quadrature-based scheme with a full expansion of the distribution function.
_Alternative to polynomial equilibria: entropic equilibria._ As an alternative to the classical discrete equilibrium construction approach, where all degrees of freedom are used to fulfill moment constraints, the entropic approach adds the minimization of an entropy functional to the list of constraints, changing the equilibrium construction problem into a constrained minimization problem. While a number of different discrete entropy functions have been proposed in the literature, the most commonly used one is:
\[H_{w_{i},c_{i}}=\sum_{i=1}^{Q}f_{i}\ln\left(\frac{f_{i}}{w_{i}}\right). \tag{30}\]
Minimization of this functional under constraints on moments of order zero and one leads to the following well-known entropic equilibrium:
\[f_{i}^{\rm eq}=w_{i}\rho\prod_{\alpha=x,y}\left(2-\sqrt{{u_{\alpha}}^{2}/c_{s} ^{2}+1}\right)\left(\frac{2u_{\alpha}+\sqrt{{u_{\alpha}}^{2}/c_{s}^{2}+1}}{1-u_ {\alpha}}\right)^{c_{i,\alpha}}. \tag{31}\]
One of the most interesting features of this equilibrium, contrary to polynomial equilibria, is that, as demonstrated in [42, 43], it guarantees unconditional linear stability.
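The entropic equilibrium of Eq. (31) can be evaluated directly; the sketch below (our own illustrative code, assuming the D2Q9 lattice in lattice units) implements the formula as written and is only meaningful for \(|u_{\alpha}|<1\).

```python
import numpy as np

# Minimal sketch: evaluating the entropic equilibrium of Eq. (31) on D2Q9
# (lattice units, c_s^2 = 1/3). Purely illustrative.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0

def f_eq_entropic(rho, u):
    """Entropic equilibrium of Eq. (31); valid for |u_alpha| < 1 in lattice units."""
    s = np.sqrt(u**2 / cs2 + 1.0)                   # one value per direction alpha
    prefac = np.prod(2.0 - s)                       # product over alpha of (2 - sqrt(...))
    base = (2.0 * u + s) / (1.0 - u)                # per-direction base of the power
    powers = np.prod(base[None, :] ** c, axis=1)    # product over alpha of base^{c_i,alpha}
    return w * rho * prefac * powers

print(f_eq_entropic(1.0, np.array([0.1, -0.05])))
```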
#### 2.2.2 Lattice Boltzmann equations
_From discrete velocity Boltzmann to the lattice Boltzmann method._ To arrive at the final form of the lattice Boltzmann equations, two main ingredients are needed: (a) integration of the discrete velocity Boltzmann equations along their _constant_ characteristics and (b) a re-definition of the discrete distribution functions. The former step results in:
\[f_{i}\left(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t\right)-f_{i}\left(\mathbf{x},t \right)=\int_{t}^{t+\delta t}\Omega_{i}\left(\mathbf{x}(t^{\prime}),t^{\prime} \right)dt^{\prime}, \tag{32}\]
where the term on the right-hand side, representing collision, has to be approximated via the trapezoidal rule, in order to keep the scheme second-order accurate:
\[\int_{t}^{t+\delta t}\Omega_{i}\left(\mathbf{x}(t^{{}^{\prime}}),t^{{}^{\prime}} \right)dt^{{}^{\prime}}=\frac{\delta t}{2}\Omega_{i}\left(\mathbf{x},t\right)\,+ \frac{\delta t}{2}\Omega_{i}\left(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t\right) +\mathcal{O}\left(\delta t^{3}\right). \tag{33}\]
However, as can be observed, direct application of the trapezoidal rule would make the scheme implicit and therefore unattractive in terms of efficiency. The second ingredient, i.e. the redefinition of the discrete distribution function as:
\[\bar{f}_{i}=f_{i}-\frac{\delta t}{2}\Omega_{i}, \tag{34}\]
changes the system of equations into a fully explicit scheme:
\[\bar{f}_{i}\left(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t\right)-\bar{f}_{i}\left(\mathbf{x},t\right)=\frac{\delta t}{\bar{\tau}}\left(f_{i}^{\rm eq}\left(\mathbf{x},t\right)-\bar{f}_{i}\left(\mathbf{x},t\right)\right), \tag{35}\]
where \(\bar{\tau}\) is now defined as:
\[\bar{\tau}=\tau+\delta t/2. \tag{36}\]
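Putting the pieces together, the following minimal sketch illustrates the resulting collide-and-stream update of Eq. (35) on a periodic D2Q9 grid, assuming the second-order polynomial equilibrium of Eq. (20); the grid size, relaxation time and initial state are illustrative, and \(\bar{\tau}\) is expressed in units of \(\delta t\).

```python
import numpy as np

# Minimal sketch, under simplifying assumptions, of the collide-and-stream update
# of Eq. (35) with the redefined populations of Eq. (34) on a periodic D2Q9 grid.
nx, ny = 64, 64
tau_bar = 0.8                                  # \bar{tau}/dt, illustrative value
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0

def equilibrium(rho, ux, uy):
    cu = np.einsum('ia,axy->ixy', c, np.stack([ux, uy]))
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + cu/cs2 + cu**2/(2*cs2**2) - usq/(2*cs2))

f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
for _ in range(10):
    rho = f.sum(axis=0)                                       # zeroth moment
    ux = np.einsum('i,ixy->xy', c[:, 0], f) / rho             # first moments
    uy = np.einsum('i,ixy->xy', c[:, 1], f) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau_bar             # BGK collision, Eq. (35)
    for i, (cx, cy) in enumerate(c):                          # streaming to x + c_i dt
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
```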
_The lattice Boltzmann method for incompressible flows._ A multi-scale analysis of the above-listed system of equations shows that it recovers the _isothermal_ continuity plus Navier-Stokes system of equations for an
ideal equation of state at reference temperature:
\[\bar{r}_{0}T_{0}=\frac{\delta x^{2}}{3\delta t^{2}}, \tag{37}\]
where the coefficient \(1/3\) is specific to the third-order quadrature. Note that the recovered system of macroscopic equations further admits a defect in the viscous stress tensor; for the classical second-order polynomial expansion, defects are present in both the shear and bulk viscosities, scaling in both cases as \(\propto\mathcal{U}_{\alpha}^{2}\delta t^{2}/\delta x^{2}\), where \(\mathcal{U}_{\alpha}\) is the characteristic velocity along the \(\alpha\)-axis. For the full polynomial expansion or product-form equilibria, this defect is only present in the bulk viscosity.
This means that under acoustic scaling, i.e. \(\delta x/\delta t=\text{const}\) the solver converges to the _compressible isothermal Navier-Stokes_ equations, as for a fixed characteristic velocity \(\mathcal{U}\) the Mach number remains constant in the limit of \(\delta t\to 0\). Furthermore, under acoustic scaling the defects in the effective viscosities do not vanish. Under diffusive scaling on the other hand, i.e. \(\delta x^{2}/\delta t=\text{constant}\), the solver converges to the _incompressible Navier-Stokes_ equations, as \(\mathcal{U}/c_{s}\to 0\) in the limit of \(\delta t\to 0\) and the defect in the effective viscosities also goes to zero.
## 3 Lattice Boltzmann models for compressible reacting flows
We have now introduced the main ingredients of classical lattice Boltzmann methods, and shown that they allow one to recover the continuity (1) and momentum (2) equations of a weakly compressible gas at constant temperature \(T_{0}\). This is not sufficient for combustion applications, where the energy (3) and species (4) balance equations are also required.
This Section is divided into 3 main subsections. First, we shall explore different alternatives for the resolution of the additional equations (energy and species) in Section 3.1. Second, we will detail the required changes to the lattice Boltzmann formulation in Section 3.2. Finally, we will list the reactive flow configurations successfully simulated by LBM solvers and discuss their performance in Section 3.3.
### Energy and species balance equations
#### 3.1.1 Double distribution function lattice Boltzmann approach for thermal flows
_Kinetic models._ Historically, the starting point of double distribution function (DDF) approaches is rooted in the need for simulations with variable Prandtl numbers and specific heat capacities, as alternatives to Holway's ellipsoidal statistics [44] or Shakhov's model [45]. This is usually achieved by introducing a second distribution function \(g\) carrying a form of energy following [46], which is not uniquely defined. The earliest occurrence of a double distribution function approach is documented in [47], where the authors introduced:
\[g(\mathbf{v},\mathbf{x},t)=\frac{\left(v_{\alpha}-u_{\alpha}\right)^{2}}{2}f(\mathbf{v}, \mathbf{x},t). \tag{38}\]
Multiplying the Boltzmann equation, i.e. the balance law for \(f\), by the coefficient appearing in the definition of the \(g\)-distribution function, one obtains the balance law for the latter:
\[\frac{\partial g}{\partial t}+v_{\alpha}\frac{\partial g}{\partial x_{\alpha} }=\frac{1}{\tau_{g}}\left(g^{\text{eq}}-g\right)+fq, \tag{39}\]
where the additional non-homogeneous contribution \(q\) is:
\[q=\left(u_{\alpha}-v_{\alpha}\right)\left[\partial_{t}u_{\alpha}+v_{\beta}\frac{ \partial u_{\alpha}}{\partial x_{\beta}}\right]. \tag{40}\]
In this model, the total energy \(E\) is computed as:
\[\rho E=\int_{\mathbf{v}}\left[\frac{\mathbf{u}^{2}}{2}f(\mathbf{v},\mathbf{x},t)+g(\mathbf{v},\mathbf{ x},t)\right]d\mathbf{v}. \tag{41}\]
Some comments on this approach are necessary:
* This model, through the choice of parameter \(\tau_{g}\) allows for a variable Prandtl number Pr.
* The model assumes a monatomic gas, as no degrees of freedom other than translational ones are taken into account.
* The model involves space and time derivatives of macroscopic fields.
To alleviate the last issue, Guo et al. proposed to carry total energy with \(g\) instead [48]:
\[g(\mathbf{v},\mathbf{x},t)=\frac{v_{\alpha}^{2}}{2}f(\mathbf{v},\mathbf{x},t). \tag{42}\]
While this choice of a second distribution function leads to a much simpler balance law for \(g\), it also comes with a limitation on the Prandtl number. Contrary to the previous choice of \(g\) carrying internal energy, where one could easily vary Pr by changing \(\tau_{g}\), here the relaxation time in the collision operator controls the relaxation of both internal and kinetic energy, therefore also affecting viscous heating. To allow for a variable Pr, the authors proposed to decompose the collision term into kinetic and internal contributions, leading to the following balance law:
\[\frac{\partial g}{\partial t}+v_{\alpha}\frac{\partial g}{\partial x_{\alpha }}=\frac{1}{\tau_{g}}\left(g^{\rm eq}-g\right)+\frac{Z}{\tau_{gf}}\left(f^{ \rm eq}-f\right), \tag{43}\]
where
\[Z=\frac{v_{\alpha}^{2}}{2}-\frac{\left(v_{\alpha}-u_{\alpha}\right)^{2}}{2}, \tag{44}\]
and
\[\frac{1}{\tau_{gf}}=\frac{1}{\tau_{g}}-\frac{1}{\tau}. \tag{45}\]
Since \(g\) carries the total energy it is computed solely as its zeroth-order moment:
\[\rho E=\int_{\mathbf{v}}g(\mathbf{v},\mathbf{x},t)d\mathbf{v}. \tag{46}\]
In the same contribution the authors proposed a more generalized framework allowing to incorporate additional non-translational degrees of freedom into the model by defining \(g\) as:
\[g(\mathbf{v},\mathbf{x},t)=\frac{v_{\alpha}^{2}+\eta_{\beta}^{2}}{2}f(\mathbf{v},\mathbf{x},t), \tag{47}\]
where \(\mathbf{\eta}\) is a vector with \(\delta\) components, \(\delta\) being the number of additional degrees of freedom, and summation over both \(\alpha\) and \(\beta\) is assumed. In this model the equilibrium distribution function is:
\[f^{\rm eq}(\mathbf{v},\mathbf{\eta},\mathbf{x},t)=\rho(2\pi rT)^{-(\delta+D)/2}\exp\left\{-\frac{\left(\mathbf{v}-\mathbf{u}\right)^{2}+\mathbf{\eta}^{2}}{2rT}\right\}, \tag{48}\]
and the total energy is computed as:
\[\rho E=\int_{\mathbf{v}}\int_{\mathbf{\eta}}g(\mathbf{v},\mathbf{\eta},\mathbf{x},t)d\mathbf{\eta}d\bm {v}. \tag{49}\]
While Guo et al. originally proposed this decoupling for the low Mach limit, it was extended to compressible regimes in [49], where the authors used a thirteen-velocity lattice for the \(f\) distribution function to eliminate the deviations in the third-order moments of the equilibrium distribution function. A realization of this model on the standard third-order quadrature-based lattice was proposed in [50]. The approach originally proposed in [48] has been routinely used since then in a large number of publications for both compressible and incompressible flows, see for instance [51, 52, 53, 25]. Another realization of the double distribution function method relying on internal energy was also proposed in [54]. As noted by the authors, the balance equation of Eq. (43) can be re-written as:
\[\frac{\partial g}{\partial t}+v_{\alpha}\frac{\partial g}{\partial x_{\alpha }}=\frac{1}{\tau}\left(g^{\rm eq}-g\right)+\frac{1}{\tau_{gf}}\left(g^{*}-g \right), \tag{50}\]
where, in the case of the model in [48]:
\[g^{*}=g^{\rm eq}+Z\left(f-f^{\rm eq}\right), \tag{51}\]
while for [54]:
\[g^{*}=g^{\rm eq}+2v_{\alpha}u_{\beta}\left(\Pi_{\alpha\beta}-\Pi_{\alpha\beta }^{\rm eq}\right). \tag{52}\]
Note that both realizations lead to the same hydrodynamic equations and, in the case of third-order quadrature-based lattices, even to the same discrete equations [54]. This realization has also been used for a variety of compressible flow simulations, see for instance [27, 55].
_Lattice Boltzmann equations._ Discretization in the particle velocity space \(\mathbf{v}\) proceeds very similarly to that of the probability distribution function \(f\), either through projection onto the space of Hermite polynomials or via the product-form construction. In the product-form approach discussed in [27, 54], similarly to Eq. (29):
\[g_{i}^{\rm eq}=\rho\prod_{\alpha=x,y,z}\Psi_{c_{i\alpha}}\left(\mathcal{O}_{ \alpha},\mathcal{O}_{\alpha}^{2}\right)E, \tag{53}\]
where the operator \(\mathcal{O}_{\alpha}\) acts on any smooth function \(A(\bar{r}T,u_{\alpha})\) as:
\[\mathcal{O}_{\alpha}A=\bar{r}T\frac{\partial A}{\partial u_{\alpha}}+u_{ \alpha}A. \tag{54}\]
The discrete-velocity equations can then be integrated along characteristics to obtain the corresponding collision-streaming equations:
\[\bar{g}_{i}(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t)-\bar{g}_{i}(\mathbf{x},t)=\frac{ \delta t}{\bar{\tau}}\left(g_{i}^{\rm eq}(\mathbf{x},t)-\bar{g}_{i}(\mathbf{x},t)\right) +\frac{\delta t}{\bar{\tau}_{gf}}\left(g_{i}^{*}(\mathbf{x},t)-\bar{g}_{i}(\mathbf{x},t )\right), \tag{55}\]
where the new distribution function is:
\[\bar{g}_{i}=g_{i}-\frac{\delta t}{2}\left[\frac{1}{\tau}\left(g_{i}^{\rm eq}-g _{i}\right)+\frac{1}{\tau_{gf}}\left(g_{i}^{*}-g_{i}\right)\right] \tag{56}\]
and
\[\frac{\delta t}{\bar{\tau}_{gf}}=\frac{\delta t}{\bar{\tau}_{g}}-\frac{\delta t }{\bar{\tau}}, \tag{57}\]
with
\[\frac{\bar{\tau}_{g}}{\delta t}=\frac{\lambda}{pC_{v}}+\frac{1}{2}. \tag{58}\]
The total energy can then be obtained by summing discrete distribution functions:
\[\rho E=\sum_{i=0}^{Q-1}g_{i}. \tag{59}\]
A multi-scale analysis shows that the models above, at the Euler level, recover:
\[\frac{\partial\rho E}{\partial t^{(1)}}+\frac{\partial\rho u_{\alpha}H}{ \partial x_{\alpha}}+\frac{\partial\rho u_{\alpha}u_{\beta}^{2}/2}{\partial x _{\alpha}}=0, \tag{60}\]
where \(H\) is the enthalpy. At the Navier-Stokes level:
\[\frac{\partial\rho E}{\partial t^{(2)}}+\frac{\partial\Pi_{\alpha}(g_{i}^{(1) })}{\partial x_{\alpha}}=0, \tag{61}\]
with:
\[\Pi_{\alpha}(g_{i}^{(1)})=-\left(\frac{\bar{\tau}_{g}}{\delta t}-\frac{1}{2} \right)p\frac{\partial H}{\partial x_{\alpha}}+u_{\beta}\Pi_{\alpha\beta}(f_{ i}^{(1)}), \tag{62}\]
where the first term is the Fourier diffusive flux while the second term is viscous heating. The double distribution function approach in combination with a proper lattice Boltzmann solver for density and momentum balance, detailed in next sections, has been used to model trans- and supersonic flows, for instance in [27]. A few results are illustrated in Fig. 2.
_Multi-species flows._ For the extension of this model to the case of multi-species flows in the context of a mixture-averaged formulation, a few points must be noted:
* In all models discussed here, at the \(\epsilon^{2}\) order a Fourier flux of this form is recovered, see Eq. (62): \[-\left(\frac{\bar{\tau}_{g}}{\delta t}-\frac{1}{2}\right)p\frac{\partial H}{\partial x_{\alpha}}=-\lambda\frac{\partial T}{\partial x_{\alpha}},\] (63) which holds only if the enthalpy is a function of temperature alone. For a mixture-averaged formulation with
multiple species, \(H=H(T,Y_{k})\), which would lead to: \[-\left(\frac{\overline{\tau}_{g}}{\delta t}-\frac{1}{2}\right)p\frac{\partial H }{\partial x_{\alpha}}=-\lambda\frac{\partial T}{\partial x_{\alpha}}-\left( \frac{\overline{\tau}_{g}}{\delta t}-\frac{1}{2}\right)p\sum_{k=0}^{N_{sp}-1}H _{k}\frac{\partial Y_{k}}{\partial x_{\alpha}},\] (64) where \(H_{k}\) is the enthalpy of the \(k^{\rm th}\) species.
* In multi-species flows, diffusive mass flux leads to a net transport of enthalpy which is absent in single-component flows.
A solution to both these shortcomings was proposed in [56] to recover consistent hydrodynamics, where the pseudo-equilibrium \(g_{i}^{*}\) is amended with two additional terms, one neutralizing the error in the Fourier diffusion and one introducing enthalpy flux through mass diffusion:
\[g_{i}^{*}=g_{i}^{\rm eq}+\frac{2w_{i}}{c_{s}^{2}}c_{i\alpha}\left[u_{\beta} \left(\Pi_{\alpha\beta}-\Pi_{\alpha\beta}^{\rm eq}\right)\ +p\sum_{k=0}^{N_{sp}-1}H_{k}\frac{\partial Y_{k}}{ \partial x_{\alpha}}+\rho\sum_{k=0}^{N_{sp}-1}Y_{k}H_{k}V_{k\alpha}\right]. \tag{65}\]
#### 3.1.2 Kinetic models for species balance equations
Over the past decades, and starting in the early 2000's [57; 58], various attempts at developing lattice Boltzmann-based models for mixtures have been documented, see for instance [59; 60; 61; 62; 63]. Some of these models are reviewed in this section.
_Thermal mixture-averaged model of Kang et al._ In [64], the authors proposed a multi-component thermal model for catalytic systems. The model is an extension of previous work documented in [65; 66; 67; 68]. It consists of \(N_{sp}\) sets of lattice Boltzmann solvers, i.e. one per species:
\[g_{ki}(\mathbf{x}+\mathbf{c}_{ki}\delta t,t+\delta t)-g_{ki}(\mathbf{x},t)=\frac{\delta t} {\overline{\tau}_{k1}}\left(g_{ki}^{*}(\rho_{k},\mathbf{u}_{k})-g_{ki}(\mathbf{x},t) \right)+\frac{\delta t}{\overline{\tau}_{k2}}\left(g_{ki}^{\rm eq}(\rho_{k}, \mathbf{u})-g_{ki}(\mathbf{x},t)\right)+\psi_{ki}. \tag{66}\]
Figure 2: Illustration of applications of the model of Eq. (55). (Left) Sound pressure field for shock–vortex interaction with advection Mach number of 1.2 and vortex Mach number set to 0.25 at \(t^{*}=6\). (Right) Iso-surface of velocity divergence colored by local Mach number for compressible decaying turbulence at Ma\(=0.5\) and \(t^{*}=0.4\). Images are reproduced from [27].
The first point to note in this model is that post-streaming discrete distribution functions migrate to \(\mathbf{x}+\mathbf{c}_{ki}\delta t\), meaning that each species' lattice has a different discrete velocity size. As discussed in previous sections, in the lattice Boltzmann method the time-step, grid-size and reference temperature are tied through:
\[\frac{\delta x^{2}}{\delta t^{2}}=\frac{\mathcal{R}T_{0}}{W}, \tag{67}\]
which in the context of this model where \(W=W_{k}\) is different for each species, and assuming that the time-step size is the same for all solvers, would mean:
\[\|\mathbf{c}_{ki}\|=\frac{\delta x_{k}}{\delta t}=\sqrt{\frac{\mathcal{R}T_{0}}{W_{k}}}, \tag{68}\]
i.e. not all species will propagate on-lattice. To overcome this issue, and following [69], the authors proposed to set the grid-size to that needed for the lightest species in the system, and for the other species to use interpolation in order to reconstruct distribution functions on the grid. The equilibrium, \(g_{ki}^{\text{eq}}(\rho_{k},\mathbf{u})\), follows the product-form equilibrium of Eq. (29) with a few differences, namely:
\[\xi_{k\alpha}=u_{\alpha}\sqrt{W_{k}}, \tag{69}\] \[\zeta_{k\alpha\alpha}=T+W_{k}u_{\alpha}^{2}, \tag{70}\]
while for the pseudo-equilibrium \(g_{ki}^{*}(\rho_{k},\mathbf{u}_{k})\):
\[\xi_{k\alpha}=u_{k\alpha}\sqrt{W_{k}}, \tag{71}\] \[\zeta_{k\alpha\alpha}=T+W_{k}u_{k\alpha}^{2}. \tag{72}\]
In this model the second relaxation time, \(\bar{\tau}_{k2}\) sets the diffusivity to:
\[\frac{\bar{\tau}_{k2}}{\delta t}=\frac{\rho_{k}D_{k}}{p_{k}}+\frac{1}{2}, \tag{73}\]
where \(D_{k}\) is the mixture-average diffusion coefficient. The viscosity is set through the first relaxation time and using Wilke's formula:
\[\frac{\bar{\tau}_{k1}}{\delta t}=\frac{\mu_{k}}{p\sum_{k^{\prime}=0}^{N_{sp}-1 }X_{k^{\prime}}\phi_{kk^{\prime}}}+\frac{1}{2}, \tag{74}\]
with
\[\phi_{kk^{\prime}}=\frac{1}{\sqrt{8}}\frac{1}{\sqrt{1+\frac{W_{k}}{W_{k^{ \prime}}}}}\Bigg{[}1+\sqrt{\frac{\mu_{k}}{\mu_{k^{\prime}}}}\bigg{(}\frac{W_{ k^{\prime}}}{W_{k}}\bigg{)}^{1/4}\Bigg{]}^{2}. \tag{75}\]
The term \(\psi_{ki}\) in Eq. (66) is a correction term accounting for: (a) a correction velocity ensuring that the global diffusive mass flux is null, and (b) corrections for equilibrium moments of order three and four not recovered by the first-neighbour lattice. The latter terms allow the scheme to recover the proper viscous
stress tensor and non-equilibrium heat flux. In this model the macroscopic properties are computed as:
\[\rho_{k}=\sum_{i=0}^{Q-1}g_{ki}, \tag{76}\] \[\rho_{k}u_{k\alpha}=\sum_{i=0}^{Q-1}c_{ki\alpha}g_{ki},\] (77) \[\rho_{k}E_{k}=\sum_{i=0}^{Q-1}c_{ki\alpha}^{2}g_{ki}. \tag{78}\]
A multi-scale expansion of the model shows that it recovers the mixture-averaged multi-species equations and the Hirschfelder-Curtiss approximation with the mass corrector.
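As an illustration of how the transport coefficients enter the model, the sketch below evaluates Wilke's mixture rule of Eqs. (74)-(75) and the corresponding relaxation times \(\bar{\tau}_{k1}\); all input values are placeholders chosen for the example only.

```python
import numpy as np

# Minimal sketch: Wilke's mixture rule, Eqs. (74)-(75), for the species relaxation
# times. Inputs (mole fractions X, molar masses W, viscosities mu, pressure p,
# time step dt) are illustrative placeholders.
def wilke_relaxation_times(X, W, mu, p, dt):
    N = len(X)
    phi = np.empty((N, N))
    for k in range(N):
        for kp in range(N):
            phi[k, kp] = (1.0 / np.sqrt(8.0)
                          / np.sqrt(1.0 + W[k] / W[kp])
                          * (1.0 + np.sqrt(mu[k] / mu[kp]) * (W[kp] / W[k])**0.25)**2)
    denom = phi @ X                                   # sum over k' of X_{k'} phi_{kk'}
    return dt * (mu / (p * denom) + 0.5)              # \bar{tau}_{k1} of Eq. (74)

# Two-species example (values purely illustrative):
print(wilke_relaxation_times(np.array([0.7, 0.3]), np.array([28.0, 2.0]),
                             np.array([1.8e-5, 0.9e-5]), 101325.0, 1e-6))
```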
_Force-based approach of Vienne et al._ In [70], following the kinetic model of [71], Vienne et al. proposed a lattice Boltzmann model for isothermal multi-species mixtures recovering the Maxwell-Stefan system of equations. Considering a mixture made up of \(N_{sp}\) individual species, they proposed a coupled system of \(N_{sp}\) lattice Boltzmann solvers:
\[g_{ki}(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t)-g_{ki}(\mathbf{x},t)=\frac{\delta t}{ \bar{\tau}_{k}}\left(g_{ki}^{\text{eq}}(\rho_{k},\mathbf{u}_{k})-g_{ki}(\mathbf{x},t) \right)+\mathcal{S}_{i}, \tag{79}\]
where \(\mathcal{S}_{i}\) introduces external body forces, realized using Guo's approach [72]:
\[\mathcal{S}_{i}=\left(1-\frac{\delta t}{2\bar{\tau}_{k}}\right)w_{i}\left( \frac{c_{i\alpha}-u_{k\alpha}}{c_{s}^{2}}+\frac{(c_{i\beta}u_{k\beta})c_{i \alpha}}{c_{s}^{4}}\right)F_{k\alpha}, \tag{80}\]
where \(\mathbf{F}_{k}\) represents the body force. In this model
\[\rho_{k}=\sum_{i=0}^{Q-1}g_{ki}, \tag{81}\]
and
\[\rho_{k}u_{k\alpha}=\sum_{i=0}^{Q-1}c_{i\alpha}g_{ki}+F_{k\alpha}. \tag{82}\]
The interaction between different species driving diffusion is introduced via a body force defined as:
\[F_{k\alpha}=-p\sum_{k^{\prime}=0}^{N_{sp}-1}\frac{X_{k}X_{k^{\prime}}}{ \mathcal{D}_{kk^{\prime}}}(u_{k\alpha}-u_{k^{\prime}\alpha}), \tag{83}\]
where \(\mathcal{D}_{kk^{\prime}}\) represents the binary diffusion coefficients. As noted by the authors, the circular inter-dependence between the force of Eq. (83) and the momenta of the individual species in Eq. (82) makes the scheme implicit. A multi-scale analysis shows that this model recovers the following multi-component isothermal mass
\[\frac{\partial\rho_{k}}{\partial t}+\frac{\partial\rho_{k}u_{k\alpha}}{ \partial x_{\alpha}}=0, \tag{84}\]
and momentum balance equations
\[\frac{\partial\rho_{k}u_{k\alpha}}{\partial t}+\frac{\partial\rho_{k}u_{k\alpha} u_{k\beta}}{\partial x_{\beta}}+\frac{\partial p_{k}}{\partial x_{\alpha}}-\frac{ \partial}{\partial x_{\beta}}\left(\mu_{k}\frac{\partial u_{k\beta}}{\partial x _{\alpha}}+\mu_{k}\frac{\partial u_{k\alpha}}{\partial x_{\beta}}\right)+p \sum_{k^{\prime}=0}^{N_{sp}-1}\frac{X_{k}X_{k^{\prime}}}{\mathcal{D}_{kk^{ \prime}}}(u_{k\alpha}-u_{k^{\prime}\alpha})=0, \tag{85}\]
where the species viscosities \(\mu_{k}\) are defined as:
\[\frac{\bar{\tau}_{k}}{\delta t}=\frac{\mu_{k}}{\rho_{k}c_{s}^{2}}+\frac{1}{2}. \tag{86}\]
The model has been successfully used to study miscible multi-component flow behaviors such as viscous fingering, see Fig. 3. An extension of this model to thermal and reacting cases is yet to be done.
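A short sketch of the inter-species friction force of Eq. (83), which drives Maxwell-Stefan diffusion in this model, is given below; array shapes and values are illustrative, and the closing comment recalls why the coupling with Eq. (82) makes the scheme implicit.

```python
import numpy as np

# Minimal sketch: the inter-species friction force of Eq. (83).
# Shapes: X (N,), u (N, D), D_bin (N, N) symmetric binary diffusion coefficients.
def stefan_maxwell_force(p, X, u, D_bin):
    N, Dim = u.shape
    F = np.zeros((N, Dim))
    for k in range(N):
        for kp in range(N):
            if kp != k:
                F[k] -= p * X[k] * X[kp] / D_bin[k, kp] * (u[k] - u[kp])
    return F

# Illustrative two-species call (values are placeholders):
F = stefan_maxwell_force(101325.0,
                         np.array([0.6, 0.4]),
                         np.array([[0.01, 0.0], [-0.02, 0.0]]),
                         np.array([[0.0, 2e-5], [2e-5, 0.0]]))
print(F)
# Because F_k enters the species momentum of Eq. (82), which in turn enters F_k,
# the coupled system is implicit and must be solved iteratively (or via a linear
# solve) at each grid point.
```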
_Model of Sawant et al._ In [56; 74], the authors proposed a kinetic model recovering the Stefan-Maxwell diffusion model. Each component is described by a set of populations \(g_{ki}\). The discrete-velocity time evolution equation is,
\[\frac{\partial g_{ki}}{\partial t}+c_{i\alpha}\frac{\partial g_{ki}}{\partial x_{\alpha}}=\sum_{k^{\prime}\neq k}\frac{1}{\theta_{kk^{\prime}}}\left[\left(\frac{g_{ki}^{\text{eq}}-g_{ki}}{\rho_{k}}\right)-\left(\frac{g_{k^{\prime}i}^{\text{eq}}-g_{k^{\prime}i}^{*}}{\rho_{k^{\prime}}}\right)\right]. \tag{87}\]
The species densities are computed as zeroth-order moment of the discrete distribution functions:
\[\rho_{k}=\sum_{i=0}^{Q-1}g_{ki}. \tag{88}\]
The symmetric set of relaxation times \(\theta_{kk^{\prime}}=\theta_{k^{\prime}k}\) is related to the binary diffusion coefficients. The first-order moments of the distribution functions are,
\[\rho_{k}u_{k\alpha}=\sum_{i=0}^{Q-1}g_{ki}c_{i\alpha}. \tag{89}\]
Figure 3: Illustration of application of multi-species model of [70]. Evolution of viscous fingering instability for a system with two species. Image reproduced from [73].
The quasi-equilibrium populations \(g^{*}_{ki}\) satisfy the following constraints on moments,
\[\sum_{i=0}^{Q-1}g^{*}_{ki} =\rho_{k}, \tag{90}\] \[\sum_{i=0}^{Q-1}g^{*}_{ki}c_{i\alpha} =\rho_{k}u_{k\alpha}. \tag{91}\]
The momenta of the individual species sum up to the mixture momentum,
\[\sum_{k=0}^{N_{sp}-1}\rho_{k}u_{k\alpha}=\rho u_{\alpha}. \tag{92}\]
The equilibrium populations \(g^{\rm eq}_{ki}\) are subject to the following constraints:
\[\sum_{i=0}^{Q-1}g^{\rm eq}_{ki} =\rho_{k}, \tag{93}\] \[\sum_{i=0}^{Q-1}g^{\rm eq}_{ki}c_{i\alpha} =\rho_{k}u_{\alpha},\] (94) \[\sum_{i=0}^{Q-1}g^{\rm eq}_{ki}c_{i\alpha}c_{i\beta} =p_{k}\delta_{\alpha\beta}+\rho_{k}u_{\alpha}u_{\beta}. \tag{95}\]
In Eq. (95), the partial pressure \(p_{k}\) depends on the mixture temperature \(T\) which is obtained from the energy balance lattice Boltzmann solver. Noting that
\[\theta_{kk^{\prime}}=\frac{\mathcal{D}_{kk^{\prime}}}{pX_{k}X_{k^{\prime}}}, \tag{96}\]
and using the equation of state the kinetic model can be re-written as:
\[\frac{\partial g_{ki}}{\partial t}+c_{i\alpha}\frac{\partial g_{ki}}{ \partial x_{\alpha}}=\,\sum_{k^{\prime}\neq k}\left(\frac{\bar{W}\mathcal{R}T }{W_{k}W_{k^{\prime}}\mathcal{D}_{kk^{\prime}}}\right)\left[Y_{k^{\prime}} \left(g^{\rm eq}_{ki}-g_{ki}\right)-Y_{k}\left(g^{\rm eq}_{k^{\prime}i}-g^{*}_ {k^{\prime}i}\right)\right]. \tag{97}\]
This equation, for the sake of convenience, is recast in the form of a relaxation equation:
\[\frac{\partial g_{ki}}{\partial t}+c_{i\alpha}\frac{\partial g_{ki}}{\partial x_{\alpha}}=\frac{1}{\tau_{k}}\left(g^{\rm eq}_{ki}-g_{ki}\right)-F_{ki}, \tag{98}\]
where
\[\frac{1}{\tau_{k}}=\sum_{k^{\prime}\neq k}\frac{Y_{k^{\prime}}}{\tau_{kk^{ \prime}}}=r_{k}T\left(\sum_{k^{\prime}\neq k}\frac{X_{k^{\prime}}}{\mathcal{D }_{kk^{\prime}}}\right), \tag{99}\]
and
\[F_{ki}=Y_{k}\sum_{k^{\prime}\neq k}\frac{1}{\tau_{kk^{\prime}}}\left(g^{\rm eq }_{k^{\prime}i}-g^{*}_{k^{\prime}i}\right). \tag{100}\]
This form of the equation can then be integrated along characteristics to obtain the lattice Boltzmann equations. The model recovers the compressible mixture-averaged multi-species equations with the Maxwell-Stefan velocities for species diffusion. It has been successfully used for a variety of cases involving combustion applications with detailed chemistry, as illustrated in Fig. 4.
#### 3.1.3 Passive-scalar lattice Boltzmann models
So-called passive-scalar lattice Boltzmann solvers are models where only conservation of the zeroth-order moment of the distribution function is ensured by the collision operator. In such models, to solve an advection-diffusion-reaction partial differential equation for a field \(\Psi\), a distribution function \(g_{i}\) is defined such that:
\[\sum_{i=0}^{Q-1}g_{i}+\frac{\delta t}{2}S=\Psi, \tag{101}\]
where \(S\) is the source term. A classical non-homogeneous collision-streaming equation of the form:
\[g_{i}(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t)-g_{i}(\mathbf{x},t)=\frac{\delta t}{\bar{\tau}}\left[g_{i}^{\rm eq}(\Psi,\mathbf{u})-g_{i}(\mathbf{x},t)\right]+\left(1-\frac{\delta t}{2\bar{\tau}}\right)\mathcal{S}_{i}, \tag{102}\]
with
\[\sum_{i=0}^{Q-1}\mathcal{S}_{i}=S, \tag{103}\]
and
\[g_{i}^{\rm eq}(\Psi,\mathbf{u})=\frac{\Psi f_{i}^{\rm eq}(\Psi,\mathbf{u})}{\rho}, \tag{104}\]
leads to a macroscopic equation of the form:
\[\frac{\partial\Psi}{\partial t}+\frac{\partial\Psi u_{\alpha}}{\partial x_{ \alpha}}+\frac{\partial}{\partial x_{\alpha}}\left(\frac{1}{2}-\frac{\bar{ \tau}}{\delta t}\right)\left(\frac{\partial\Psi u_{\alpha}}{\partial t}+\frac {\partial\Pi_{\alpha\alpha}(g_{i}^{\rm eq})}{\partial x_{\alpha}}\right)=S. \tag{105}\]
Note that in the literature, Eq. (104) has been used with both first- and second-order polynomial expansions. Depending on the choice of the order of expansion, the diffusion term will admit errors of different forms.
Figure 4: Illustration of applications of multi-species model of [56; 74]. (Left) Upper asymmetric hydrogen flame in micro-channel. (Right) Thermo-diffusive instability in radially expanding hydrogen flame. Images reproduced from [74; 75].
For instance, for a linear equilibrium,
\[\frac{\partial\Psi u_{\alpha}}{\partial t}+\frac{\partial\Pi_{\alpha\alpha}(g_{i} ^{\rm eq})}{\partial x_{\alpha}}=c_{s}^{2}\frac{\partial\Psi}{\partial x_{ \alpha}}+\frac{\partial\Psi u_{\alpha}}{\partial t}. \tag{106}\]
On that note, let us now discuss the passive scalar approach in the specific context of the species mass and energy balance equations. Although a number of different works have presented modified passive scalar approaches to handle a non-linear dependence of the diffusion driving force on the zeroth-order moment, even in the context of multi-species flows (see for instance [76, 77]), here we will limit our discussion to models that have been used for combustion simulations.
_Energy balance equation._ The energy balance equation can be written in a variety of different ways, see [39]. Here we will only discuss the form involving temperature:
\[\frac{\partial T}{\partial t}+u_{\alpha}\frac{\partial T}{\partial x_{\alpha }}-\frac{1}{\rho\bar{c}_{p}}\frac{\partial}{\partial x_{\alpha}}\left(\lambda \frac{\partial T}{\partial x_{\alpha}}\right)+\frac{\dot{\omega}_{T}}{\rho \bar{c}_{p}}=0. \tag{107}\]
The classical approach to recover this balance equation is to set \(\Psi=T\), which leads to the following shortcomings:
* An error of the form \(T\partial u_{\alpha}/\partial x_{\alpha}\) in the convection term.
* An error of the form: \[-\frac{\lambda}{\rho\bar{c}_{p}}\frac{\partial T}{\partial x_{\alpha}}\frac {\partial\rho\bar{c}_{p}}{\partial x_{\alpha}},\] in the Fourier diffusion term.
* The enthalpy flux due to species mass diffusion is missing.
While one can overcome these issues via alternative forms of the equilibrium distribution function (see for instance [78]), the simplest way to circumvent them is to define the source term \(S\) in Eq. (105) as:
\[S=-\frac{\dot{\omega}_{T}}{\rho\bar{c}_{p}}+T\frac{\partial u_{\alpha}}{ \partial x_{\alpha}}-\frac{\lambda}{\rho\bar{c}_{p}}\frac{\partial T}{ \partial x_{\alpha}}\frac{\partial\rho\bar{c}_{p}}{\partial x_{\alpha}}-\sum_{ k=0}^{N_{x_{p}}-1}\frac{c_{pk}}{\bar{c}_{p}}Y_{k}V_{k\alpha}\frac{\partial T}{ \partial x_{\alpha}}. \tag{108}\]
Such an approach, among others, has been used in [79, 80]. Note that in [79, 80] the last term was not considered. A similar approach can be undertaken for the case where the enthalpy or energy balance equation is targeted.
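As an illustration, the sketch below assembles the source term of Eq. (108) on a 1-D periodic grid using central differences; the field names and the discretization are ours and purely illustrative.

```python
import numpy as np

# Minimal sketch: assembling the temperature source term of Eq. (108) on a 1-D
# periodic grid with central differences. All field names are illustrative.
def grad(a, dx):
    return (np.roll(a, -1) - np.roll(a, 1)) / (2 * dx)

def temperature_source(T, u, rho, cp_bar, lam, omega_T, cpk, Yk, Vk, dx):
    """T, u, rho, cp_bar, lam, omega_T: (nx,) fields; cpk, Yk, Vk: (N_sp, nx) arrays
    for species heat capacities, mass fractions and diffusion velocities."""
    dT = grad(T, dx)
    S = (-omega_T / (rho * cp_bar)                            # reaction term
         + T * grad(u, dx)                                    # dilatation correction
         - lam / (rho * cp_bar) * dT * grad(rho * cp_bar, dx) # Fourier-term correction
         - np.sum(cpk / cp_bar * Yk * Vk, axis=0) * dT)       # enthalpy flux by diffusion
    return S
```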
_Species mass balance equations._ For the sake of simplicity, let us assume that the species mass fraction balance equation is targeted. Taking the zeroth-order moment of the distribution function to be the mass fraction \(Y_{k}\), and neglecting for the time being leading-order errors in the multi-scale analysis, one would recover a diffusion term of the form:
\[-\frac{\partial}{\partial x_{\alpha}}\left[\left(\frac{1}{2}-\frac{\bar{\tau }}{\delta t}\right)\frac{\partial Y_{k}}{\partial x_{\alpha}}\right],\]
which, using \(\bar{\tau}/\delta t=D_{k}/c_{s}^{2}+1/2\), recovers the generalized Fick approximation. With that, the passive scalar approach is confronted with a number of issues:
* This form of diffusion is only valid in the limit of vanishing density changes as in the non-conservative form of the balance equation there is a factor \(1/\rho\) in front of the diffusion term.
* The form of the convection term as recovered in Eq. (105) admits an error of the form \(Y_{k}\partial u_{\alpha}/\partial x_{\alpha}\), which only vanishes for incompressible flows.
* It is well known that the generalized Fick approximation does not conserve overall mass, unless either \(N_{sp}=2\) or \(D_{k}=D\), \(\forall k\). To deal with that issue there are two approaches. If the mass fraction of one particular species is dominant everywhere (e.g. that of N\({}_{2}\) in combustion with air), the balance equation for that species is not explicitly solved and one sets \(Y_{\text{N}_{2}}=1-\sum_{k=0,k\neq\text{N}_{2}}^{N_{sp}-1}Y_{k}\). A more general approach, valid also for non-dilute mixtures, is to introduce a mass corrector.
* In cases where the driving force of the diffusive flux is a linear function of the variable for which the balance equation is solved, for instance the Fick approximation, the passive scalar model can be used as is. However, for models where the driving force of diffusive flux is a non-linear function that depends on variables other than the zeroth-order moment, for instance the Hirschfelder-Curtiss approximation, the passive scalar approach would lead to errors of the same order as the diffusive flux itself.
A number of different approaches have been proposed in the literature to account for these shortcomings, see for instance [76; 77]. One of the most straightforward approaches, as used in [79; 80], is to put all corrections into a source term. For instance, assuming one targets the mass fraction balance equation with the generalized Fick approximation, the source term \(S\) would be:
\[S=Y_{k}\frac{\partial u_{\alpha}}{\partial x_{\alpha}}+\frac{D_{k}}{\rho} \frac{\partial Y_{k}}{\partial x_{\alpha}}\frac{\partial\rho}{\partial x_{ \alpha}}. \tag{109}\]
Note that this approach as used in [79; 80] still falls short with respect to the mass corrector and the more appropriate diffusion velocity closure. A number of solutions to account for the mass corrector have been proposed in the literature, taking advantage of the non-equilibrium part of the distribution function, see for instance [76].
#### 3.1.4 Hybrid models: Finite difference and finite volume solvers for energy and species
Hybrid models are closest to multiple distribution function approaches. In multiple distribution function approaches, let us recall, \(\rho E\) (or an alternative form of the energy) and \(\rho Y_{k}\) correspond to the zeroth-order moments of separate distribution functions, see Eq. (46).
When the number of species to be considered becomes large, typically \(\mathcal{O}(10-100)\), the memory required to solve all scalars increases very quickly, which may become prohibitive for detailed chemistry descriptions. Hybrid models reduce the memory load by introducing a single scalar for each of \(\rho E\) and \(\rho Y_{k}\) instead of one distribution value per discrete velocity. Each additional conserved scalar \(\rho\phi\) (where \(\phi\) may represent \(E\) or \(Y_{k}\)) is solved by classical finite-difference or finite-volume (FD/FV) schemes, while the continuity (1) and momentum (2) equations are still solved via their associated distribution function \(f_{i}\).
Let us now list the main advantages of the hybrid method:
1. The memory footprint is reduced, as only 1 additional scalar needs to be stored for each energy or species equation, vs. 27 for a D3Q27 distribution.
2. They are by construction free of Prandtl, Schmidt or \(\gamma\) number limitations, since the energy/species resolution is tackled separately.
3. Since they use the same formalism as classical reactive flow solvers for energy and species equations, it is straightforward to take into account combustion-specific terms (turbulent combustion closures, advanced transport models, Soret effect or multi-component diffusion etc.), based on the experience accumulated over many decades using FD/FV solvers.
In turn, hybrid methods suffer from the following drawbacks:
1. Ensuring consistency between the LBM scheme (for continuity and momentum equations) and the FD/FV schemes is not straightforward (in contrast to, e.g., multi-speed or multiple distribution approaches). This can typically lead to disastrous spurious currents, as illustrated later in Fig. 11.
2. FD/FV schemes based only on nearest-neighbor stencils (as used in most LBM solvers) are typically much more dissipative than LBM schemes [81].
The first point is crucial in designing hybrid LBM schemes, and is therefore discussed at length hereafter. The impact of the second point is limited for most applications as long as the vortical and acoustic modes are left within the LBM part of the solver.
_Which form to use for species/energy equations in a hybrid LBM scheme?_ Energy and species equations may be written in a large variety of forms (based on total energy, internal energy, temperature,...). While these forms are indeed equivalent for a continuous formulation, their coupling in discrete form with the LBM scheme may be very different.
Let us recall that for small perturbations and neglecting all dissipation terms (reducing to the multi-component Euler equations), the system (1-4) may be linearized, and each perturbation can be decomposed into the so-called Kovasznay modes [82, 83] (acoustic mode, 3 components of the vorticity modes, entropy mode, and 1 per species).
For instance, the entropy mode of the Euler system follows the equation
\[\frac{\partial s}{\partial t}+u_{\alpha}\frac{\partial s}{\partial x_{\alpha }}=0,\]
and is only weakly coupled with the rest of the system.
For this reason, hybrid methods using an entropy equation were shown to provide reasonable results for moderately compressible flows [84, 85, 86] using several classical convective numerical schemes a priori unrelated to LBM:
* Second-order central difference schemes, potentially blended with upwind, [87, 88, 89, 90]
* Lax-Wendroff scheme [91, 92]
* MUSCL schemes [84, 85, 86]
* Heun scheme [93],
*...
Indeed, for reactive flows, the entropy equation is complex to derive in its general form. However, the enthalpy equation under non-conservative form
\[\rho\frac{\partial h}{\partial t}+\rho u_{i}\frac{\partial h}{\partial x_{i}}= \frac{dP}{dt}-\frac{\partial q_{i}}{\partial x_{i}}+\Pi_{ij}\frac{\partial u_{ i}}{\partial x_{j}} \tag{110}\]
is also a characteristic mode of the system, provided the pressure work \(\frac{dP}{dt}\) is neglected, a very common assumption for low-Mach reactive flows.
Species also directly follow characteristic equations, provided they are written under non-conservative form
\[\rho\frac{\partial Y_{k}}{\partial t}+\rho u_{i}\frac{\partial Y_{k}}{\partial x_{i}}=\frac{\partial}{\partial x_{i}}(\rho Y_{k}V_{k,i})+\dot{\omega}_{k}. \tag{111}\]
There is an alternative but not equivalent way of understanding how crucial this choice is. Consider the species equation in conservative form
\[\frac{\partial\rho Y_{k}}{\partial t}+\frac{\partial\rho u_{i}Y_{k}}{\partial x_{i}}=\rho\left(\frac{\partial Y_{k}}{\partial t}+u_{i}\frac{\partial Y_{k}}{\partial x_{i}}\right)+Y_{k}\left(\frac{\partial\rho}{\partial t}+\frac{\partial\rho u_{i}}{\partial x_{i}}\right). \tag{112}\]
It is clear that this equation is the sum of the non-conservative form (a characteristic of the system) and the continuity equation. Therefore, any numerical discrepancy between the continuity equation solved by the LBM and the one hidden in the conservative form leads to an inconsistency.
To summarize, provided the equations to be solved using FD/FV are only weakly coupled with the rest of the system, the resulting hybrid LBM solver has been shown to provide reasonable results for a large number of cases.
_Restoring the conservativity of hybrid LBM._ Equations (110,111) are equivalent to the initial total energy and species equations (3,4), but their discrete formulations are not. This has two disadvantages:
* Global energy conservation is not numerically enforced (while the LBM scheme is numerically mass and momentum preserving).
* Rankine-Hugoniot relationships are not satisfied across discontinuities.
The latter is clearly visible in Fig. 5, which presents a reference 2-D Riemann problem and the solution obtained with a hybrid LBM using a MUSCL-Hancock scheme to solve the entropy equation.
Figure 5: Two-dimensional Riemann problem of Lax & Liu [94]. From left to right: reference solution [94], solution obtained with the entropy equation (MUSCL-Hancock), solution obtained with the corresponding total energy equation scheme [92]. Shown are the density fields for configuration 3 at time \(t=0.3\).
Wissocq et al. [92] recently presented a method to construct a linearly equivalent total energy scheme from any FD/FV scheme that is linearly stable for hybrid LBM (e.g., for the entropy equation).
### Compressible continuity and momentum balance equations
All strategies presented above for the resolution of the additional energy and species equations coupled to an LBM solver require modifications to the LBM core. The major options are presented hereafter.
#### 3.2.1 Lattices with higher-order quadratures
_Standard approach with polynomial equilibria._ As discussed in the sections on isothermal models, the Maxwell-Boltzmann phase-space continuous equilibrium can be matched at the discrete level via a number of methods following the same general principle: matching the different moments of the continuous equilibrium with the discretized version. As such, the classical moment-matching method routinely used for Eulerian discrete velocity Boltzmann models and the truncated Hermite expansion approach both fall into that category. In the case of the former, once the number of constraints on moments and the number of degrees of freedom in the form of discrete velocities have been set, construction of the discrete equilibria boils down to solving the following linear system:
\[\mathbf{M}\mathbf{f}^{\mathbf{eq}}=\Pi^{\mathrm{MB}}, \tag{113}\]
where \(\Pi^{\rm MB}\) is a vector of size \(1\times Q\), \(Q\) being the number of constraints on moments, containing the moments of the continuous Maxwell-Boltzmann distribution corresponding to the targeted constraints:
\[\Pi^{\mathrm{MB}}_{n}=\int v_{x}^{p}v_{y}^{q}v_{z}^{r}f^{\mathrm{MB}}d\mathbf{v}, \tag{114}\]
with \(n=p+q+r\). The quantity \(\mathbf{f}^{\mathbf{eq}}\) is the vector of size \(1\times Q\) containing the discrete equilibria and \(\mathbf{M}\) is the transformation matrix from discrete equilibria to moments. For instance, in 1-D, for a solver targeting the Navier-Stokes-Fourier dynamics, a minimum of five discrete velocities is needed, as one must correctly recover moments of order zero to four. This approach to constructing a discrete solver, while quite flexible, has a number of shortcomings: the matrix \(\mathbf{M}\) is not necessarily invertible for any choice of moments and discrete velocities, as illustrated by the introduction of so-called _anti-symmetric_ discrete velocities in some higher-order discrete velocity Boltzmann models, see for instance [95]; and while the number of velocities is set by the constraints, the sizes and size-ratios of these velocities have no _a priori_ closures and are usually tuned via trial and error. A possible closure for the size of the discrete velocities would be to use the roots of the Hermite polynomial of corresponding order. The only issue with that choice is that above order three, Hermite roots do not guarantee space-filling lattices and therefore on-lattice propagation.
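A minimal sketch of the moment-matching system of Eqs. (113)-(114) is given below for an illustrative 1-D five-velocity set, targeting moments of order zero to four of the Maxwell-Boltzmann distribution (with \(r=1\) for simplicity).

```python
import numpy as np

# Minimal sketch of the moment-matching system of Eqs. (113)-(114) for the
# 1-D five-velocity set {0, +-1, +-3}, matching moments of order 0 to 4.
ci = np.array([0.0, 1.0, -1.0, 3.0, -3.0])
M = np.vander(ci, 5, increasing=True).T           # M[n, i] = c_i^n, n = 0..4

def mb_moments(rho, u, T):
    """Raw 1-D Maxwell-Boltzmann moments of order 0 to 4 (r = 1)."""
    return rho * np.array([1.0,
                           u,
                           u**2 + T,
                           u**3 + 3*u*T,
                           u**4 + 6*u**2*T + 3*T**2])

f_eq = np.linalg.solve(M, mb_moments(1.0, 0.05, 1.0/3.0))
print(f_eq, M @ f_eq)                              # discrete equilibria and matched moments
```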
Nevertheless, a large number of publications using larger discrete velocity sets are documented in the literature:
* A first group of these publications does not rely on Lagrangian approaches to discretize physical space and time, but uses Eulerian approaches such as finite differences or finite volumes to discretize the coupled system of hyperbolic equations of the discrete velocity Boltzmann model. In doing so, the sizes of the discrete velocities can be freely tuned to stabilize the simulation for a specific test-case.
* Another group of publications uses a Lagrangian approach to discretize physical space and time and overcomes the issue of non-space-filling lattices by supplementing the collision-propagation step with an interpolation step to bring the post-streaming populations back onto the lattice. These approaches are sometimes referred to as _semi-Lagrangian_, see for instance [96, 97].
* Another category of publications, relying on the classical on-lattice method, proposes to stabilize multi-speed lattice Boltzmann solvers for compressible flows through different collision models, such as multiple-relaxation-time or regularized collision operators, see for instance [98, 99].
All of the previously-listed models have had limited success in modeling generic high-speed compressible flows with large temperature variations in the domain. A number of alternatives have been proposed since then to considerably widen the stability domain of multi-speed lattice Boltzmann solvers. They will be discussed next.
_Extension of stability domain: Entropic equilibria._ The entropic construction of the discrete equilibrium state introduced for isothermal models can be reformulated in a more general form as a minimization problem subject to \(M\) constraints:
\[\delta H+\delta(\sum_{m=0}^{M-1}\lambda_{m}\Pi_{m})=0. \tag{115}\]
The formal solution of this constrained minimization leads to a function of the following form:
\[f_{i}^{\rm eq}=\rho w_{i}\exp\left\{\sum_{m=0}^{M-1}\lambda_{m}\frac{\partial\Pi_{m}}{\partial f_{i}}\right\}. \tag{116}\]
Note that other forms of the minimizer, without the weights \(w_{i}\), have also been proposed and used in the literature [100], most notably for entropic Grad moment methods [100, 101]. For instance, a model imposing only constraints on the collisional invariants, i.e.
\[\sum_{i=0}^{Q-1}f_{i}^{\rm eq} =\rho, \tag{117a}\] \[\sum_{i=0}^{Q-1}c_{i\alpha}f_{i}^{\rm eq} =\rho u_{\alpha}, \tag{117b}\] \[\sum_{i=0}^{Q-1}\mathbf{c}_{i}^{2}f_{i}^{\rm eq} =\rho\left(\mathbf{u}^{2}+DrT\right), \tag{117c}\]
would lead to the following discrete equilibrium [102]:
\[f_{i}^{\rm eq}=\rho w_{i}\exp\bigl{\{}\left[\lambda_{0}+\lambda_{\alpha}c_{i \alpha}+\lambda_{2}\mathbf{c}_{i}^{2}\right]\bigr{\}}. \tag{118}\]
It is interesting to note that while, for the most part, the construction of entropic equilibria has been done by enforcing constraints on the collisional invariants, one may reduce the error on higher-order moments by adding the corresponding constraints in Eq. (115). This is sometimes referred to as _guiding_ the equilibrium, and the corresponding discrete equilibria are referred to as _guided equilibria_ [103, 104]. In the context of the lattice Boltzmann method, this extension of constraints was discussed for the first time in [105] through the concept of auxiliary and target equilibria. There, auxiliary equilibria were constructed by enforcing constraints on the collisional invariants, and target equilibria, a combination of auxiliary equilibria and additional degrees of freedom, by enforcing constraints on higher-order moments.
Once the form of the equilibrium distribution function has been determined, its construction consists of finding the expressions of the different Lagrange multipliers. This is done by introducing the discrete equilibrium back into the set of constraints, which leads to a system of \(M\) equations with \(M\) unknowns, i.e. the Lagrange multipliers, to be determined. While an analytical expression was derived for the isothermal case with \(D+1\) constraints, for larger systems no such solutions exist. In the absence of a closed-form solution, one can use numerical methods such as Newton iterations to find the Lagrange multipliers at every grid-point and every time-step [106].
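As an illustration, the sketch below performs such a Newton iteration for the Lagrange multipliers of Eq. (118) on the D1Q3 lattice, with constraints on the collisional invariants only; it is a simplified example under our own assumptions, not the production algorithm of [106].

```python
import numpy as np

# Minimal sketch: Newton iteration for the Lagrange multipliers of the entropic
# equilibrium of Eq. (118) on D1Q3, enforcing the constraints of Eq. (117) only.
ci = np.array([-1.0, 0.0, 1.0])
wi = np.array([1/6, 2/3, 1/6])
P = np.vstack([np.ones(3), ci, ci**2])            # constraint polynomials 1, c_i, c_i^2

def entropic_equilibrium(rho, u, T, tol=1e-12, max_iter=50):
    target = rho * np.array([1.0, u, u**2 + T])   # collisional invariants (D = r = 1)
    lam = np.zeros(3)                             # Lagrange multipliers lambda_0,1,2
    for _ in range(max_iter):
        f = rho * wi * np.exp(P.T @ lam)          # candidate equilibrium, Eq. (118)
        R = P @ f - target                        # residual of the constraints
        if np.max(np.abs(R)) < tol:
            break
        J = (P * f) @ P.T                         # Jacobian of the constraints wrt lambda
        lam -= np.linalg.solve(J, R)              # Newton update
    return f

print(entropic_equilibrium(1.0, 0.05, 1/3))
```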
As shown in previous sections, one systematic approach to choosing an optimal set of discrete velocities is to rely on the Gauss-Hermite quadrature and the roots of Hermite polynomials. However, apart from the third-order quadrature leading to the DdQ3\({}^{d}\) lattices, all higher-order quadratures result in off-lattice propagation of some of the discrete distribution functions. In [107], starting from a set of discrete velocities, the authors proposed an approach to find a reference temperature and corresponding weights. This is achieved through the _closure relation_ and _matching_ conditions. For a set of discrete velocities \(\mathcal{V}\) with \(Q\) vectors \(c_{i}\), the \(Q^{\text{th}}\) power of \(c_{i}\) can be written as a linear combination of lower-order odd powers from \(Q-2\) down to \(1\), i.e.
\[c_{i}^{Q}=a_{Q-2}c_{i}^{Q-2}+a_{Q-4}c_{i}^{Q-4}+\cdots+a_{1}c_{i}. \tag{119}\]
For instance, in the case of the D1Q3 lattice one has \(c_{i}^{3}=c_{i}\). This essentially means that the moment of order \(Q\) is not an independent moment and cannot be set at will. The only possibility is to set the linear-in-\(u\) term of the \(Q^{\text{th}}\)-order moment to its Maxwell-Boltzmann counterpart and, in doing so, determine the reference temperature; this is referred to as the _matching_ condition. Consider for instance the D1Q3 lattice again. The third-order moment is going to be \(u_{x}\), while the Maxwell-Boltzmann distribution leads to \(u_{x}^{3}+3T_{0}u_{x}\). To match the linear term one must have \(3T_{0}=1\). Note that not every choice of lattice admits a reference temperature. For example, the velocity set \(\mathcal{V}=\{-2,-1,0,+1,+2\}\) leads to a closure relation of the form \(c_{i}^{5}=5c_{i}^{3}-4c_{i}\) and a matching condition, \(15T_{0}^{2}-15T_{0}+4=0\), which does not admit any real solutions. This explains why the shortest admissible five-velocity lattice is \(\mathcal{V}=\{-3,-1,0,+1,+3\}\) with \(T_{0}=1\pm\sqrt{2/5}\). Once the reference temperature is determined, the weights are readily found by matching the moments of the discrete equilibrium at \(\rho=1\) and \(u_{x}=0\) to their Maxwell-Boltzmann counterparts. Considering the condition of positivity of the weights, one also finds the range of temperatures that can be covered by the chosen system of discrete velocities. The closure relations and reference temperatures of a number of 1-D lattices are summarized in Table 1.
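The closure relation and matching condition can be automated for any symmetric 1-D velocity set; the sketch below (our own illustrative code) builds the closure of Eq. (119) from the polynomial whose roots are the discrete velocities and solves the resulting matching condition for \(T_{0}\), reproducing, for instance, the D1Q3 and D1Q5 entries of Table 1.

```python
import numpy as np
from numpy.polynomial import polynomial as Poly

# Minimal sketch: closure relation, Eq. (119), and matching condition for a
# symmetric 1-D velocity set. The Q-th power of c_i follows from the polynomial
# whose roots are exactly the discrete velocities; T_0 then matches the
# linear-in-u term of the Q-th order Maxwell-Boltzmann moment.
def reference_temperature(velocities):
    V = np.asarray(velocities, dtype=float)
    Q = len(V)
    # Monic polynomial prod_i (c - c_i) = c^Q - sum_n a_n c^n  =>  c_i^Q = sum_n a_n c_i^n
    monic = Poly.polyfromroots(V)                  # coefficients, lowest order first
    a = -monic[:-1]                                # closure coefficients a_0 .. a_{Q-1}
    def dfact(n):                                  # double factorial n!!
        return 1 if n <= 1 else n * dfact(n - 2)
    m_max = (Q - 1) // 2
    # Matching condition: Q!! T^m_max = sum over odd orders n = 2m+1 of a_n n!! T^m
    match = np.zeros(m_max + 1)
    match[m_max] = dfact(Q)
    for m in range(m_max):
        match[m] -= a[2*m + 1] * dfact(2*m + 1)
    roots = np.roots(match[::-1])                  # polynomial in T, highest order first
    return roots[np.isreal(roots)].real

print(reference_temperature([0, 1, -1]))           # -> [0.3333...]
print(reference_temperature([0, 1, -1, 3, -3]))    # -> 1 +- sqrt(2/5)
print(reference_temperature([0, 1, -1, 2, -2]))    # -> [] (no admissible T_0)
```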
One successful example of such lattices is the D2Q49 shown in Fig. 6. The closure relation for the 1-D set is
\[c_{i}^{7}=14c_{i}^{5}-49c_{i}^{3}+36c_{i}. \tag{120}\]
The 1-D weights read:
\[w_{0} =\frac{36-49T+42T^{2}-15T^{3}}{36}, \tag{121a}\] \[w_{\pm 1} =\frac{T(12-13T+5T^{2})}{16},\] (121b) \[w_{\pm 2} =\frac{T(-3+10T-5T^{2})}{40},\] (121c) \[w_{\pm 3} =\frac{T(4-15T+15T^{2})}{720}, \tag{121d}\]
which lead to \(T_{\min}=1-\sqrt{2/5}\) and \(T_{\max}=1+\sqrt{2/5}\). Note that the range of accessible temperatures can be further extended by changing the ratio of the largest and shortest discrete velocities, here \(\pm 3\) and \(\pm 1\). In [106] the author also proposed pruning strategies to reduce the number of discrete velocities in 2-D and 3-D, leading to the D3Q39 lattice, which reduces the number of discrete velocities by one order of magnitude compared to the tensor product of the D1Q7, i.e. the D3Q343.
_Adaptive reference frame models._ As observed for both isothermal and compressible models, errors in higher-order moments scale with the deviations of the local temperature and velocity from the lattice reference temperature and velocity. For all symmetric lattices considered up to this point, the lattice reference velocity is \(U=0\). In [108] the authors proposed to challenge the idea of a reference frame at rest by introducing a non-zero shift \(U\). It was noted that the discrete entropy functional is uniquely defined by the weights \(w_{i}\). The weights of a lattice with \(Q\) discrete velocities, as shown in the previous section, are determined by
\begin{table}
\begin{tabular}{l|l|l|l}
\(Q\) & \(V\) & Closure & \(T_{0}\) \\ \hline
3 & \(\{0,\pm 1\}\) & \(c_{i}^{3}=c_{i}\) & \(1/3\) \\ \hline
5 & \(\{0,\pm 1,\pm 3\}\) & \(c_{i}^{5}=10c_{i}^{3}-9c_{i}\) & \(1\pm\sqrt{2/5}\) \\ \hline
7 & \(\{0,\pm 1,\pm 2,\pm 3\}\) & \(c_{i}^{7}=14c_{i}^{5}-49c_{i}^{3}+36c_{i}\) & \(0.697953\) \\ \hline
9 & \(\{0,\pm 1,\pm 2,\pm 3,\pm 5\}\) & \(c_{i}^{9}=39c_{i}^{7}-399c_{i}^{5}+1261c_{i}^{3}-900c_{i}\) & \(0.756081\), \(2.175382\) \\ \hline
11 & \(\{0,\pm 1,\pm 2,\pm 3,\pm 4,\pm 5\}\) & \(c_{i}^{11}=55c_{i}^{9}-1023c_{i}^{7}+7645c_{i}^{5}-21076c_{i}^{3}+14400c_{i}\) & \(1.062794\) \\
\end{tabular}
\end{table}
Table 1: One-dimensional Maxwell lattices with an odd number of integer-valued velocities, \(Q=3,5,7,9,11\). Second column: lattice vectors; third column: closure relation, which defines the reference temperature \(T_{0}\) (fourth column) through the matching condition.
Figure 6: Illustration of the D2Q49 lattice.
matching the first \(Q\) moments of the Maxwell-Boltzmann equilibrium distribution function at temperature \(T\) and \(u_{x}=0\):
\[\sum_{i=0}^{Q-1}\phi(c_{i})w_{i}(0,rT)=\int\phi(v)f^{\text{MB}}(0,rT)dv. \tag{122}\]
It was shown through the Galilean invariance of the moments of the Maxwell-Boltzmann distribution function and the binomial theorem that the weights are also Galilean-invariant and therefore untouched by the change of reference frame. The immediate consequences of that observation are: (a) construction of 3-D lattices via tensorial products of the 1-D lattice remains as before, (b) assuming \(U=k\delta x/\delta t\) with \(k\in\mathbb{Z}\), the propagation remains on-lattice and (c) the discrete entropy functional is Galilean invariant and therefore the equilibrium populations are form-invariant under the shift of reference frame. This point, along with the effect of the shift on the operation range, has also been discussed for standard isothermal lattices [109]. The process of changing the reference frame and the resulting discrete lattice is illustrated in Fig. 7 through the D2Q49 lattice. The use of the shifted lattice along with the entropic equilibrium has been successful in modeling a wide variety of high Mach number flows, as illustrated in Fig. 8. The idea of shifted reference frames was later generalized to a local adaptive reference velocity and temperature through the particles on demand (PonD) method [28].
Figure 8: Drag coefficient \(c_{d}\) as a function of the free stream Mach number for the Busemann biplane simulations. Inset: snapshots of the pressure distribution around the biplane for three different Mach numbers: \(\text{Ma}=1.5\), top; \(\text{Ma}=1.7\), bottom left; \(\text{Ma}=2.0\), bottom right. Figure reproduced from [102].
Figure 7: Illustration of the D2Q49 lattice with a shift of \(U_{x}=\delta x/\delta t\).
In this approach the collision-streaming operation is performed in a reference frame corresponding to the local velocity and temperature. This allows one to minimize the higher-order moment deviations of the discrete equilibrium from the Maxwell-Boltzmann distribution function and, in doing so, allows for arbitrarily large variations in speed and temperature. The particles on demand method has been used to model high Mach number cases in recent years [110]. It is also currently used in combination with a Lee-Tarver reaction model to simulate detonation at high Mach numbers, see [111].
#### 3.2.2 Standard lattice density-based solvers
_Coupling to temperature field._ As discussed in the first section, the original lattice Boltzmann method targeted the incompressible limit, i.e. the low-Mach asymptotic behavior of the Navier-Stokes equations. To that end, the temperature appearing in the equilibrium distribution function was that of a reference state guaranteeing validity of the low Mach assumption. The compressible Navier-Stokes equations can be recovered by replacing the reference temperature with the local fluid temperature obtained from the second distribution function or from the FD/FV solver used for the energy balance; considering for instance Eq. (28), it changes into:
\[\zeta_{\alpha\alpha}=c_{s}^{2}\theta+u_{\alpha}^{2}, \tag{123}\]
where \(\theta=\bar{r}T/\bar{r}_{0}T_{0}\). Introducing this term allows for a correct recovery of Euler level pressure while setting the relaxation time to:
\[\bar{\tau}=\frac{\nu}{c_{s}^{2}\theta}+\frac{\delta t}{2}, \tag{124}\]
Figure 9: Mach reflection and regular reflection created from the interaction of incident detonation waves for adiabatic exponent (top) \(\gamma=1.4\), or (bottom) \(\gamma=5/3\). Images reproduced from [111].
allows for correct recovery of the Navier-Stokes level viscous stress coefficient. With the temperature now an independent space- and time-varying parameter, it will inevitably deviate from the reference temperature, i.e. \(\theta=1\), which is the optimal operating temperature of the third-order quadrature-based lattice Boltzmann model. Deviation from the reference temperature comes with a number of difficulties. The first one is the reduced domain of stability, best illustrated by the linear stability domain shown in Fig. 10.
The second is that to properly recover the full Navier-Stokes viscous stress tensor a number of additional considerations have to be taken into account. These are discussed in the next paragraph.
_Galilean invariance of third-order moments and corrections._ As discussed for the isothermal lattice Boltzmann method in previous sections, a simple CE analysis shows that, at the NS level, moments of orders two and three of the EDF must be correctly recovered. Diagonal components of the third-order moment tensor, i.e. moments of the form \(\Pi_{\alpha\alpha\alpha}\), cannot be correctly recovered due to the \(c_{i\alpha}^{3}=c_{i\alpha}\) bias of the third-order quadrature-based lattice. While the continuous Maxwell-Boltzmann equilibrium distribution leads to:
\[\Pi_{\alpha\alpha\alpha}^{\rm MB}=\rho u_{\alpha}^{3}+3\rho c_{s}^{2}u_{\alpha }\theta, \tag{125}\]
any of the discrete equilibrium distribution functions discussed here recovers:
\[\Pi_{\alpha\alpha\alpha}^{\rm eq}=3\rho c_{s}^{2}u_{\alpha}, \tag{126}\]
which for \(\theta=1\) introduces a cubic-in-velocity error and for \(\theta\neq 1\) an additional linear one. As such, the issue of Galilean variance of the third-order moments becomes quite critical in the case of compressible flows where \(\theta\neq\rm const\). To account for this error, corrections in the form of source terms in the kinetic equation are introduced:
\[\partial_{t}f_{i}+c_{i\alpha}\frac{\partial f_{i}}{\partial x_{\alpha}}=\frac {1}{\tau}\left(f_{i}^{\rm eq}-f_{i}\right)+\Psi_{i}. \tag{127}\]
Figure 10: Linear stability domain of lattice Boltzmann at different non-dimensional temperatures and viscosities as obtained from von Neumann analysis. Reproduced from [112].
The form of the source term, \(\Psi_{i}\), can be derived through the order-two-in-\(\epsilon\) (NS level) momentum balance equation:
\[\frac{\partial^{(2)}\rho u_{\alpha}}{\partial t}+\frac{\partial}{\partial x_{ \beta}}\tau\left(\frac{\partial^{(1)}\Pi^{\rm eq}_{\alpha\beta}}{\partial t}+ \frac{\partial\Pi^{\rm eq}_{\alpha\beta\gamma}}{\partial x_{\gamma}}\right) +\frac{\partial}{\partial x_{\beta}}\tau\left(\sum_{i=0}^{Q-1}c_{i\alpha}c_{i \beta}\Psi_{i}^{(1)}\right)=0, \tag{128}\]
leading to:
\[\Psi_{i}^{(1)}=\frac{w_{i}}{2c_{s}^{4}}\mathcal{H}_{\alpha\alpha}(\mathbf{c}_{i})\frac{\partial}{\partial x_{\alpha}}\delta\Pi^{\rm eq}_{\alpha\alpha\alpha}, \tag{129}\]
where
\[\delta\Pi^{\rm eq}_{\alpha\alpha\alpha}=\rho u_{\alpha}\left[u_{\alpha}^{2}+3 c_{s}^{2}\left(\theta-1\right)\right]. \tag{130}\]
For the stress tensor to be correctly recovered at this scale one must have:
\[\Psi_{i}=\frac{w_{i}}{2c_{s}^{4}}\partial_{\alpha}\mathcal{H}_{i,\beta\gamma }\delta\Pi^{\rm eq}_{\alpha\beta\gamma}. \tag{131}\]
Note that to get this expression the correction term was assumed to involve first-order derivatives via the expansion \(\Psi_{i}=\Psi_{i}^{(1)}\).
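To make the aliasing defect of Eq. (126) and the resulting deviation (130) concrete, the short NumPy check below evaluates the third-order moment of a third-order Hermite equilibrium on a one-dimensional D1Q3 lattice (an assumed, minimal choice; the values of \(\rho\), \(u\) and \(\theta\) are arbitrary test inputs). It is only an illustration of the statements above, not part of any production solver.

```python
import numpy as np

# Assumed minimal lattice: D1Q3 with c_i in {-1, 0, +1}, w_i = {1/6, 2/3, 1/6}, c_s^2 = 1/3.
c = np.array([-1.0, 0.0, 1.0])
w = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])
cs2 = 1.0 / 3.0

def f_eq(rho, u, theta):
    """Discrete equilibrium expanded up to third order in Hermite polynomials."""
    a1 = rho * u
    a2 = rho * (u**2 + cs2 * (theta - 1.0))
    a3 = rho * (u**3 + 3.0 * cs2 * u * (theta - 1.0))
    H1, H2, H3 = c, c**2 - cs2, c**3 - 3.0 * cs2 * c
    return w * (rho + H1 * a1 / cs2 + H2 * a2 / (2.0 * cs2**2) + H3 * a3 / (6.0 * cs2**3))

rho, u, theta = 1.2, 0.15, 1.3          # arbitrary test state
feq = f_eq(rho, u, theta)

pi3_lattice = np.sum(c**3 * feq)                              # what the lattice recovers, Eq. (126)
pi3_mb      = rho * u**3 + 3.0 * rho * cs2 * u * theta        # Maxwell-Boltzmann target, Eq. (125)
delta_pi3   = rho * u * (u**2 + 3.0 * cs2 * (theta - 1.0))    # deviation of Eq. (130)

print(pi3_lattice, 3.0 * rho * cs2 * u)    # identical: the c_i^3 = c_i aliasing
print(pi3_mb - pi3_lattice, delta_pi3)     # the mismatch reproduces Eq. (130)
```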
A different form of the correction term can be obtained with a different expansion, i.e. \(\Psi_{i}^{\prime}=\Psi_{i}^{(2)}\). Such an expansion would lead to the following NS-level equation:
\[\frac{\partial^{(2)}\rho u_{\alpha}}{\partial t}+\frac{\partial}{\partial x_{ \beta}}\tau\left(\frac{\partial^{(1)}\Pi^{\rm eq}_{\alpha\beta}}{\partial t}+ \frac{\partial\Pi^{\rm eq}_{\alpha\beta\gamma}}{\partial x_{\gamma}}\right) -\sum_{i=0}^{Q-1}c_{i\alpha}{\Psi^{{}^{\prime}}_{i}}^{(2)}=0, \tag{132}\]
and a correction term of the form:
\[\Psi_{i}^{{}^{\prime}}=\frac{w_{i}}{c_{s}^{2}}c_{i\alpha}\frac{\partial}{ \partial x_{\alpha}}\left(\frac{\mu}{p}\frac{\partial\delta\Pi^{\rm eq}_{ \alpha\alpha\alpha}}{\partial x_{\alpha}}\right). \tag{133}\]
The above-listed corrections were derived for the discrete kinetic equations. The classical lattice Boltzmann approach to space/time discretization would lead to the following redefined discrete distribution function:
\[\bar{f}_{i}=f_{i}-\frac{\delta t}{2}\Omega_{i}-\frac{\delta t}{2}\Psi_{i}, \tag{134}\]
which in turn would lead to the following final algebraic system:
\[\bar{f}_{i}\left(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t\right)-\bar{f}_{i}\left( \mathbf{x},t\right)=\frac{\delta t}{\bar{\tau}}\left(f_{i}^{\rm eq}\left(\mathbf{x},t \right)-\bar{f}_{i}\left(\mathbf{x},t\right)\right)+\left(1-\frac{\delta t}{2\bar {\tau}}\right)\Psi_{i}. \tag{135}\]
This consistent derivation of the _extended_ LBM holds for any realization of the correction term, whether it is introduced simply as a Hermite-expanded term [52] or by using the extended equilibrium approach [27].
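A minimal sketch of how Eq. (135) translates into an actual time loop is given below, assuming a D1Q3 lattice, periodic boundaries and a frozen temperature field \(\theta(x)\) supplied externally (in a real solver it would come from the energy equation). The time step is absorbed into the correction term and the projected form (129) of \(\Psi_i\) is used; both are assumptions of this illustration rather than a faithful reproduction of any published implementation.

```python
import numpy as np

nx, dt, dx, nu = 64, 1.0, 1.0, 0.01
c   = np.array([-1.0, 0.0, 1.0])                    # assumed D1Q3 lattice
w   = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])
cs2 = 1.0 / 3.0
x     = np.arange(nx) * dx
theta = 1.0 + 0.05 * np.sin(2.0 * np.pi * x / nx)   # frozen temperature field (assumption)
rho   = np.ones(nx)
u     = 0.05 * np.sin(2.0 * np.pi * x / nx)

def f_eq(rho, u, theta):
    a1, a2 = rho * u, rho * (u**2 + cs2 * (theta - 1.0))
    a3 = rho * (u**3 + 3.0 * cs2 * u * (theta - 1.0))
    H1, H2, H3 = c[:, None], c[:, None]**2 - cs2, c[:, None]**3 - 3.0 * cs2 * c[:, None]
    return w[:, None] * (rho + H1 * a1 / cs2 + H2 * a2 / (2 * cs2**2) + H3 * a3 / (6 * cs2**3))

fbar = f_eq(rho, u, theta)                          # start from equilibrium
for _ in range(100):
    # Psi and the BGK collision carry no mass or momentum, so moments follow from fbar directly
    rho = fbar.sum(axis=0)
    u   = (c[:, None] * fbar).sum(axis=0) / rho
    tau = nu / (cs2 * theta) + dt / 2.0                          # Eq. (124)
    dPi = rho * u * (u**2 + 3.0 * cs2 * (theta - 1.0))           # Eq. (130)
    psi = w[:, None] * (c[:, None]**2 - cs2) / (2 * cs2**2) \
          * np.gradient(dPi, dx) * dt                            # Eq. (129), dt absorbed (assumption)
    fcol = fbar + dt / tau * (f_eq(rho, u, theta) - fbar) \
           + (1.0 - dt / (2.0 * tau)) * psi                      # Eq. (135)
    for i, ci in enumerate(c):                                   # periodic streaming
        fbar[i] = np.roll(fcol[i], int(ci))
```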
#### 3.2.3 Pressure-based solvers
While the density-based model detailed in the previous section was successfully used for a number of applications, see [88; 113], it was observed that it led to spurious currents near curved flame interfaces, see Fig. 11 for a circular flame. A detailed study of the numerical properties of the model showed that this is a result of a non-physical coupling between the entropy and vorticity modes, see [114]. A pressure-based formulation
was then proposed [84] to cure the problem, the detailed reason only being understood later [85].
The pressure-based algorithm is presented hereafter.
**Step 1**: Calculation of the \(p\)-based equilibrium distribution \(f_{i}^{p,eq}\) from \((t,\mathbf{x})\) moments :
\[f_{i}^{p,eq}(t,\mathbf{x})=\omega_{i}\Big{\{}\mathcal{H}^{(0)}\rho\theta+\frac{ \mathcal{H}_{i\alpha}^{(1)}}{c_{s}^{2}}\rho u_{\alpha}+\frac{\mathcal{H}_{i \alpha\beta}^{(2)}}{2c_{s}^{4}}[\rho u_{\alpha}u_{\beta}]+\frac{\mathcal{H}_{ i\alpha\beta\gamma}^{(3)}}{6c_{s}^{6}}[\rho u_{\alpha}u_{\beta}u_{\gamma}] \Big{\}}(t,\mathbf{x})\,. \tag{136}\]
**Step 2**: Off-equilibrium population reconstruction \(\overline{f}_{i}^{neq}(t,\mathbf{x})\) from moments \(\left[\rho,\rho u_{\alpha},\Pi_{\alpha\beta}^{neq}\right](t,\mathbf{x})\), using a collision model, e.g. projected [115] or recursive [98; 99] regularization.
**Step 3**: Collision and streaming,
\[f_{i}^{p,col}(t,\mathbf{x})=f_{i}^{p,eq}(t,\mathbf{x})+\left(1-\frac{ \delta t}{\overline{\tau}}\right)\overline{f}_{i}^{neq}(t,\mathbf{x})\,, \tag{137}\] \[\overline{f}_{i}^{p}(t+\delta t,\mathbf{x})=f_{i}^{p,col}(t,\mathbf{x}- \mathbf{c}_{i}\delta t)\,. \tag{138}\]
**Step 4**: macroscopic reconstruction
\[\rho(t+\delta t,\mathbf{x})=\sum_{i}\overline{f}_{i}^{p}(t+\delta t, \mathbf{x})+\rho(t,\mathbf{x})[1-\theta(t,\mathbf{x})]. \tag{139}\] \[\rho u_{\alpha}(t+\delta t,\mathbf{x})=\sum_{i}c_{i\alpha}\overline{ f}_{i}^{p}(t+\delta t,\mathbf{x})\,,\] (140) \[\Pi_{\alpha\beta}^{\overline{f}^{neq}}(t+\delta t,\mathbf{x})=\sum_{i }c_{i\alpha}c_{i\beta}\left[\overline{f}_{i}^{p}-f_{i}^{p,eq}\right](t+\delta t,\mathbf{x})\,, \tag{141}\]
**Step 5**: Update of the energy variable (hybrid or DDF method) [116; 117; 118; 84]. From this additional step, \(\theta(t+\delta t,\mathbf{x})\) is now updated.
Figure 11: Streamlines of the 2-D circular flame simulation colored by velocity magnitude (in m/s). Left column: density-based model [88], right column: pressure-based model [87]. Note the very different ranges of velocity magnitude from the two methods. The yellow contour is the heat release rate peak indicating the flame front.
Note the differences compared to the density-based model presented in the previous section:
* The zeroth moment of \(f_{i}\) is now \(p=\rho\theta\) instead of \(\rho\).
* The macroscopic reconstruction of Step 4 now includes a correction \(\rho(t+\delta t,\mathbf{x})=\sum_{i}\overline{f}_{i}^{p}(t+\delta t,\mathbf{x})+\rho(t, \mathbf{x})[1-\theta(t,\mathbf{x})]\) accounting for dilatation.
This second point was presented by the authors as a predictor-corrector procedure, close to early artificial compressibility methods [119, 120]. It is important to note, however, that despite being pressure-based, the algorithm is globally mass conserving, just like density-based methods.
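The listing below sketches what Steps 1-4 (Eqs. 136-141) look like when put together, for an assumed D1Q3 lattice with periodic streaming and a simple projected regularization for Step 2; the temperature fields \(\theta(t)\) and \(\theta(t+\delta t)\) are taken as given by the energy solver. It is a schematic reading of the algorithm, not the production implementation of [84].

```python
import numpy as np

nx, dt, dx, tau_bar = 64, 1.0, 1.0, 0.6
c, w, cs2 = np.array([-1.0, 0.0, 1.0]), np.array([1/6, 2/3, 1/6]), 1.0 / 3.0

def f_p_eq(rho, u, theta):
    """p-based equilibrium, Eq. (136): the zeroth Hermite coefficient is rho*theta, not rho."""
    H1, H2, H3 = c[:, None], c[:, None]**2 - cs2, c[:, None]**3 - 3.0 * cs2 * c[:, None]
    return w[:, None] * (rho * theta + H1 * rho * u / cs2
                         + H2 * rho * u**2 / (2 * cs2**2) + H3 * rho * u**3 / (6 * cs2**3))

def step(rho, u, pi_neq, theta, theta_new):
    feq = f_p_eq(rho, u, theta)                                        # Step 1
    fneq = w[:, None] * (c[:, None]**2 - cs2) / (2 * cs2**2) * pi_neq  # Step 2 (projected reg., assumption)
    fcol = feq + (1.0 - dt / tau_bar) * fneq                           # Step 3, Eq. (137)
    fbar = np.empty_like(fcol)
    for i, ci in enumerate(c):
        fbar[i] = np.roll(fcol[i], int(ci))                            # Eq. (138), periodic
    rho_new = fbar.sum(axis=0) + rho * (1.0 - theta)                   # Step 4, Eq. (139): dilatation correction
    u_new   = (c[:, None] * fbar).sum(axis=0) / rho_new                # Eq. (140)
    pi_neq_new = (c[:, None]**2
                  * (fbar - f_p_eq(rho_new, u_new, theta_new))).sum(axis=0)  # Eq. (141)
    return rho_new, u_new, pi_neq_new

x = np.arange(nx) * dx
rho, u = np.ones(nx), 0.03 * np.sin(2 * np.pi * x / nx)
theta, pi_neq = 1.0 + 0.05 * np.cos(2 * np.pi * x / nx), np.zeros(nx)
rho, u, pi_neq = step(rho, u, pi_neq, theta, theta)   # one step with a frozen temperature field
```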
**Link between pressure-based and density-based formulations.** Since many mesh transition algorithms, boundary conditions, etc. were initially developed for density-based algorithms [121], there is an interest in establishing a rigorous link between the pressure-based and density-based algorithms.
This can be obtained by noting that the correction \(\rho(t,\mathbf{x})[1-\theta(t,\mathbf{x})]\) in the macroscopic reconstruction can be equivalently embedded directly in the \(f_{0}\) term corresponding to the stationary discrete velocity by introducing the density-based function:
\[\overline{f}_{i}^{\rho}(t+\delta t,\mathbf{x})=\overline{f}_{i}^{p}(t+\delta t,\mathbf{x})+\delta_{0i}\,\rho(t,\mathbf{x})[1-\theta(t,\mathbf{x})], \tag{142}\]
where \(\delta_{0i}\) is the Kronecker symbol. By projecting \(\delta_{0i}\) on a Hermite polynomial basis, it was further shown [85] that this change is equivalent to adding to the classical \(\overline{f}_{i}^{\rho}\) a fourth order contribution, leading to an equilibrium function of the generic form
\[f_{i}^{eq}=\omega_{i}\Big{\{} \mathcal{H}^{(0)}\rho+\frac{\mathcal{H}_{i\alpha}^{(1)}}{c_{s}^{ 2}}\rho u_{\alpha}+\frac{\mathcal{H}_{i\alpha\beta}^{(2)}}{2c_{s}^{4}}\left[ \rho u_{\alpha}u_{\beta}+\delta_{\alpha\beta}\rho c_{s}^{2}(\theta-1)\right]+ \frac{\mathcal{H}_{i\alpha\beta\gamma}^{(3)}}{6c_{s}^{6}}\Big{[}\rho u_{\alpha }u_{\beta}u_{\gamma}\] \[-\kappa\rho c_{s}^{2}\left(u_{\alpha}\delta_{\beta\gamma}+u_{ \beta}\delta_{\gamma\alpha}+u_{\gamma}\delta_{\alpha\beta}\right)\Big{]}- \frac{\mathcal{A}_{i}+\mathcal{B}_{i}+\mathcal{C}_{i}}{12c_{s}^{4}}\rho[ \theta-1](1-\zeta)\Big{\}}\,, \tag{143}\]
with additional information projected onto fourth order polynomials \(\mathcal{A}_{i}\), \(\mathcal{B}_{i}\) and \(\mathcal{C}_{i}\). In the model, \(\kappa\) and \(\zeta\) are free parameters. For instance, \((\kappa,\zeta)=(1,1-\theta)\) corresponds to the density-based model of [117], while \((\kappa,\zeta)=(0,0)\) yields the pressure-based model of (136).
Successful applications of the resulting generic model include the modelling of
* Hele-Shaw cell [87];
* Turbulent premixed combustion burners [90; 122];
* Turbulent lifted H\({}_{2}\)-air jet flame [123];
* Thermo-acoustic instabilities [122, 124];
* Cellular detonation structure [93];
with the last two points being illustrated in Fig. 12.
#### 3.2.4 Low Mach thermo-compressible pressure-based solver
In 2019, Hosseini et al. proposed a low Mach lattice Boltzmann scheme for simulations of combustion, and more generally of dilatable flows [125]. A low Mach reduction of the fully compressible models of the previous section was also proposed in [126]. The scheme is categorized as low Mach in the sense that it follows the philosophy of Majda's zero-Mach model [127] where, after a Helmholtz decomposition of the velocity field, the divergence-free part is obtained via Poisson's equation and the curl-free part from the species and energy fluxes. Unlike Majda's zero-Mach model, here the solver for the _divergence-free_ component allows for a certain level of compressibility, i.e. spurious acoustic waves. To recover this modified set of macroscopic equations the model makes use of the following modified kinetic system [125]:
\[\frac{\partial g_{i}^{\prime}}{\partial t}+c_{i\alpha}\frac{\partial g_{i}^{ \prime}}{\partial x_{\alpha}}=\frac{1}{\tau}\left({g_{i}^{\mathrm{eq}}}^{ \prime}-g_{i}^{\prime}\right)+\Xi_{i}, \tag{144}\]
where
\[g_{i}^{\prime}=w_{i}p_{h}+c_{s}^{2}\left(f_{i}-w_{i}\rho\right), \tag{145}\]
and the source term \(\Xi_{i}\) is defined as:
\[\Xi_{i}=c_{s}^{2}\left(\frac{f_{i}^{\mathrm{eq}}}{\rho}-w_{i}\right)\left(c_ {i\alpha}-u_{\alpha}\right)\frac{\partial\rho}{\partial x_{\alpha}}+w_{i}c_{ s}^{2}\rho\frac{\partial u_{\alpha}}{\partial x_{\alpha}}. \tag{146}\]
In this model, hydrodynamic pressure \(p_{h}\) and velocity \(\mathbf{u}\) are coupled through the distribution function via:
\[\sum_{i=0}^{Q-1}g_{i}^{\prime} =p_{h}, \tag{147a}\] \[\sum_{i=0}^{Q-1}c_{i\alpha}g_{i}^{\prime} =\rho c_{s}^{2}u_{\alpha}, \tag{147b}\]
while density \(\rho\) is now a variable computed locally through the ideal equation of state:

\[\rho=\frac{p_{th}}{\bar{r}T}. \tag{148}\]

Figure 12: Illustration of successful applications of the generic HRR model (143): (left) a cycle of a thermo-acoustic instability in the PRECCINSTA burner [122], and (right) 2-D detonation cellular structure with varying activation energy [93].
The source of divergence appearing in Eq. (146) is computed via the continuity equation combined with the energy and species balance equations:
\[\frac{\partial u_{\alpha}}{\partial x_{\alpha}}=-\frac{1}{p_{th}}\frac{dp_{th}} {dt}+\frac{1}{T}\left(\frac{\partial T}{\partial t}+u_{\alpha}\frac{\partial T }{\partial x_{\alpha}}\right)+\sum_{k=1}^{N_{sp}}\frac{\bar{W}}{W_{k}}\frac{1 }{T}\left(\frac{\partial Y_{k}}{\partial t}+u_{\alpha}\frac{\partial Y_{k}}{ \partial x_{\alpha}}\right), \tag{149}\]
where summation over the repeated index \(\alpha\) is implied. A multi-scale analysis of this kinetic model shows that the following balance equation is effectively applied to the hydrodynamic pressure [34]:
\[\frac{1}{\rho c_{s}^{2}}\partial_{t}p_{h}+\frac{\partial u_{\alpha}}{\partial x _{\alpha}}=-\frac{1}{p_{th}}\frac{dp_{th}}{dt}+\frac{1}{T}\left(\frac{ \partial T}{\partial t}+u_{\alpha}\frac{\partial T}{\partial x_{\alpha}} \right)+\sum_{k=1}^{N_{sp}}\frac{\bar{W}}{W_{k}}\frac{1}{T}\left(\frac{ \partial Y_{k}}{\partial t}+u_{\alpha}\frac{\partial Y_{k}}{\partial x_{ \alpha}}\right), \tag{150}\]
while for momentum, as for the classical lattice Boltzmann with second-order equilibrium, the Navier-Stokes equation is recovered with a deviation of order \(\propto\left\|\mathbf{u}\right\|^{3}\delta t^{3}/\delta x^{3}\) in both diagonal and deviatoric components of the viscous stress tensor.
Note that after integration along characteristics, the discrete time evolution equations for the re-defined distribution function \(\vec{g^{\prime}}_{i}\) are obtained as:
\[\vec{g^{\prime}}_{i}(\mathbf{x}+\mathbf{c}_{i}\delta t,t+\delta t)-\vec{g^{\prime}}_{ i}(\mathbf{x},t)=\frac{\delta t}{\bar{\tau}}\left(\vec{g^{\prime}}_{i}^{\rm eq}( \mathbf{x},t)-\vec{g^{\prime}}_{i}(\mathbf{x},t)\right)+\left(1-\frac{\delta t}{2\bar{ \tau}}\right)\Xi_{i}, \tag{151}\]
while moments are computed as:
\[\sum_{i=0}^{Q-1}\vec{g^{\prime}}_{i}+\frac{\delta tc_{s}^{2}}{2} \left(\rho\frac{\partial u_{\alpha}}{\partial x_{\alpha}}+u_{\alpha}\frac{ \partial\rho}{\partial x_{\alpha}}\right)=p_{h}, \tag{152a}\] \[\sum_{i=0}^{Q-1}c_{i\alpha}\vec{g^{\prime}}_{i}=\rho c_{s}^{2}u_ {\alpha}. \tag{152b}\]
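For concreteness, the snippet below evaluates the velocity-divergence constraint of Eq. (149), which feeds the source term \(\Xi_i\) of Eq. (146), on a one-dimensional grid. All fields (temperature, mass fractions, their previous time levels, molar masses, thermodynamic pressure) are toy inputs standing in for what the coupled energy/species solver would provide; the species term is written exactly as it appears in Eq. (149).

```python
import numpy as np

nx, dx, dt = 128, 1.0e-4, 1.0e-6
x = np.arange(nx) * dx
u = 0.5 * np.ones(nx)
T     = 1500.0 + 300.0 * np.exp(-((x - x.mean()) / (10 * dx)) ** 2)   # toy temperature field
T_old = T - 5.0                                                        # previous time level (toy)
Y     = {"F": 0.05 * np.ones(nx), "O2": 0.20 + 0.01 * np.sin(2 * np.pi * x / x[-1])}
Y_old = {k: v - 1.0e-4 for k, v in Y.items()}
W     = {"F": 16.0e-3, "O2": 32.0e-3}        # molar masses [kg/mol] (toy values)
W_bar = 28.0e-3                              # mean molar mass, taken constant here (assumption)
p_th, dp_th_dt = 101325.0, 0.0               # open domain: constant thermodynamic pressure

div_u = -dp_th_dt / p_th \
        + ((T - T_old) / dt + u * np.gradient(T, dx)) / T
for k in Y:                                  # species contribution, following Eq. (149) as written
    div_u += (W_bar / W[k]) * ((Y[k] - Y_old[k]) / dt + u * np.gradient(Y[k], dx)) / T
# div_u enters Xi_i through the w_i * c_s^2 * rho * (du_alpha/dx_alpha) term of Eq. (146)
```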
While only the most basic single relaxation time form of this model is introduced here, interested readers can readily get access to more advanced realizations, for instance via the Cumulants-based multiple relaxation time collision operator in [128, 32].
Over the past couple of years, this low Mach lattice Boltzmann solver, in combination with shock-capturing finite-difference schemes such as the weighted essentially non-oscillatory (WENO) scheme, has been used to model a variety of complex reacting flow configurations, including
* Turbulent combustion [129];
* Combustion in swirl burner [128];
* Combustion in porous media [130].
Some of these applications are illustrated in Fig. 13. A simpler form of this model was also used in [131] to
model droplet combustion. In terms of limitations, the model is - as its name indicates - limited to the low Mach regime. Furthermore, as mentioned previously, the Navier-Stokes-level viscous stress tensor admits deviations, for which corrections are to be published in an upcoming article. Finally, exact conservation is a topic that needs improvement here, as the form of the energy and species balance equations, the finite-difference discretizations and the curved boundary treatment all lead to a loss of exact conservation.
### Performance of multi-physics LBM solvers
Let us now provide details regarding the advantages and limitations of multi-physics LBM solvers, and how they compare to classical LBM solvers targeting mainly low-Mach aerodynamic and aero-acoustic applications.
Classical LBM solvers have found their success due to three main reasons : (i) ability to tackle complex geometries in a simple way, (ii) low dissipation owing to the streaming step, and (iii) a reduced computational cost due to the stencil compactness and octree structure. These advantages come at the cost of a heavier memory load (with more degrees of freedom), a complex treatment of boundary conditions (owing to the non body-fitted mesh), and filtering problems in grid refinement areas where both spatial and time discretizations are halved/doubled.
Let us now consider the three aforementioned advantages and see if they apply to LBM multiphysics solvers. (i) Ability to tackle complex geometries is preserved, as the discretization remains identical. (ii) The low dissipation property is more model dependent. For multiple distribution functions, each distribution is solved using the same algorithm so that dissipation is supposed to remain identical. For hybrid approaches, however, a separate numerical scheme is used for energy and species equations, which may lead to additional dissipation [84] on the entropy/enthalpy Kovasznay modes. Comparing the computational cost (iii) between LBM and Navier-Stokes solvers is a long-standing issue. While a thorough comparison is still lacking, some LBM studies report reduced computational times (RCT), consistently below \(5\mu\)s per time step and grid point for relevant combustion problems [89; 90; 122; 132].

Figure 13: Illustration of some of the recent applications of the low Mach model of [125]. (a) Simulation of PRECCINSTA swirl injector at equivalence ratio 0.83 [128], the red surface illustrating the flame structure. (b) Simulation of deflagrating flame in a chamber with obstacles [128]. (c) Simulation of flame propagation in randomly-generated porous media [130].
We list in Tab. 2 different references tackling combustion problems using multi-physics LB solvers (regardless of the strategy). This fast-expanding list is a clear indication that multi-physics LBM solvers have now reached sufficient maturity for combustion applications.
## 4 Conclusion and discussion
While extension of the lattice Boltzmann method to the simulation of combustion happened slower than other areas of application such as incompressible and multi-phase flows, the progress documented in recent years has laid the ground for widespread use and application of lattice Boltzmann to combustion. The complex simulations reported in the literature, see for instance [93; 122; 128; 130], show that the numerical approach has reached a level of maturity allowing it to be applied to realistic configurations. In this contribution we focused on the development of lattice Boltzmann-based approaches to model combustion and discussed some of the most pressing challenges and solutions proposed in the literature.
The evolution of the literature shows that one of the major challenges preventing progress in that area was the development of stable and efficient solutions to extend the lattice Boltzmann solvers to compressible regimes. Stability has been one of the most restricting issues in that area. While use of higher order lattices is one straightforward approach to move up to compressible flows, it has been observed that apart from the additional memory and computation costs stemming from the larger number of discrete velocities, higher-order quadrature are subject to more restrictive stability conditions, especially on the temperature. This has led the community to opt for approaches relying on low-order quadratures, i.e. third-order, which had shortcomings regarding the Navier-Stokes-level viscous stress tensor due to insufficient degrees of freedom in the model. Introduction of correction terms for the viscous stress tensor along with more robust collision operators has now paved the way for simulations involving large temperature variations.
Other issues specific to the simulation of combustion in the context of the lattice Boltzmann method are tied to additional balance equations for species and energy. A number of different strategies have been devised for these additional fields. Some rely on developing kinetic models and, therefore, lattice Boltzmann solvers for multi-species fluids, while others prefer either lattice-Boltzmann-based passive scalar solvers or classical FV/FD solvers for the balance equations, leading to a hybrid formulation.
| Canonical configuration | References |
| --- | --- |
| Laminar premixed and diffusion flame | [20; 34; 113; 125] |
| Reacting Taylor-Green vortex | [89; 129] |
| Circular expanding flames | [133] |
| Darrieus-Landau instabilities | [87; 88] |
| Thermo-acoustic instabilities | [122; 124] |
| Turbulent premixed burner (LES) | [90; 122; 128] |
| Flame in porous media | [130] |
| Detonations | [93; 111] |

Table 2: A list of canonical combustion problems treated using lattice Boltzmann methods in the recent literature.

While state-of-the-art lattice Boltzmann solvers are now routinely used for complex combustion configurations involving complex geometries and turbulent flows, a number of technical challenges still persist:
* One of the remaining challenges is to obtain exactly conservative curved boundary conditions for the lattice Boltzmann solver. While the bare half-way bounce-back method, resulting in a stair-case approximation to the geometry [134, 135], ensures mass conservation, all curved treatments presented in the literature, see for instance [136, 137, 138], result in a loss of conservativity of the boundary condition. For a more detailed discussion of conservation issues in curved boundary treatments, interested readers are referred to [29, 139, 140]. A number of routes can be taken to overcome this issue, such as the use of immersed boundaries, which would come at the cost of diffuse interfaces, or the use of volumetric/flux-based boundary treatments, see for instance [141, 142].
* Development and implementation of conservative and efficient dynamic grid-refinement strategies is also another topic to be further developed in the future. Although grid-refinement in the context of the lattice Boltzmann method has been developed and used since the early 2000's, see for instance [143, 144, 145, 146, 147], mass-conservation, spurious currents at refinement interfaces and dynamic refinement are still topics of discussion in the literature.
At the end of this detailed review regarding past achievements obtained for combustion thanks to LB simulations, it is now time to look briefly to the future. This work is obviously not the first review concerning lattice Boltzmann simulations, and previous studies have sometimes included long-term perspectives, most prominently in the work by Succi [148] - recently updated in [149]. It is necessary to evaluate recent evolutions again in the light of those predictions. One must keep in mind that applications are the focus of the present review, as stated in the title, so that it does not appear meaningful to include exceedingly exotic concepts here.
In the same manner, the future of high-fidelity combustion simulations has been discussed in previous reviews, for instance [150, 151, 152]. Here also, reflecting on the corresponding statements will be useful. Of course, the aspects already discussed at length in the core of the present article will not be repeated here in the interest of space. Since the focus has been placed on important methodological aspects of LB in this review, a bird's view seems more appropriate to finish. As such, the main points emerging for the foreseeable future would read as follows.
* - or related methods like Volume of Pixel - appears as a promising solution [155].
* Multi-physics and multiscale applications, adaptivity: with the growing performance of existing computational platforms and the ever-increasing complexity of the target applications, problems
involving a variety of physical and chemical processes - mostly coupled in a strongly non-linear manner - and taking place over a broad diversity of scales in time and space progressively become the rule. While concentrating here on combustion applications, it must still be recognized that LB has now virtually been used successfully for almost any kind of flows (and even beyond fluid mechanics). The next step - certainly soon to be reached - will for example involve accurate LB simulations of turbulent multiphase reacting flows including realistic chemistry, thermodynamics, and transport models for species and heat, up to radiative heat transfer. Such multi-physics configurations typically involve a broad range of relevant scales in time and space [156], from micrometers to centimeters for living beings [157], or even "from inside protons to the outer Universe", citing the optimistic statement of [149]. In that case, an optimal combination of different numerical approaches will become necessary to allow for a numerical solution of the full configuration, for instance by coupling LB to Molecular Dynamics simulations [158, 159, 160]. Multiscale issues can be partly mitigated by using adaptivity - in particular in space for LB (local grid refinement/coarsening), a solution that has already been discussed in this review [161, 122].
* Machine Learning (ML): for combustion applications - usually taking place in the turbulent regime - two lines of research will certainly be followed in the near future. One will concentrate on the thermoreactive part of the problem, for instance using Deep Neural Networks to describe kinetics, potentially leading to very large savings in computing time and memory [162]. The other line will concentrate on ML at subgrid scale (SGS) when using LB for Large-Eddy Simulations (LES) of turbulent reacting flows [122, 128]. In that case, either the behaviour of the pure turbulent flow will be described by a SGS model based on ML [163] - an approach now well established in conventional CFD [164]; or ML could be used to represent directly turbulent/combustion coupling at subgrid scale, an even more challenging but potentially more rewarding solution, for which much remains to be done [165].
* New computational architectures: Though this might not be immediately clear for young scientists, the performance of a numerical approach is not constrained only by considerations from applied mathematics (stability, convergence, dissipation, dispersion), but is also directly impacted by the details of the computational architecture on which large-scale simulations are carried out. In that sense, the comparison of different methods in terms of computational performance completely depends on the employed system. While method 1 might be order-of-magnitude faster than method 2 on a conventional, single-core system, the comparison might be completely reversed on a large, fine-grain parallel computer, or when computing on Graphical Processing Units (GPU). In that sense, an essential advantage of LB (in addition to the error-free streaming step and linearity of the basic operator) is its locality. In the standard LB formulation, only first-neighbour communications are requested, making it perfectly suited for the currently employed computer architectures; this is one essential explanation to understand the growing success of LB for many applications. Nobody knows which computer architecture will dominate in 20 years. In his review from 2015, Succi [148] already mentioned the suitability of LB for Quantum Computing (QC). Indeed, porting high-order CFD methods involving unstructured grids on QC systems sounds like a nightmare, and LB could here again profit from its
apparent simplicity and locality. Lattice Boltzmann on Quantum Computers is a subject of current research [166; 167]. Still, QC systems being currently barely available for researchers, the impact of Quantum Computing on future LB simulations cannot be reliably estimated. The same applies to even more exotic architectures like biological computers, that have not even entered a preliminary test-phase.
## Acknowledgements
S.A.H. and I.K. would like to acknowledge the financial support of European Research Council (ERC) through the Advanced Grant no. 834763-PonD and computational resources provided by the Swiss National Supercomputing Center CSCS under grant no. s1212. P.B. acknowledges financial support from the French National Research agency (grants ANR-20-CE05-0009 & ANR-21-CE05-0028), as well as computational resources provided by Centre de Calcul Intensif d'Aix-Marseille and GENCI, France (Grant A0132B11951). D.T. acknowledges financial support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in TRR 287 (Project-ID No. 422037413).
|
2308.00151 | Classical stochastic representation of quantum mechanics | We show that the dynamics of a quantum system can be represented by the
dynamics of an underlying classical systems obeying the Hamilton equations of
motion. This is achieved by transforming the phase space of dimension $2n$ into
a Hilbert space of dimension $n$ which is obtained by a peculiar canonical
transformation that changes a pair of real canonical variables into a pair of
complex canonical variables which are complex conjugate of each other. The
probabilistic character of quantum mechanics is devised by treating the wave
function as a stochastic variable. The dynamics of the underlying system is
chosen so as to preserve the norm of the state vector. | Mário j. de Oliveira | 2023-07-31T21:02:43Z | http://arxiv.org/abs/2308.00151v1 | # Classical stochastic representation of quantum mechanics
###### Abstract
We show that the dynamics of a quantum system can be represented by the dynamics of an underlying classical system obeying the Hamilton equations of motion. This is achieved by transforming the phase space of dimension \(2n\) into a Hilbert space of dimension \(n\) which is obtained by a peculiar canonical transformation that changes a pair of real canonical variables into a pair of complex canonical variables which are complex conjugate of each other. The probabilistic character of quantum mechanics is devised by treating the wave function as a stochastic variable. The dynamics of the underlying system is chosen so as to preserve the norm of the state vector.
The earliest formulations of quantum mechanics were given by Schrodinger, who introduced the quantum wave equation that bears his name, and by Heisenberg who introduced the quantum matrix mechanics. These two formulations were shown to be equivalent and, together with other formulations, are understood as different representations of the same theory. The standard formulation of quantum mechanics considers that the quantum state is a vector with complex components belonging to the Hilbert vector space. The time evolution of the quantum state is given by a unitary transformation whose generator is the Hamiltonian operator acting on the Hilbert space. This type of evolution guarantees that the norm of the state vector is preserved for all times.
Quantum mechanics [1; 2; 3; 4; 5; 6; 7; 8] as a science of motion differs fundamentally from classical mechanics [9; 10; 11; 12]. For instance, the mathematical objects corresponding to real physical quantities such as position and momentum are very distinct in the two theories. In quantum mechanics they are operators acting on a Hilbert space and the possible outcomes of an observable are the eigenvalues of the corresponding operator. In fact classical and quantum mechanics are conflicting scientific theories of the same real phenomena and just one of them could give the correct predictions. It is admitted indeed that classical mechanics does not correctly describes nature at small scales. At large scales quantum mechanics is reduced to classical mechanics and both predict the same results.
The question we address here is not whether the science of quantum mechanics is equivalent to the science of classical mechanics, as they are not. The question we address is whether the abstract framework of quantum mechanics can in some sense be equivalent to the abstract framework of classical mechanics. We answer this question positively by showing that the dynamics of a state vector belonging to a Hilbert space of dimension \(n\) is equivalent to the dynamics of a classical system with \(n\) degrees of freedom. This classical system we call the _underlying_ system to avoid confusion with a real system described by classical mechanics. The wave equation is then understood as related to a pair of classical canonical variables, its real and imaginary parts being proportional to the coordinate and momentum, respectively. The underlying system cannot be any classical system, but only those whose motion preserves the norm of the complex wave function.
The idea of expressing classical mechanics in a Hilbert space that we use here was considered by Koopman [13], who showed that canonical transformations are equivalent to unitary transformations if the state functions in phase space are square integrable [14]. This result was also used by von Neumann to formulate classical mechanics as an operational theory [15].
Quantum mechanics has a probabilistic character that is particularly manifest in the standard interpretation of quantum mechanics according to which the square of the absolute value of the wave function is a probability. Here, the probabilistic character of quantum mechanics is devised by considering that the wave function is a stochastic variable, that is, a time dependent random variable. Accordingly, the wave vector in the Hilbert space follows a stochastic trajectory. In this sense, the present stochastic representation is in accordance with the consistent history interpretation of quantum mechanics [8].
Let us consider the representation of classical mechanics by the Hamilton equations of motion. In this representation, a state is defined as a vector of the phase space spanned by the canonical variables. The dimension of the vector phase space equals \(2n\) where \(n\) is the number of degrees of freedom, which is the number of pairs of canonically conjugate variables. The canonical Hamilton equations of motion are given by
\[\frac{dq_{k}}{dt}=\frac{\partial\mathcal{H}}{\partial p_{k}},\qquad\frac{dp_{k }}{dt}=-\frac{\partial\mathcal{H}}{\partial q_{k}}, \tag{1}\]
where \(\mathcal{H}\) is the Hamiltonian function and \((q_{k},p_{k})\) denotes one of the \(n\) pairs of canonically conjugate variables.
The pairwise formulation of the canonical equations of motion allows a _peculiar_ transformation [9] of the pair of real canonical variables \((q_{k},p_{k})\) to a pair of complex canonical variables \((z_{k},z_{k}^{*})\). This peculiar transformation is accomplished by \(z_{k}=\alpha_{k}q_{k}+i\beta_{k}p_{k}\) where \(\alpha_{k}\) and \(\beta_{k}\) are real constants such that \(\alpha_{k}\beta_{k}=1/2\mu\), and \(\mu\) is some constant with the physical dimension of coordinate\(\times\)momentum. This transformation guarantees that the pair \((z_{k},z_{k}^{*})\) is a pair of canonically conjugate variables. In terms of the new variables the equations of motion become
\[i\mu\frac{dz_{k}}{dt}=\frac{\partial\mathcal{H}}{\partial z_{k}^{*}},\qquad i \mu\frac{dz_{k}^{*}}{dt}=-\frac{\partial\mathcal{H}}{\partial z_{k}}, \tag{2}\]
where \(z_{k}\) and \(z_{k}^{*}\) are dimensionless and treated as independent variables, and \(\mathcal{H}\) is a real function of the set of variables \(\{z_{k}\}\) and \(\{z_{k}^{*}\}\). The Hamilton equations can also be written in terms of Poisson brackets
\[i\mu\frac{dz_{k}}{dt}=\{z_{k},\mathcal{H}\},\qquad i\mu\frac{dz_{k}^{*}}{dt}=\{z _{k}^{*},\mathcal{H}\}. \tag{3}\]
The Poisson brackets between two state functions \(\mathcal{A}\) and \(\mathcal{B}\) are defined by
\[\{\mathcal{A},\mathcal{B}\}=\sum_{j}\left(\frac{\partial\mathcal{A}}{\partial z _{j}}\frac{\partial\mathcal{B}}{\partial z_{j}^{*}}-\frac{\partial\mathcal{B} }{\partial z_{j}}\frac{\partial\mathcal{A}}{\partial z_{j}^{*}}\right), \tag{4}\]
and we remark that \(\{z_{j},z_{k}^{*}\}=\delta_{jk}\).
The time evolution of a state function \(\mathcal{A}\), that is, as a function of the set of variables \(\{z_{j}\}\) and \(\{z_{k}^{*}\}\), is given in terms of the Poisson brackets by
\[i\mu\frac{d\mathcal{A}}{dt}=\{\mathcal{A},\mathcal{H}\}, \tag{5}\]
which follows from (3).
As the two equations of motion for \(z_{k}\) and \(z_{k}^{*}\) are the complex conjugate of each other, we may consider them to be just one equation in complex variables. Thus we are representing the motion of a classical system as a trajectory in a vector space with \(n\) dimensions with complex components \(z_{k}\), which defines a Hilbert vector space.
We assume the Hamiltonian \(\mathcal{H}\) to be a bilinear function in the complex variables,
\[\mathcal{H}=\sum_{jk}H_{jk}z_{j}^{*}z_{k}, \tag{6}\]
where \(H_{jk}\) are understood as the elements of a matrix \(H\), which is Hermitian because \(\mathcal{H}\) is real. The norm \(\mathcal{N}\) of a state \(\{z_{k}\}\) is defined by
\[\mathcal{N}=\sum_{j}z_{j}^{*}z_{j}, \tag{7}\]
and we see that it is a constant of the motion since it commutes in the Poisson sense with the Hamiltonian, \(\{\mathcal{N},\mathcal{H}\}=0\). Therefore we may set \(\mathcal{N}\) equal to a constant which we choose to be \(1\).
If we replace the expression of \(\mathcal{H}\) given by the equation (6) into the equation of motion (3) we reach the equation
\[i\mu\frac{dz_{j}}{dt}=\sum_{k}H_{jk}z_{k}, \tag{8}\]
The variables \(z_{k}\) are understood as the components of a state vector \(\psi\) of the Hilbert space, that is,
\[\psi=\sum_{j}z_{j}\phi_{j}. \tag{9}\]
where the vectors \(\{\phi_{j}\}\) form a complete basis of the Hilbert space. Defining the operator \(\hat{H}\) by
\[\hat{H}\phi_{k}=\sum_{j}H_{jk}\phi_{j}, \tag{10}\]
the equation (8) acquires the form
\[i\mu\frac{d}{dt}\psi=\hat{H}\psi, \tag{11}\]
which is the Schrodinger equation if we set the constant \(\mu\) equal to the Planck constant,
\[\mu=\hbar. \tag{12}\]
In accordance with the postulates of quantum mechanics, the possible outcomes of an observable \(\mathscr{A}\) are the eigenvalues of a matrix \(A\). If the system is in a state \(\psi\), given by (9), the quantum average of this observable is given by
\[\mathcal{A}=\sum_{jk}A_{jk}z_{j}^{*}z_{k}, \tag{13}\]
where \(A_{jk}\) are the elements of a matrix \(A\) whose eigenvalues are the possible outcomes of the observable \(\mathscr{A}\). We interpret \(\mathcal{A}\) as a state function related to the underlying classical system.
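As a minimal numerical illustration of this correspondence (a sketch, assuming units with \(\mu=\hbar=1\) and a small randomly generated Hermitian matrix \(H\)), the snippet below integrates the classical complex Hamilton equation (8) and checks that the norm (7) is conserved and that the trajectory coincides with the unitary Schrodinger propagation of Eq. (11).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2.0                  # Hermitian matrix H_{jk} of Eq. (6)
z0 = rng.normal(size=n) + 1j * rng.normal(size=n)
z0 /= np.linalg.norm(z0)                     # norm N = 1

dt, nsteps = 1e-3, 2000
z = z0.copy()
for _ in range(nsteps):
    k1 = -1j * H @ z                         # i dz/dt = H z, Eq. (8) with mu = 1
    k2 = -1j * H @ (z + 0.5 * dt * k1)       # second-order (midpoint) Runge-Kutta step
    z += dt * k2

z_exact = expm(-1j * H * dt * nsteps) @ z0   # exact unitary (Schrodinger) propagation
print(abs(np.vdot(z, z)) - 1.0)              # norm drift is tiny: {N, H} = 0
print(np.linalg.norm(z - z_exact))           # classical trajectory matches the quantum evolution
```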
In the following we change the equations of motion for the purpose of treating the dynamical variables as stochastic variables, which we now denote by \(x_{k}\). This is accomplished by adding a white noise to equation (2). We choose a noise that changes the phase \(\theta_{j}\) of \(x_{j}=r_{j}e^{i\theta_{j}}\) but not its absolute value \(r_{j}\). To this end, we write equation (2) in the polar form
\[\frac{d\theta_{j}}{dt}=\frac{1}{2\mu r_{j}}\frac{\partial\mathcal{H}}{\partial r _{j}}+\zeta_{j}, \tag{14}\]
\[\frac{dr_{j}}{dt}=-\frac{1}{2\mu r_{j}}\frac{\partial\mathcal{H}}{\partial \theta_{j}}, \tag{15}\]
where \(\zeta_{j}\) is a stochastic variable with zero mean, and the Hamiltonian function is given by
\[\mathcal{H}=\sum_{jk}H_{jk}x_{j}^{*}x_{k}. \tag{16}\]
From the stochastic equation one can obtain the equation that governs the time evolution of the probability distribution [16; 17]. In the present case the probability distribution, which we denote by \(\mathscr{P}(x,x^{*})\), is defined on the Hilbert space. The probability distribution obeys the Fokker-Planck equation
\[\frac{\partial\mathscr{P}}{\partial t}=\frac{1}{i\mu}\{\mathscr{P},\mathcal{H }\}+\frac{1}{2}\sum_{jk}\gamma_{jk}\frac{\partial^{2}\mathscr{P}}{\partial \theta_{j}\partial\theta_{k}}, \tag{17}\]
where \(\gamma_{jk}=\gamma_{kj}\geq 0\).
As the noise \(\zeta_{i}\) does not change \(x_{j}^{*}x_{j}\), it will not change the norm
\[\mathcal{N}=\sum_{j}x_{j}^{*}x_{j}, \tag{18}\]
and taking into account that \(\{{\cal N},{\cal H}\}=0\), we conclude that \({\cal N}\) is strictly constant along a trajectory in the Hilbert space, despite the fact that the trajectory is stochastic. This result allows us to choose the norm to be equal to 1. We will also choose the constants \(\gamma_{jk}\) to be all equal so that the noise will not change the phase of \(x_{j}x_{k}^{*}\).
The solution of the Fokker-Planck equation (17) is a multivariate Gaussian distribution in the variables \(x_{j}\) and \(x_{k}^{*}\). Therefore, to construct \({\mathscr{P}}(x,x^{*})\), it suffices to determine the averages \(\langle x_{j}\rangle\) and the covariances \(\rho_{jk}=\langle x_{j}x_{k}^{*}\rangle\). From equation (17) we reach the equations
\[\frac{d}{dt}\langle x_{j}\rangle=\frac{1}{i\mu}\sum_{k}H_{jk}\langle x_{k} \rangle-\frac{\gamma}{2}\langle x_{j}\rangle, \tag{19}\]
\[i\mu\frac{d}{dt}\rho_{jk}=\sum_{\ell}(H_{j\ell}\rho_{\ell k}-\rho_{j\ell}H_{ \ell k}), \tag{20}\]
and we remark that there is no term corresponding to the noise in the last equation due to our choice of the same value of \(\gamma_{jk}=\gamma\). Taking into account that the norm (18) equals the unity then
\[\sum_{j}\rho_{jj}=1. \tag{21}\]
Defining \(\rho\) as the matrix with elements \(\rho_{jk}\), the last equation gives \({\rm Tr}\rho=1\), and the equation (20) acquires the form
\[i\mu\frac{d\rho}{dt}=[H,\rho]. \tag{22}\]
which is the quantum Liouville equation.
Two cases should be considered concerning the covariances \(\rho_{jk}\). If, at the initial time, \(\rho_{jk}=z_{j}^{*}z_{k}\), this form will be preserved at all times and \(z_{j}\) is given by the equation
\[i\mu\frac{dz_{k}}{dt}=\sum_{j}z_{j}H_{jk}, \tag{23}\]
which is identified with equation (8) and thus equivalent to the Schrodinger equation. In this case \({\rm Tr}\rho^{2}=({\rm Tr}\rho)^{2}=1\), which corresponds to the quantum mechanics of pure states. It should be pointed out that (23) is a consequence of the quantum Liouville equation (22). In other words, \(\rho_{jk}=z_{j}^{*}z_{k}\) solves the equation (22) as long as \(z_{j}\) satisfies the equation (23). We remark in addition that \(z_{j}\) is not the average \(\langle x_{j}\rangle\). In fact \(\langle x_{j}\rangle\) vanishes for long times whereas \(z_{j}\) does not in general because
\[\sum_{j}z_{j}^{*}z_{j}=1, \tag{24}\]
which follows from (21). If \({\rm Tr}\rho^{2}<1\), then it is not possible to write \(\rho_{jk}\) as a product \(z_{j}^{*}z_{k}\), and this corresponds to the quantum mechanics of mixed states.
The average \(\bar{A}\) of a state function
\[{\cal A}=\sum_{jk}A_{jk}x_{j}^{*}x_{k} \tag{25}\]
is given by
\[\bar{A}=\sum_{jk}A_{jk}\rho_{kj}={\rm Tr}A\rho. \tag{26}\]
In the case of pure state, \(\rho=zz^{\dagger}\), where \(z\) is a column matrix with elements \(z_{j}\) and \(z^{\dagger}\) is the row matrix with elements \(z_{j}^{*}\), and \(\bar{A}\) is reduced to the usual quantum average \(\bar{A}=z^{\dagger}Az\).
We have considered above a noise that could change the phase of the variable \(x_{k}\) but not its absolute value. This led us to the quantum Liouville equation and to the Schrodinger equation. We consider now a more generic noise that allows us to reach the Lindblad equation that describes open quantum systems [18; 19].
We add a white noise to equation (8) which now reads
\[\frac{dx_{j}}{dt}=f_{j}+\zeta_{j}, \tag{27}\]
where
\[f_{j}=\frac{1}{i\mu}\sum_{k}H_{jk}x_{k}, \tag{28}\]
and \(\zeta_{j}\) is a stochastic variable with zero mean that we choose to be linear in the variables \(x_{j}\). The noise should be chosen to conserve the norm (18) in the strict sense along a stochastic trajectory. However, we relax this condition and require that it is conserved on the average.
A precise meaning of the stochastic equation (27) is provided by writing it in a discrete time version [20], which is
\[\Delta x_{j}=\tau f_{j}+i\sqrt{\tau}\sum_{k}G_{jk}x_{k}-\frac{\tau}{2}\sum_{k} K_{jk}x_{k} \tag{29}\]
where \(\tau\) is the time interval and \(\Delta x_{j}\) is the corresponding increment in the dynamical variable \(x_{j}\). The quantities \(G_{jk}\) and \(K_{jk}\) are random variables to be found in such a way that the norm (18) is preserved.
Let us determine the increment in \(x_{j}x_{k}^{*}\) during an interval of time \(\tau\),
\[\Delta(x_{j}x_{k}^{*})=x_{j}\Delta x_{k}^{*}+x_{k}^{*}\Delta x_{j}+\Delta x_{j} \Delta x_{k}^{*}. \tag{30}\]
Using (29), we find up to terms of order \(\tau\)
\[\Delta(x_{j}x_{k}^{*})=x_{j}\tau f_{k}^{*}+x_{k}^{*}\tau f_{j}\]
\[+i\sqrt{\tau}\sum_{n}(G_{jn}x_{n}x_{k}^{*}-G_{kn}^{*}x_{j}x_{n}^{*})+\tau\sum _{n\ell}G_{kn}^{*}G_{j\ell}x_{\ell}x_{n}^{*}\]
\[-\frac{\tau}{2}\sum_{n}(K_{kn}^{*}x_{j}x_{n}^{*}+K_{jn}x_{n}x_{k}^{*}). \tag{31}\]
If we let \(k=j\) in this equation and sum in \(j\), we find the increment in the norm (18), which is
\[\Delta{\cal N}=i\sqrt{\tau}\sum_{jn}(G_{nj}-G^{*}_{jn})x_{j}x_{n}^{*}\]
\[+\frac{\tau}{2}\sum_{n\ell}(2\sum_{j}G^{*}_{jn}G_{j\ell}-K^{*}_{\ell n}-K_{n \ell})x_{\ell}x_{n}^{*}. \tag{32}\]
We choose \(K_{jk}\) so that the second summation vanishes, that is,
\[K_{jk}=\sum_{\ell}G^{*}_{\ell j}G_{\ell k}. \tag{33}\]
Next we choose \(G_{jk}=g_{jk}\xi_{jk}\), where \(\xi_{jk}\) are real stochastic variables with zero mean and covariances \(\langle\xi_{jk}\xi_{\ell n}\rangle=1\). If we require \(\Delta{\cal N}\) to vanish in the strict sense, that is, along any stochastic trajectory, then \(g_{jk}\) should equal \(g^{*}_{kj}\), resulting in the vanishing of the first summation of (32). However, we require \(\Delta{\cal N}\) to vanish only on the average, so that no restriction on \(g_{jk}\) is needed, as the first summation of (32) vanishes on the average.
Taking the average of both sides of equation (31), the terms proportional to \(\sqrt{\tau}\) vanish, resulting in the following expression for the time evolution of \(\rho_{jk}=\langle x_{j}x_{k}^{*}\rangle\),
\[\frac{d\rho_{jk}}{dt}=\frac{1}{i\mu}\sum_{\ell}(H_{j\ell}\rho_{\ell k}-\rho_{j \ell}H_{\ell k})+\sum_{n\ell}g_{j\ell}\rho_{\ell n}g^{*}_{kn}\]
\[-\frac{1}{2}\sum_{n\ell}(\rho_{jn}g^{*}_{\ell n}g_{\ell k}+g^{*}_{\ell j}g_{ \ell n}\rho_{nk}). \tag{34}\]
Denoting by \(g\) the matrix with elements \(g_{jk}\), this equation can be written in the form
\[\frac{d\rho}{dt}=\frac{1}{i\mu}[H,\rho]+\frac{1}{2}(2g\rho g^{\dagger}-\rho g ^{\dagger}g-g^{\dagger}g\rho) \tag{35}\]
which is the Lindblad equation for open quantum systems [18; 19].
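A small Monte Carlo sketch of this construction is given below, for an assumed two-level system with \(\hbar=1\) and a simple decay-type matrix \(g\). Many realizations of the discrete update (29), with \(K=g^{\dagger}g\) from Eq. (33) and a single real unit-variance noise per step, are averaged; the resulting \(\rho_{jk}=\langle x_{j}x_{k}^{*}\rangle\) is then compared with a direct Euler integration of the Lindblad equation (35).

```python
import numpy as np

rng = np.random.default_rng(1)
H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)
g = 0.4 * np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # assumed decay-type coupling
K = g.conj().T @ g                                            # Eq. (33)
tau, nsteps, ntraj = 1e-3, 400, 20000

x = np.full((ntraj, 2), 1.0 / np.sqrt(2.0), dtype=complex)    # all trajectories start in (|0>+|1>)/sqrt(2)
for _ in range(nsteps):
    xi = rng.standard_normal(ntraj)[:, None]                  # one real noise per trajectory and step
    drift = tau * (x @ (H.T / 1j)) - 0.5 * tau * (x @ K.T)
    x = x + drift + 1j * np.sqrt(tau) * xi * (x @ g.T)        # discrete update, Eq. (29)
rho_mc = np.einsum("tj,tk->jk", x, x.conj()) / ntraj          # rho_jk = <x_j x_k*>

rho = np.full((2, 2), 0.5, dtype=complex)                     # same initial density matrix
for _ in range(nsteps):                                        # Euler integration of Eq. (35)
    rho = rho + tau * ((H @ rho - rho @ H) / 1j
                       + g @ rho @ g.conj().T - 0.5 * (rho @ K + K @ rho))

print(np.abs(rho_mc - rho).max())   # small: limited by O(tau) and Monte Carlo error
print(np.trace(rho_mc).real)        # ~1: the norm is conserved on the average
```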
We summarize our findings as follows. The dynamics of a quantum system was shown to be represented by an underlying classical system, which turns out to be a collection of interacting classical harmonic oscillators. The coordinates and momenta of the classical particles are understood as the real and imaginary parts of the wave function. The probabilistic character of quantum mechanics is introduced explicitly by treating the wave function as a time-dependent random variable, adding to the Hamilton equations of motion a white noise that preserves the norm of the wave function. The Schrodinger equation and the quantum Liouville equation are obtained when the noise changes the phase but not the absolute value of the wave function.
The present representation obviously does not transform the science of quantum mechanics into the science of classical mechanics. The underlying classical system is not observable, just as the wave function is not. However, the present representation allows an interpretation of quantum mechanics other than the standard interpretation [21; 22; 23]. As the trajectory in the Hilbert space is stochastic, this representation fits the consistent history interpretation of quantum mechanics [8] if we bear in mind that each possible trajectory is a possible history.
|
2309.11375 | Performance update of an event-type based analysis for the Cherenkov
Telescope Array | The Cherenkov Telescope Array (CTA) will be the next-generation observatory
in the field of very-high-energy (20 GeV to 300 TeV) gamma-ray astroparticle
physics. The traditional approach to data analysis in this field is to apply
quality cuts, optimized using Monte Carlo simulations, on the data acquired to
maximize sensitivity. Subsequent steps of the analysis typically use the
surviving events to calculate one set of instrument response functions (IRFs)
to physically interpret the results. However, an alternative approach is the
use of event types, as implemented in experiments such as the Fermi-LAT. This
approach divides events into sub-samples based on their reconstruction quality,
and a set of IRFs is calculated for each sub-sample. The sub-samples are then
combined in a joint analysis, treating them as independent observations. In
previous works we demonstrated that event types, classified using Machine
Learning methods according to their expected angular reconstruction quality,
have the potential to significantly improve the CTA angular and energy
resolution of a point-like source analysis. Now, we validated the production of
event-type wise full-enclosure IRFs, ready to be used with science tools (such
as Gammapy and ctools). We will report on the impact of using such an
event-type classification on CTA high-level performance, compared to the
traditional procedure. | Juan Bernete, Orel Gueta, Tarek Hassan, Max Linhoff, Gernot Maier, Atreyee Sinha | 2023-09-20T14:56:39Z | http://arxiv.org/abs/2309.11375v1 | # Performance update of an event-type based analysis for the Cherenkov Telescope Array
###### Abstract:
The Cherenkov Telescope Array (CTA) will be the next-generation observatory in the field of very-high-energy (20 GeV to 300 TeV) gamma-ray astroparticle physics. The traditional approach to data analysis in this field is to apply quality cuts, optimized using Monte Carlo simulations, on the data acquired to maximize sensitivity. Subsequent steps of the analysis typically use the surviving events to calculate one set of instrument response functions (IRFs) to physically interpret the results. However, an alternative approach is the use of event types, as implemented in experiments such as the _Fermi_-LAT. This approach divides events into sub-samples based on their reconstruction quality, and a set of IRFs is calculated for each sub-sample. The sub-samples are then combined in a joint analysis, treating them as independent observations. In previous works we demonstrated that event types, classified using Machine Learning methods according to their expected angular reconstruction quality, have the potential to significantly improve the CTA angular and energy resolution of a point-like source analysis. Now, we validated the production of event-type wise full-enclosure IRFs, ready to be used with science tools (such as _Gammapy_ and _ctools_). We will report on the impact of using such an event-type classification on CTA high-level performance, compared to the traditional procedure.
## 1 Introduction
The Cherenkov Telescope Array (CTA)1 represents the next-generation observatory in the field of very-high-energy gamma-ray astroparticle physics. It employs two arrays of imaging atmospheric Cherenkov telescopes (IACTs), one for each hemisphere, composed of telescopes of three different sizes. Its optimized configuration provides a major improvement in sensitivity and in angular and energy resolution with respect to the current generation of IACTs over a very broad energy range from 20 GeV up to more than 300 TeV.
Footnote 1: www.cta-observatory.org
The performance of this future observatory is estimated from detailed Monte Carlo (MC) simulations, described by a set of Instrument Response Functions (IRFs). The main IRF components describing the instrument performance to gamma-ray observations are the effective area, the energy dispersion and point-spread function (PSF). These IRFs are then used by science tools (such as gammapy [6] and ctools [10]) to simulate the instrument performance over specific science cases. The methodology to calculate the expected sensitivity and associated IRFs of CTA, as well as their detailed description, has been described in previous contributions (see [2, 4, 8]) and is briefly discussed in section 3.
The _Fermi_ Large Area Telescope (LAT) Collaboration [3] proved that high-level analysis performance can be significantly improved by separating events for which the response of the detector is different into event types and producing specific IRFs for each event type [5]. By including this extra knowledge into the likelihood analysis, multiple benefits are achieved: reducing background contamination, increasing the effective area and sensitivity as well as significantly improving the angular and energy resolution for a subset of the events. Inspired by the success of event types in _Fermi_-LAT, we present in this work the status of an analog implementation for IACTs, specifically for the future CTA.
This work is a natural continuation of Ref. [9], where we demonstrated that event types are able to improve the angular and energy resolution by up to 25% for a point-like source located at the center of the field of view (FoV). This first step did not allow the generalized use of event-type-wise IRFs at the science tools level to properly evaluate their impact over specific science cases.
In this work, we have validated the production of event-type wise offset-dependent point-like and full-enclosure IRFs for CTA (i.e. valid for both point-like or extended sources located anywhere within the FoV). These IRFs, tailored to each event type, are now ready to be used by science tools. We also present the impact of this event-type classification on the high-level performance of CTA, comparing it to the standard procedure (not using event types), as well as evaluate the potential for further improvement with a better event-type classification.
## 2 Event type partitioning
Previous work successfully demonstrated the effectiveness of machine learning (ML) methods in separating event types based on their expected quality in angular reconstruction [9]. Our approach begins at the Data Level 2 (DL2) stage, the product of a classical IACT analysis, which includes a classification score called _gammaness_ and a list of lower-level parameters describing individual telescope images
and stereo parameters (such as Hillas parameterization, reconstructed altitude of shower maximum, etc...).
An event type is a unique tag for each event in a DL2 table that classifies all of them in terms of their angular reconstruction quality. We use an ML regression model to predict the angular difference between the true and reconstructed directions (from now on, the predicted misdirection), so the division into event types reduces to establishing thresholds for the top X% reconstructed events (lowest predicted misdirection), the following Y%, etc., where the number of event types and their proportions can be freely chosen.
The event type partitioning methodology employed for this study is almost identical to the one described in the previous contribution [9] with the following differences:
* The MC simulated data used is diffuse (covering the full FoV of CTA telescopes) for gammas, protons and electrons.
* The regression model we use, a multilayer perceptron (MLP) neural network with \(\tanh\) as the neuron activation function, has been further optimized.
* The thresholds in predicted misdirection to divide the event types are now dependent on both energy and offset angle, instead of only energy.
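A schematic sketch of this partitioning step on toy data is shown below (not the production CTA pipeline). The feature set, the network size, the binning and the 20%/30%/50% split are illustrative assumptions; only the overall logic (regress the misdirection, then apply energy- and offset-dependent quantile thresholds) follows the description above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 20000
X = rng.normal(size=(n, 5))                                             # stand-in DL2 image/stereo parameters
log_misdir = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * rng.normal(size=n)   # toy log10 misdirection
reco_energy = 10 ** rng.uniform(np.log10(0.02), np.log10(300.0), n)     # TeV
reco_offset = rng.uniform(0.0, 6.0, n)                                  # deg

train = np.arange(n) < n // 2
test = ~train
model = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh", max_iter=200)
model.fit(X[train], log_misdir[train])
pred = model.predict(X[test])                                  # predicted misdirection (ranking only)

# Energy- and offset-dependent thresholds: top 20% / next 30% / remaining 50% per bin.
e_bins = np.logspace(np.log10(0.02), np.log10(300.0), 11)
o_bins = np.linspace(0.0, 6.0, 4)
ie = np.digitize(reco_energy[test], e_bins)
io = np.digitize(reco_offset[test], o_bins)
event_type = np.zeros(pred.size, dtype=int)
for key in set(zip(ie, io)):
    sel = (ie == key[0]) & (io == key[1])
    q20, q50 = np.quantile(pred[sel], [0.2, 0.5])
    event_type[sel] = np.where(pred[sel] <= q20, 0, np.where(pred[sel] <= q50, 1, 2))
```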
## 3 IRF production
The standard methodology to compute CTA IRFs [2, 4, 8] starts from the DL2 table. A re-weighting of the simulated events is needed so that they resemble the particle statistics expected from a CTA observation of a Crab-Nebula-like source (as a test case). To compute IRFs, a cut optimization is needed, generally maximizing sensitivity as a function of the reconstructed energy. Events surviving these quality cuts are the ones that will be used to compute the final set of IRFs. The cut optimization is usually performed over the following parameters: multiplicity (number of telescopes used in the reconstruction of an event), _gammaness_ and, in the case of a point-like source analysis, the angular size of the signal region (_ON region_). Once CTA data are produced, the list of events surviving the _gammaness_ and multiplicity cuts, together with their corresponding IRFs, forms the Data Level 3 (DL3) products.
With this procedure, the amount of data surviving quality cuts (and therefore actually used in the analysis) is small compared to the rejected data, while the latter could still be useful. Furthermore, as there is only one set of IRFs, applied equally to all events, all the extra knowledge we have from the low-level analysis is lost.
In an event-type based analysis, the event type partitioning (as explained in Section 2) occurs before optimizing the cuts and computing the IRFs. This allows to create a number of independent lists (as many as event types), each one with their corresponding set of IRFs describing their average quality.
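As a toy illustration of what a set of IRFs per event type means in practice, the snippet below evaluates one IRF component, the effective area, separately for each sub-sample; the estimator is the usual ratio of selected to simulated events times the simulated scatter area, per true-energy bin. All numbers are invented for the example; the production IRFs are computed with _pyirf_ as described next.

```python
import numpy as np

rng = np.random.default_rng(2)
a_sim = np.pi * 1.0e6                                          # assumed simulated scatter area [m^2]
e_bins = np.logspace(np.log10(0.02), np.log10(300.0), 21)      # TeV

e_true = 10 ** rng.uniform(np.log10(0.02), np.log10(300.0), 200000)   # all simulated gammas
survives = rng.random(e_true.size) < 0.3                               # passed quality cuts (toy)
event_type = rng.integers(0, 4, e_true.size)                           # 4 types, 25% each (toy)

n_sim, _ = np.histogram(e_true, bins=e_bins)
aeff = {}
for t in range(4):
    n_sel, _ = np.histogram(e_true[survives & (event_type == t)], bins=e_bins)
    aeff[t] = a_sim * n_sel / n_sim          # one effective-area curve per event type
```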
To compute the IRFs and store them in the proper format2 we used the library _pyirf_3. This library first needed to be tested and validated to produce offset-dependent and full-enclosure IRFs.
To validate it, we compared the resulting sensitivity and IRFs to the ones computed by _EventDisplay_[11] with the same MC data. The tests consisted in two steps:
1. Validate pyirf IRF computation. By using identical DL2 tables, we compared the computed IRF components by using exactly the same quality cuts as _EventDisplay_. The results were identical, and therefore the computation of all IRF components was validated.
2. Validate pyirf cut optimization. We performed two independent cut optimizations (with _pyirf_ and _EventDisplay_) by selecting the cuts that provide a better sensitivity in each energy bin, and compared resulting sensitivities. As shown in Fig. 1, they are not exactly the same but they agree to within 50% between 30 GeV and 100 TeV (also across different values of the FoV). The reason of the disagreement is not known, but is probably related to small differences in the cut selection methods (for example, _EventDisplay_ uses smaller bins for the direction cuts).
After performing these tests, we conclude that _pyirf_ is suited to our needs, as it allows us to compute both point-like and full-enclosure IRFs properly for all camera offset angles up to 6 \(deg\). Once the production of IRFs was validated, we produced various sets of event-type-wise IRFs, ready to be used with high-level science tools, in this case _Gammapy_.
## 4 Results
We evaluate the expected angular reconstruction quality of all events, rank them and eventually classify them into different event-type partitionings to then produce event-type-wise offset
Figure 1: _EventDisplay_ and _pyirf_ comparison of the resulting sensitivity for a Crab-like observation of 50 hours in the central FoV offset bin.
dependent IRFs for 50 hours of observing time for the "Alpha" layout of CTA-North (4 LSTs and 9 MSTs) [7].
By computing the angular resolution for the top 20% of events as ranked by our model, we show a 25 to 50% improvement in angular resolution with respect to the standard cut-optimization method (not using event types), as shown in Figure 2 (a sketch of this computation is given below). We also computed the angular resolution for the true top 20% of events, i.e. ranking by the actual difference between the reconstructed and the true simulated position of each event, which shows there is still room for improvement of our regression model.
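A minimal sketch of how the angular resolution of the ranked top-20% sample could be computed (68% containment radius of the angular separation per energy bin); the arrays `theta`, `reco_energy`, `predicted_misdirection` and `energy_bins` are assumed to be available from the DL2 table.

```python
import numpy as np

def angular_resolution(theta, reco_energy, energy_bins, containment=0.68):
    """68% containment radius of the angular separation theta (deg), per energy bin."""
    res = np.full(len(energy_bins) - 1, np.nan)
    for i, (lo, hi) in enumerate(zip(energy_bins[:-1], energy_bins[1:])):
        mask = (reco_energy >= lo) & (reco_energy < hi)
        if mask.any():
            res[i] = np.quantile(theta[mask], containment)
    return res

# Reconstructed top 20%: smallest predicted misdirection according to the model.
top20 = predicted_misdirection <= np.quantile(predicted_misdirection, 0.20)
res_all = angular_resolution(theta, reco_energy, energy_bins)
res_top20 = angular_resolution(theta[top20], reco_energy[top20], energy_bins)
```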
We can use these IRFs to perform either 1D (spectral evaluations of point-like sources) or 3D (spectral and morphological studies) simulations with _Gammapy_. Datasets are simulated from a set of IRFs: we are able to perform simulations for a single IRF set and for event-type-wise IRFs, treating them as independent samples that may be combined in a joint-likelihood analysis. By doing this with Crab-like source simulations over a wide range of fluxes, we can reconstruct the combined sensitivity from all event types, as shown in Figure 3, by identifying for each bin in reconstructed energy the simulated flux that provides a 5\(\sigma\) detection. Note that this method of computing sensitivity (for any set of observations or simulations at the _Gammapy_ level) does not include the requirements generally used in the calculation of sensitivity, such as requiring the excess to be larger than 5% of the background (to account for background systematics) or a minimum of 10 excess events (which heavily affects the sensitivity at the highest energies); this is the main reason for the disagreements at the lowest and highest energies with respect to the _pyirf_-estimated curve.
## 5 Conclusions
The conclusions of this work can be summarized by the following milestones:
1. Our ML regression model is able to predict the misdirection of each event and, therefore, can be used to separate event types. It should be noted there is still room for improvement.
Figure 2: Angular resolution for a 50 hours observation, comparison between the standard cuts case, the reconstructed top 20% events and the true top 20%. Repeated for different offset ranges.
2. Offset-dependence has been introduced and validated in the event-type partitioning process.
3. We are now able to produce consistently both point-like and full-enclosure event-type-wise IRFs over the full FoV, which allows high-level simulations with science tools such as _Gammapy_.
4. Event-type-wise IRFs show a significant improvement in angular resolution (25 to 50% over a subset of the events).
5. Preliminary _Gammapy_ analysis already shows that it is possible to combine observations from different event-type samples for better performance.
This work shows the great potential that an event-type based analysis could have for improving CTA's performance. A specific science case in fundamental physics with gamma-ray propagation [1] that could benefit from event types is the measurement of intergalactic magnetic fields, in which the size of the PSF is crucial. Another important example is the Galactic Plane Survey, where the improved angular resolution at large offset angles will allow sources to be separated and their extensions and morphologies to be determined better than ever in this energy range.
Figure 3: Preliminary sensitivity curve reconstructed with _Gammapy_ by doing a likelihood analysis with combined event types (4 types with 25% of the events each) and with no event types, compared to the standard sensitivity computed with _pyirf_. Note that the Gammapy-estimated sensitivity does not take into account any conditions on background systematics or a minimum number of excess events, which affect the highest and the lowest energies.
## Acknowledgements
This work was conducted in the context of the CTA Consortium and CTA Observatory. We gratefully acknowledge financial support from the agencies and organizations listed here: [http://www.cta-observatory.org/consortium_acknowledgments](http://www.cta-observatory.org/consortium_acknowledgments).
|
2309.15113 | Stable Bosonic Topological Edge Modes in the Presence of Many-Body
Interactions | Many magnetic materials are predicted to exhibit bosonic topological edge
modes in their excitation spectra, because of the nontrivial topology of their
magnon, triplon or other quasi-particle band structures. However, there is a
discrepancy between theory prediction and experimental observation, which
suggests some underlying mechanism that intrinsically suppresses the expected
experimental signatures, like the thermal Hall current. Many-body interactions
that are not accounted for in the non-interacting quasi-particle picture are
most often identified as the reason for the absence of the topological edge
modes. Here we report stable bosonic edge modes at the boundaries of a ladder
quantum paramagnet with gapped triplon excitations in the presence of the full
many-body interaction. For the first time, we use tensor network methods to
resolve topological edge modes in the time-dependent spin-spin correlations and
the dynamical structure factor, which is directly accessible experimentally. We
further show that these edge modes have anomalously long time coherence,
discuss the topological phase diagram of the model, demonstrate the
fractionalization of its low-lying excitations, and propose potential material
candidates. | Niclas Heinsdorf, Darshan G. Joshi, Hosho Katsura, Andreas P. Schnyder | 2023-09-26T17:59:04Z | http://arxiv.org/abs/2309.15113v1 | # Stable Bosonic Topological Edge Modes in the Presence of Many-Body Interactions
###### Abstract
Many magnetic materials are predicted to exhibit bosonic topological edge modes in their excitation spectra, because of the nontrivial topology of their magnon, triplon or other quasi-particle band structures. However, there is a discrepancy between theory prediction and experimental observation, which suggests some underlying mechanism that intrinsically suppresses the expected experimental signatures, like the thermal Hall current. Many-body interactions that are not accounted for in the non-interacting quasi-particle picture are most often identified as the reason for the absence of the topological edge modes. Here we report stable bosonic edge modes at the boundaries of a ladder quantum paramagnet with gapped triplon excitations in the presence of the full many-body interaction. For the first time, we use tensor network methods to resolve topological edge modes in the time-dependent spin-spin correlations and the dynamical structure factor, which is directly accessible experimentally. We further show that these edge modes have anomalously long time coherence, discuss the topological phase diagram of the model, demonstrate the fractionalization of its low-lying excitations, and propose potential material candidates.
The enormous success of the description of electronic topological phases of matter quickly inspired research that led to the generalization of the theory to bosonic quasi-particles like photons [1; 2; 3; 4; 5; 6], phonons [7; 8; 9; 10], magnons [11; 12; 13; 14; 15; 16; 17; 18; 19] or triplons [20; 21; 22]. Analogous to the Quantum or Spin Hall effect for electrons, the nontrivial topology of magnetic excitations contributes to unusual responses like the Thermal Hall or Spin Nernst effect in the form of bosonic edge currents.
These topologically protected edge spin waves are a key ingredient for many future spintronic applications, which are a promising alternative to conventional computing, but with a minimal ecological footprint[23]. There is a multitude of blueprints that make use of their properties to build next-generation devices like spin-wave diodes, beam splitters, interferometers and others [23; 24; 25; 26].
Moreover, systems that are useful for storing and processing quantum information are highly sought-after. Typically, the quantum information of a many-body state is destroyed rapidly. Overcoming fast decoherence of quantum states is one of the main challenges of quantum computing, and has driven extensive theoretical interest in finding ways to increase their coherence times using many-body localization [27; 28; 29; 30; 31], prethermalization [32; 33; 34], quantum many-body scarring [35; 36; 37] or the states' topological properties[38; 39; 40; 41]. The evolution of topological states over time - specifically in the presence of defects or nonlinearities[42; 43; 44] - as well as imaging and detection techniques[45; 46; 47] are thriving fields of research and crucial for bringing real-world applications on their way.
Even though a significant amount of material candidates has been proposed to host nontrivial excitations [11; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 149; 150; 151; 152; 153; 154], their detection remains elusive. While some experiments indeed show signatures of the predicted bosonic edge modes[55; 56; 57; 58; 59; 60], others have not been able to produce evidence of topological quasiparticles [61; 62], and for most of the proposed candidates a clear indication of their presence is yet to be discovered.
Figure 1: (a) Schematic of the spin model on a ladder geometry. On each rung of the ladder, two spin-1/2 sites are (strongly) coupled through antiferromagnetic Heisenberg interaction. Along the side rails, the spins interact (weakly) antiferromagnetically. In this limit the spins form singlet pairs along the rungs of the ladder. (b) The same model but with anti-symmetric (DM) and symmetric (pseudo-dipolar) exchange in the \(y\)-direction given by \(D_{y}\) and \(\Gamma_{y}\). These terms introduce a winding and open a topological gap in the excitation spectrum.
Several mechanisms have been proposed to explain why the expected effects of the topological edge modes fail to materialize. Among other things, they include thermal dampening, magnon-phonon coupling and domain formation, but what is most often identified as the source of the suppression of the thermal Hall effect are many-body effects. In contrast to their fermionic counterparts, even small many-body interactions [61, 63, 64] or exchange anisotropies that are typically present in realistic models [65] seem to substantially affect bosonic topological transport properties.
In this work we report bosonic topological edge modes that are stable in the presence of the full many-body interaction using Density Matrix Renormalization Group (DMRG) and time-evolution methods.
_Model and Harmonic Approximation._--We consider a quantum \(S=1/2\) Heisenberg model on a ladder with spin-orbit interaction and an external magnetic field, as shown schematically in Fig. 1. This model, first considered in Ref. [21], is given by the following Hamiltonian:
\[\hat{H} =\hat{H}_{\text{rung}}+\hat{H}_{\text{rail}}+\hat{H}_{\text{SOC}}+\hat{H}_{Z}, \tag{1}\] \[\hat{H}_{\text{rung}} =J\sum_{i}\hat{\mathbf{S}}_{li}\cdot\hat{\mathbf{S}}_{ri},\] (2) \[\hat{H}_{\text{rail}} =K\sum_{i}\left(\hat{\mathbf{S}}_{li}\cdot\hat{\mathbf{S}}_{li+1}+\hat{\mathbf{S}}_{ri}\cdot\hat{\mathbf{S}}_{ri+1}\right),\] (3) \[\hat{H}_{\text{SOC}} =D_{y}\sum_{\alpha=l,r}\sum_{i}\left(\hat{S}_{\alpha i}^{z}\hat{S}_{\alpha i+1}^{x}-\hat{S}_{\alpha i}^{x}\hat{S}_{\alpha i+1}^{z}\right)\] \[+\Gamma_{y}\sum_{\alpha=l,r}\sum_{i}\left(\hat{S}_{\alpha i}^{z}\hat{S}_{\alpha i+1}^{x}+\hat{S}_{\alpha i}^{x}\hat{S}_{\alpha i+1}^{z}\right),\] (4) \[\hat{H}_{Z} =h_{y}\sum_{i}\left(\hat{S}_{li}^{y}+\hat{S}_{ri}^{y}\right), \tag{5}\]
where \(\hat{\mathbf{S}}_{\alpha i}=\left(\hat{S}_{\alpha i}^{x},\hat{S}_{\alpha i}^{y}, \hat{S}_{\alpha i}^{z}\right)^{\intercal}\) are the usual spin operators with \(\alpha=l,r\) corresponding to the left and right rail of the ladder and \(i\) running over all rungs at positions \(r_{i}\).
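To make the model concrete, the sketch below (our own illustration, not the authors' DMRG code) builds the Hamiltonian of Eqs. (1)-(5) as a dense matrix for a small ladder using Kronecker products; this is only feasible for a few rungs, but is useful for cross-checking tensor-network results.

```python
import numpy as np

# Spin-1/2 operators
sx = np.array([[0, 0.5], [0.5, 0]])
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.array([[0.5, 0], [0, -0.5]])
id2 = np.eye(2)

def site_op(op, site, n_sites):
    """Embed a single-site operator at position `site` in the full Hilbert space."""
    mats = [id2] * n_sites
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ladder_hamiltonian(L, J=1.0, K=0.01, Dy=0.1, Gy=0.1, hy=0.0):
    """Dense Hamiltonian of Eqs. (1)-(5); sites ordered (l0, r0, l1, r1, ...)."""
    n = 2 * L
    S = lambda op, rail, rung: site_op(op, 2 * rung + rail, n)  # rail: 0 = l, 1 = r
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(L):
        # rung Heisenberg coupling
        H += J * sum(S(o, 0, i) @ S(o, 1, i) for o in (sx, sy, sz))
        if i + 1 < L:
            for rail in (0, 1):
                # rail Heisenberg coupling
                H += K * sum(S(o, rail, i) @ S(o, rail, i + 1) for o in (sx, sy, sz))
                # antisymmetric (DM) and symmetric (pseudo-dipolar) exchange along y
                H += Dy * (S(sz, rail, i) @ S(sx, rail, i + 1)
                           - S(sx, rail, i) @ S(sz, rail, i + 1))
                H += Gy * (S(sz, rail, i) @ S(sx, rail, i + 1)
                           + S(sx, rail, i) @ S(sz, rail, i + 1))
        # Zeeman term
        H += hy * (S(sy, 0, i) + S(sy, 1, i))
    return H

# Example: L = 3 rungs (6 spins, 64-dimensional Hilbert space)
eigvals = np.linalg.eigvalsh(ladder_hamiltonian(3))
```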
In the limit of strong anti-ferromagnetic rung coupling, this model is the prototypical example of a gapped quantum paramagnet [66, 67]. In that limit the ground state of the model is close to a product state of spin singlets
\[|\psi_{0}\rangle\sim\bigotimes_{i=1}^{L}(|\uparrow\downarrow\rangle-|\downarrow \uparrow\rangle)/\sqrt{2}, \tag{6}\]
with low-lying triplet excitations, the corresponding quasiparticles being triplons. In the presence of SU(2) symmetry (\(D_{y}=\Gamma_{y}=h_{y}=0\)), the triplons are degenerate, but can be split by applying an external magnetic field due to their different magnetic quantum number. Alternatively, the degeneracy can be lifted through spin-orbit interactions. Its anti-symmetric part - similar to the magnetic field term - splits the triplon modes while retaining U(1) symmetry. The pseudo-dipolar, or \(\Gamma\)-term, which is the symmetric part of the spin-orbit interaction breaks U(1)-symmetry.
In the paramagnetic phase of the model, an effective low-energy spectrum can be calculated by first rewriting the model in terms of triplons through bond-operator theory[68], and then discarding all resulting terms that are not bilinear, which has been shown to be a controlled approximation [69, 70]. Within this harmonic approximation, it was shown in Ref. [21] that the so-obtained excitation spectrum can be characterized by a topological winding number that classifies the topology of the magnet's bulk and enforces - if nontrivial - triplon modes at the ends of the ladder with fractional particle number.
In contrast to the case of topological magnons of ordered magnets [71, 72, 73, 74, 59] - where there is a gapless Goldstone mode - the triplon excitations in the quantum paramagnet are gapped. As a consequence of this bulk gap the triplons, including the topological edge modes, do not have any low-energy states to decay into. Therefore their lifetime is not reduced significantly even in the presence of interactions. By applying an external magnetic field or through spin-anisotropy exchanges, magnon systems can become gapped too. However, a bulk gap induced by these effects is typically much smaller than the rung
Figure 2: Real part of the spin-spin correlation function at time \(t=50/J\) in the (a) trivial and (b) the topological case with \(K=0.01J\) and field strengths \(h_{y}/D_{y}=0.5\) and \(h_{y}/D_{y}=1.5\) respectively on a ladder with \(L=12\) rungs. The topological auto-correlation at this time step has peaks at the boundaries, which are absent in the trivial case. (c) Time dependence of the boundary auto-correlation Re \(C_{11}^{xx}\) for \(K=0.01J\), \(D_{y}=\Gamma_{y}=0.1J\) and no magnetic field on a ladder with \(L=8\) rungs for different boundary conditions. For PBC (no in-gap mode), it decays rapidly whereas for OBC (in-gap mode) it does not decay. (d) The decay envelopes of (c) for different values of magnetic field. For finite field strengths the correlator decays also for OBC, but more slowly than for PBC.
coupling \(J\), which is the relevant spin exchange energy scale that determines the size of the gap in the ladder system.
The spin model presented herein is suitable to describe the magnetism of potentially many materials and belongs to the well studied class of two-leg ladder compounds. Examples of spin ladder materials are BiCu\({}_{2}\)PO\({}_{6}\)[76], NaV\({}_{2}\)O\({}_{5}\)[77; 78] or multiple cuprate ladders[79; 80; 81; 82]. The recipe for finding topological edge modes in these systems requires mainly two ingredients: (i) strong antiferromagnetical rung coupling (stronger than along the rails), and (ii) spin-orbit interaction. Because the Heisenberg couplings along the rungs and rails depend on the overlap of the localized electrons, their ratio can easily be tuned using e.g. strain. The same is true for the relative spin-orbit interaction strength[83; 84; 85; 86], however, the \(D\) and \(\Gamma\) term are additionally restricted by symmetries. These symmetries can be broken, if not already broken internally, by doping, gate voltages or structural manipulations like skewing and stretching. The abundance of experimental handles suggests that a topological phase transition might easily be switchable, which again expands the potential applications of these systems in future devices.
_Tensor Networks and Dynamical Response._--To relate our model directly to an experimentally measurable quantity, we compute the dynamic structure factor (DSF), which is most directly accessed through inelastic neutron scattering[87; 88] or inverse spin Hall noise spectroscopy (ISHNS)[89]. Because we are investigating the effect of interactions, and since our model is (quasi) one-dimensional, we use the DRMG algorithm to obtain the exact many-body ground state of the finite spin ladder[90].
In contrast to electronic systems, the band topology of bosonic quasiparticles is not a ground state property, so in order to access the low-lying excitations, we apply time-evolution methods[91; 92; 93; 94; 95; 96] to compute the odd-part of the time-dependent spin-spin correlation
\[C^{\gamma\gamma^{\prime}}_{ij}(t)=\langle\psi_{0}|\hat{\hat{S}}^{\gamma}_{i} \hat{U}(t)\hat{\hat{S}}^{\gamma^{\prime}}_{j}|\psi_{0}\rangle, \tag{7}\]
with \(\hat{\hat{S}}^{\gamma}_{i}=\hat{S}^{\gamma}_{li}-\hat{S}^{\gamma}_{ri}\) and the unitary time-evolution operator \(\hat{U}(t)\). Already the real-space and -time correlation function shows signatures of edge modes in the topologically nontrivial phase region (\(h_{y}<|D|\)). In Fig. 2 (a) and (b) the real part of \(C^{\gamma\gamma^{\prime}}_{ij}(t)\) for the topological and trivial case at a fixed time step are plotted. The former shows strong correlations that are pinned to the edges of the system, whereas they are absent in the latter.
In Fig. 2 (c), we plot the time dependence of Re \(C^{xx}_{11}\) at zero magnetic field for periodic (PBC) and open boundary conditions (OBC). For PBC there is no topologically protected edge mode at \(i=j=1\), and the excitation decays over time rapidly. In the open system the edge mode has strongly enhanced time coherence, and we cannot resolve any decay during the time it takes the excitation to hit the boundary at the other end of the ladder. Fig. 2 (d) shows the decay envelope of the excitation for increasing magnetic field for periodic (dashed line) and open (solid line) boundaries. For finite magnetic fields the boundary correlation starts to decay, and for stronger fields the difference in lifetime for PBC and OBC - as indicated by the shaded area - becomes less pronounced.
The DSF, which encodes the dynamical susceptibility of the paramagnet is defined as the Fourier transform of the spin-spin correlations in space and time
\[S^{\gamma\gamma^{\prime}}_{k}(\omega)=\frac{1}{2\pi L}\int_{- \infty}^{\infty}dt\ e^{i(\omega-\omega_{0})t}\sum_{i,j}e^{i(r_{j}-r_{i})k}C^{ \gamma\gamma^{\prime}}_{ij}(t). \tag{8}\]
It is a positive and real quantity and can be calculated using DMRG and time-evolution methods. Usually, the system size and parameters of the time-evolution are chosen to minimize finite size effects, and the state is evolved only for as long as the excitation does not reach the boundary of the system. In addition, by imposing translational invariance, one of the spatial degrees of freedom of \(C^{\gamma\gamma^{\prime}}_{ij}(t)\) can be fixed [97; 98; 99; 100]. Because we are explicitly studying the excitations in the presence of a boundary, we require the "full" real-space correlation function, as well as a "long-enough" time-evolution. \(S^{\gamma\gamma^{\prime}}_{k}(\omega)\) obeys sum rules[99] that we track to diagnose the severity of
Figure 3: \(S_{k}(\omega)\) for the (a) topological and (b) trivial quantum paramagnet with \(h_{y}/D_{y}=1/2\) and \(h_{y}/D_{y}=3/2\) respectively on a ladder with \(L=12\) rungs. The dashed line shows the spectrum of the effective low-energy models from Ref. [21]. (c) \(S_{3\pi/2}(\omega)\) for different values of \(h_{y}/D_{y}\). The first two curves lie in the topological phase region (see phase diagram at the bottom) with the in-gap modes marked by dashed lines. At \(h_{y}/D_{y}=1\) the gap is closed. The last curve lies in the trivial region and is gapped, with no mode in-between.
finite-time and -size effects, which lead to numerical artifacts (see supplementary material[101]).
We study the field-susceptible part of Eq. (8), given by \(S_{k}(\omega)=S_{k}^{xx}(\omega)+S_{k}^{zz}(\omega)\), which we simply refer to as the DSF from now on; a post-processing sketch of Eq. (8) is given below.
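As an illustration (our own sketch, with assumed array shapes rather than the authors' implementation), Eq. (8) can be evaluated from the stored real-space, real-time correlator by discrete Fourier transforms in space and time; in practice a window function in time is usually applied to reduce spectral leakage from the finite evolution window.

```python
import numpy as np

def dynamical_structure_factor(C, times, positions, k_vals, omega_vals, omega0=0.0):
    """Evaluate Eq. (8) on a finite ladder.

    C         : complex array of shape (L, L, T), the correlator C_ij(t)
    times     : array of shape (T,), the time grid
    positions : array of shape (L,), the rung positions r_i
    """
    L = len(positions)
    dt = times[1] - times[0]
    S = np.zeros((len(k_vals), len(omega_vals)), dtype=complex)
    for a, k in enumerate(k_vals):
        # spatial Fourier transform: sum_{i,j} exp(i k (r_j - r_i)) C_ij(t)
        phase = np.exp(1j * k * (positions[None, :] - positions[:, None]))
        Ck = np.einsum("ij,ijt->t", phase, C)
        for b, w in enumerate(omega_vals):
            # temporal Fourier transform over the finite simulated window
            S[a, b] = np.sum(np.exp(1j * (w - omega0) * times) * Ck) * dt
    return S / (2 * np.pi * L)
```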
_Results._--In Fig. 3 (a) and (b) we plot the DSF of the topological and trivial paramagnet along with the triplon bands obtained within the harmonic approximation in Ref. [21]. For small rail and spin-orbit coupling the harmonic approximation is accurate, because the density of triplons is small leading to only minor renormalizations from triplon-triplon interactions. In the topologically nontrivial case (small values of magnetic field) an additional mode appears between the two bulk bands that we hereafter refer to as _in-gap mode_. It is absent in the trivial phase (large values of magnetic field). Fig. 3 (c) shows the topological phase transition at \(k=3\pi/2\) with the external magnetic field \(h_{y}\) as a tuning parameter. At \(h_{y}=|D_{y}|\) the gap closes, and further increasing the magnetic field it reopens, but with no topological in-gap mode. We did finite size scaling with ladders of up to \(L=24\) rungs to confirm that the in-gap mode retains finite spectral weight.
To confirm that the in-gap mode is really localized at the boundaries of the system, we first calculate the local density of states of the lower band
\[\rho_{i}=\sum_{k}e^{-ir_{i}k}\int_{0}^{J+\delta}d\omega\,S_{k}(\omega), \tag{9}\]
where \(\delta\) is a small enough value such that the spectral weight of any boundary mode, but not that of the upper triplon band is included in the integral. We then define the "topological density of states" as the difference of the local densities of states of the nontrivial paramagnet for open and periodic boundary conditions [21]
\[\rho_{i}^{\rm top}=\rho_{i}^{\rm OBC}-\rho_{i}^{\rm PBC}. \tag{10}\]
This quantity isolates the contribution to the spectrum that stems from introducing a boundary and is shown Fig. 4 (a) for different values of magnetic field. We see two peaks located at the boundary of the ladder, that start to spread into the bulk for finite fields. To compute the associated particle number at one of the edges we integrate \(\rho_{i}^{\rm top}\) for the system's left termination
\[n_{t}=\int_{1}^{L/2}dr\rho_{i}^{\rm top}, \tag{11}\]
and find that the edge mode has fractional particle number. Our numerical result \(n_{t}\sim 0.43\) deviates slightly from the predicted value of \(0.5\)[21] due to spectral leakage caused by the aforementioned finite-size and time effects. However we expect that in the thermodynamic limit this value would indeed tend to \(0.5\). For larger values of field \(n_{t}\) becomes smaller and vanishes at the phase transition at \(h_{y}/D_{y}=1\).
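For completeness, a short numerical sketch of Eqs. (9)-(11) is given below (our own illustration; the arrays `S_obc` and `S_pbc` holding the DSF for open and periodic boundaries, the momentum and frequency grids `k_vals` and `w_vals`, the rung positions `r_i`, and the values `J` and `delta` are assumed to exist).

```python
import numpy as np

def local_dos_lower_band(S, k_vals, omega_vals, positions, omega_max):
    """Eq. (9): rho_i from the DSF S[k, omega], integrated up to omega_max."""
    domega = omega_vals[1] - omega_vals[0]
    band = omega_vals <= omega_max
    weight = np.sum(S[:, band].real, axis=1) * domega     # integral over omega, per k
    phases = np.exp(-1j * np.outer(positions, k_vals))    # e^{-i r_i k}
    return (phases @ weight).real

# Eq. (10): difference between open and periodic boundary conditions
rho_top = local_dos_lower_band(S_obc, k_vals, w_vals, r_i, J + delta) \
        - local_dos_lower_band(S_pbc, k_vals, w_vals, r_i, J + delta)

# Eq. (11): particle number of the left edge mode (discrete version of the integral)
n_t = np.sum(rho_top[: len(r_i) // 2])
```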
The noninteracting triplon winding number is a topological band property[21], so the edge modes are naively expected to be stable only as long as there is a notion of "bands", or in other words as long as triplons provide a suitable quasi-particle description of the ladder's excitation spectrum. We consider three limits of the model's parameter space: (i) large magnetic field \(h_{y}\), (ii) large rail coupling \(K\) and (iii) large spin-orbit interaction \(D_{y}\) and \(\Gamma_{y}\). In case (i), the field-polarized phase, the lower triplon mode condenses at \(h_{y}\sim J\) and the system becomes ferromagnetic with magnons as its low-lying excitations. The band structures derived from linear spin wave theory are given in the supplemental material[101] and are topologically trivial. Case (ii) is the limit of two weakly coupled anti-ferromagnetic Heisenberg chains. The system does not order, even for strong rail coupling (\(K\sim 2J\)). The antiferromagnetic order parameter is finite also for small values of \(K\), but approaches zero as the system size \(L\) is increased, as confirmed by finite size scaling [101]. Case (iii) is the most relevant benchmark of the edge modes' stability that we can provide with the available handles in the model, because here the higher-order terms of the interacting triplon Hamiltonian become important[21]. In this limit there is significant overlap with the two- and three-triplon continuum, which leads to damping and decay, and the quasi-particle description within the harmonic approximation breaks down. In contrast to case (i), increasing \(D_{y}\) and \(\Gamma_{y}\) does not lead to condensation of the lower mode, and we did not find any evidence for a phase transition up to values of \(D_{y}=\Gamma_{y}=2.6J\). In Fig. 4 (b) we plot the logarithm of the DSF for \(D_{y}=\Gamma_{y}=1.2J\). Even though the upper triplon band is strongly damped and split into many energy levels, we find that the topological edge modes remain stable.
Figure 4: (a) Topological density of states \(\rho_{i}^{\rm top}\) for \(K=0.01J\), \(D_{y}=\Gamma_{y}=0.1J\) and different values of \(h_{y}\) on a ladder with \(L=8\) rungs. The in-gap mode is localized at the two boundaries of the system with fractional particle numbers at each termination. For finite magnetic field the in-gap mode spreads into the bulk and vanishes at the topological phase transition at \(h_{y}/D_{y}=1\). (b) The logarithm of the DSF for \(K=0.01J\), \(h_{y}=0.2\) and strong SOC \(D_{y}=\Gamma_{y}=1.2J\) on a ladder with \(L=12\) rungs. The quasi-particle peaks are dampened and split. The boundary mode is flat and marked by text.
_Conclusion_--We have demonstrated that, contrary to what the absence of experimental evidence for many predicted materials might suggest, topological bosonic edge modes can be stable in the presence of the full many-body interaction. Even though correlations have been identified as the reason for the absence of topological responses in some cases, our work clearly shows that they do not generically suppress bosonic edge modes, and it prompts the question of what other mechanism is responsible. A natural extension of this work is to apply our method to two-dimensional systems wrapped onto a cylinder. Multi-rail ladders[102; 103], a ferromagnet on a Lieb lattice[104] or the Shastry-Sutherland model[105] are obvious candidates, but also topological magnon systems that are potentially more fragile due to their larger decay phase space.
The absence of signatures of topological edge modes even in systems with good agreement between theory and experiment for bulk properties suggests that the responsible decay channel might be a surface effect, i.e., it lies at the boundary. In a quasi two-dimensional model the edge modes are not completely localized, but propagate along the boundary of the system. This setup would further allow the implementation of a defect at the boundary and to investigate the stability of the topological bosons in its presence.
The typical "workflow" for finding topological edge modes usually consists of classifying the topology of a translationally invariant model, and then infering their existence by bulk-boundary correspondence. Even if the symmetry that protects the band topology is broken weakly, the edge states are still expected to be present as long as the the perturbation is smaller than the gap they live in (these approximate symmetries are sometimes called quasi-symmetries[106; 107]). The chiral symmetry that allows for the definition of a winding number in the noninteracting case is broken by the rail coupling \(K\)[21], and is also clearly broken in the limit of strong SOC, nevertheless our numerical study shows that the edge modes persist, which invokes the question of a more suitable bulk classification.
In recent years, considerable effort has been invested in the extension of topological classifications to interacting systems [108; 109; 110; 111; 112; 113]. These schemes account for the many-body nature of the problem using either a matrix product state or Green's function representation, but are not applicable to the classification of bosonic excitation spectra. Schemes for detecting phase transitions using entanglement have been put forward, but they are - even though directly related to the dynamical response of a spin system - not sensitive to topological phase transitions[114]. It is essential to extend existing methods, or find new ones, that are able to capture the topological properties of e.g. dynamical spin responses, and we hope our work inspires fruitful research in that direction.
###### Acknowledgements.
We thank A. Nocera and S. M. Winter for helpful discussions. NH acknowledges financial support from the Max Planck Institute for Solid State Research in Stuttgart, Germany and the DAAD JSPS summer program 2022. DGJ acknowledges support from the Department of Atomic Energy, Government of India, under Project Identification No. RTI 4007. A.P.S. and N.H. acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - TRR 360 - 492547816. H.K. was supported by JSPS KAKENHI Grant No. JP23H01093 and MEXT KAKENHI Grant-in-Aid for Transformative Research Areas A "Extreme Universe" (KAKENHI Grant No. JP21H05191).
|
2309.13562 | Keeping in Time: Adding Temporal Context to Sentiment Analysis Models | This paper presents a state-of-the-art solution to the LongEval CLEF 2023 Lab
Task 2: LongEval-Classification. The goal of this task is to improve and
preserve the performance of sentiment analysis models across shorter and longer
time periods. Our framework feeds date-prefixed textual inputs to a pre-trained
language model, where the timestamp is included in the text. We show
date-prefixed samples better conditions model outputs on the temporal context
of the respective texts. Moreover, we further boost performance by performing
self-labeling on unlabeled data to train a student model. We augment the
self-labeling process using a novel augmentation strategy leveraging the
date-prefixed formatting of our samples. We demonstrate concrete performance
gains on the LongEval-Classification evaluation set over non-augmented
self-labeling. Our framework achieves a 2nd place ranking with an overall score
of 0.6923 and reports the best Relative Performance Drop (RPD) of -0.0656 over
the short evaluation set. | Dean Ninalga | 2023-09-24T06:38:21Z | http://arxiv.org/abs/2309.13562v1 | # Keeping in Time: Adding Temporal Context to Sentiment Analysis Models
###### Abstract
This paper presents a state-of-the-art solution to the LongEval CLEF 2023 Lab Task 2: _LongEval-Classification_[1]. The goal of this task is to improve and preserve the performance of sentiment analysis models across shorter and longer time periods. Our framework feeds _date-prefixed_ textual inputs to a pre-trained language model, where the timestamp is included in the text. We show _date-prefixed_ samples better conditions model outputs on the temporal context of the respective texts. Moreover, we further boost performance by performing self-labeling on unlabeled data to train a student model. We augment the self-labeling process using a novel augmentation strategy leveraging the _date-prefixed_ formatting of our samples. We demonstrate concrete performance gains on the LongEval-Classification [1] evaluation set over non-augmented self-labeling. Our framework achieves a 2nd place ranking with an overall score of 0.6923 and reports the best _Relative Performance Drop_ (RPD) [2] of -0.0656 over the short evaluation set (see Alkhalifa et al. [3]).
Keywords: Self-Labeling, Sentiment Analysis, Temporal Misalignment, Date-Prefixing
## 1 Introduction
The application of language models such as BERT [4], RoBERTa [5] and XLM-RoBERTa [6] to textual data is a core component in many natural language processing (NLP) pipelines. However, a notable limitation of most language models is their lack of temporal awareness, as they typically encode text into fixed representations. The nature of textual data, by contrast, is inherently dynamic and subject to change over time: the traditional meanings of words, phrases, and concepts are constantly evolving [7, 8]. Furthermore, significant events can alter the factual basis of the text [9]. Although the metadata of well-known text corpora includes timestamps, timestamps are almost never used within NLP pipelines. A sentiment analysis model trained today could interpret the phrase "You are just like X" as a positive sentiment. An issue arises, however, once people start to consider a comparison to 'X' as a non-positive one: the model becomes _misaligned_ if this flip in public opinion occurs. Hence, it can be difficult to train models that generalize to future data without a sense of temporal context and awareness [10].
Mitigating _temporal misalignment_[11] between the facts and general sentiments of the current world and those found in text corpora is an active area of research in NLP. In particular, work in NER (named-entity recognition) [12, 9, 13] and question answering [14, 15, 12, 16] often directly addresses temporal misalignment, as these are considered _knowledge-intensive_ tasks [10].
A common and straightforward way to address temporal misalignment in textual data is to create new models (or update old ones) with the most recent data available [17, 18, 10]. However, continually growing datasets incur an increase in computational costs for data acquisition and training models which also contributes to an ever-increasing environmental cost [19, 20]. Therefore, finding a solution outside of continuous retraining that preserves model performance over time is desirable.
In this paper, we follow Dhingra et al. [7] who use an alternative approach that modifies the textual input with its timestamp. Thus, we can take advantage of text-only pre-trained language models used for classification in addition to conditioning the models with the temporal context for the input.
We will outline our system, which is aligned with some of the recent works in NER and temporal misalignment, and evaluate it on the _LongEval-Classification_ benchmark [1].
Our contribution is two-fold: (1) We show that date-prefixing the input text with its timestamp conditions the outputs of a language model on the temporal context of the input. (2) We utilize an augmentation strategy that leverages the date-prefixing by randomly modifying the timestamp of unlabeled inputs. We show that this augmentation strategy improves the performance benefits of semi-supervised learning on unlabeled data.
## 2 Background and Related Work
Recently, _TempLama_[7] showed that directly placing the year of the timestamp as a prefix in the text performs well in the context of named-entity recognition. They then feed the date-prefixed inputs to a T5 [21] model to directly model the temporal context. Cao and Wang [22] directly compare a date-prefixing approach to an embedding approach where the date is numerically embedded with a linear projection. In the context of text generation, [22] found that the linear projection was less sensitive to the timestamps, while date-prefixing is better at generating more temporally sensitive facts.
Self-labeling (or self-distillation) is a semi-supervised learning strategy that typically involves learning from pseudo-labels for unlabeled data. Self-labeling is demonstrated to add performance gains across a variety of domains including text classification [23]. Agarwal and Nenkova [9] found that self-labeling performs better than specialized pre-training objectives such as domain-adaptive pretraining [24] across several tasks including sentiment analysis. However, it is important to note that recently Ushio et al. [25] have shown that self-labeling, as presented in [9], is not as effective for NER when compared to models trained for specific time periods.
## 3 Methodology
Figure 1 provides an overview of our system. Following Agarwal and Nenkova [9], we first train a teacher model on the full labeled dataset to create pseudo-labels for the unlabeled data. During this training phase, every sample in the labeled dataset is date-prefixed, meaning that the year of the timestamp is included as part of the input text. We use a novel augmentation strategy on the date prefixes (see Section 3.3) to condition the pseudo-labels on the temporal context learned by the teacher. A new student model is then trained for 22000 training steps on the generated pseudo-labels and is subsequently trained on the original labeled data that was used for the teacher. Finally, we use the resulting student model for inference. For simplicity, both the teacher and student models share the same architecture. We provide further detail on the individual components of our system in the following sections; a schematic sketch of the pipeline is given below.
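The following schematic sketch is our own summary of the training loop; `train`, `predict_labels`, `prefix_with_year` and `augment_year` are hypothetical helper functions, not library calls or the authors' implementation.

```python
# Schematic teacher-student self-labeling pipeline (hypothetical helpers).
teacher = train(base="bernice",
                texts=[prefix_with_year(x.text, x.timestamp) for x in labeled],
                labels=[x.label for x in labeled])

# Pseudo-label the unlabeled pool; the year in each prefix is randomly replaced
# (Section 3.3) so pseudo-labels reflect the teacher's learned temporal context.
aug_texts = [augment_year(prefix_with_year(x.text, x.timestamp)) for x in unlabeled]
pseudo = predict_labels(teacher, aug_texts)

# Train a student on the pseudo-labels, then fine-tune it on the original labels;
# the student is used for inference on the test set.
student = train(base="bernice", texts=aug_texts, labels=pseudo)
student = train(base=student,
                texts=[prefix_with_year(x.text, x.timestamp) for x in labeled],
                labels=[x.label for x in labeled])
```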
### Pre-Trained Model
Using a pre-trained language model is generally much better than training a new model from scratch. However, it is not always clear which pre-training works best for any particular task. Here we use Bernice [26], a variant of XLM-RoBERTa [6] specialized for Twitter data. We train a single model for inference on the test set and do not rely on ensembling techniques. We train using the cross-entropy classification loss.
### Date-Prefixing
Consistent with Dhingra et al. [7], we prefix each input text with the year of the given timestamp, followed by the text itself (e.g. "year: 2023 text: I really do enjoy drinks with friends"); a minimal sketch of this formatting is shown below. As we observe from Table 1, training on such data conditions the model outputs on the temporal context found in the data through the date prefix. Table 1 provides real input and output examples based on a trained model across various years. We do not modify the architecture of the language model to take the timestamp as a vector input. By maintaining text-only input we are able to leverage any existing pre-trained model with text-embedding-only input.
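A minimal sketch of the prefix formatting (the helper name and timestamp format are our own placeholders):

```python
from datetime import datetime

def prefix_with_year(text: str, timestamp: str) -> str:
    """Format a sample as 'year: YYYY text: ...' following the date-prefix scheme."""
    year = datetime.fromisoformat(timestamp).year
    return f"year: {year} text: {text}"

print(prefix_with_year("I really do enjoy drinks with friends", "2023-05-17"))
# -> "year: 2023 text: I really do enjoy drinks with friends"
```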
Figure 1: **Method Overview:** (top row) summary of the stages of our semi-supervised learning training pipeline; (bottom row) modifications we made to the pipeline and the stage at which each applies.
### Date-Prefix Augmentation
When creating pseudo-labels to train a student model, we use an augmentation strategy that takes advantage of our date-prefixing. Namely, given an unlabeled sample and its timestamp, we randomly replace the year in the timestamp with a year between 2013 and 2021, where 2013 and 2021 are the earliest and latest years found in the labeled datasets, respectively. We perform an ablation experiment (see Section 4) demonstrating that this augmentation strategy outperforms non-augmented self-labeling on the evaluation set; a minimal sketch of the augmentation is shown below.
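A minimal sketch of this augmentation applied to already prefixed samples (the regular-expression implementation is our own assumption):

```python
import random
import re

def augment_year(prefixed_text: str, low: int = 2013, high: int = 2021) -> str:
    """Replace the year in a 'year: YYYY text: ...' sample with a random year."""
    new_year = random.randint(low, high)
    return re.sub(r"^year: \d{4}", f"year: {new_year}", prefixed_text)

print(augment_year("year: 2018 text: I really do enjoy being single"))
# e.g. -> "year: 2014 text: I really do enjoy being single"
```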
### Training and Evaluation
For inference on the test set we use a single model trained on both the training and development sets for two epochs. Model parameters are optimized with the Adam optimizer [27], using a constant learning rate of 1e-5 and the binary cross-entropy loss. Performance is measured using the macro-averaged F1 score on the future samples.
## 4 Experiments
### Experimental Setup
In this section, we compare the performance of models trained with and without the proposed augmentation strategy for pseudo-label generation. Namely, we use a trained teacher model to generate labels with and without date-prefix augmentation. Subsequently, we train a student model on each of the two sets of pseudo-labels for 6000 training steps. Finally, we compare the downstream performance of the two models.
Models are only provided labels for the training set and are trained until saturation on the interim evaluation set. For our experiments, we report the macro-averaged F1 scores for each subset of the evaluation set. We also report the Relative Performance Drop (RPD) [2] to compare model performance across short- and long-term time differences.
\[\text{RPD}=\frac{f_{t_{j}}^{score}-f_{t_{0}}^{score}}{f_{t_{0}}^{score}} \tag{1}\]
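For concreteness, Eq. (1) as a small helper (our own sketch):

```python
def relative_performance_drop(score_t0: float, score_tj: float) -> float:
    """RPD of Eq. (1): relative change of the score at time t_j versus time t_0."""
    return (score_tj - score_t0) / score_t0

# Example: within-period F1 = 0.70, long-period F1 = 0.65 -> RPD of about -0.0714
print(relative_performance_drop(0.70, 0.65))
```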
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Example input & Output & Label & Original Year & Prefix Year \\ \hline “year: 2013 text: I really do enjoy being single” & \(0.503\) & positive & 2018 & 2017 \\ “year: 2018 text: I really do enjoy being single” & \(0.510\) & positive & 2018 & 2018 \\ “year: 2023 text: I really do enjoy being single” & \(0.495\) & negative & 2018 & 2023 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Date Prompt Conditioning:** A demonstration of the date-prompting and subsequent model outputs conditioned on the prefix year. The model output is between 0 and 1, where the input is considered positive only if the output is above 0.5. The example input text is taken from the _LongEval-Classification_ dataset [1].
### Results
We report the evaluation results of our experiments in Table 2. Indeed, we see an overall improvement in performance, especially on the 'short' evaluation set, when using our full framework. Additionally, the model using date-prefix augmentation gives by far the best RPD of \(-0.0532\) with respect to the 'within' and 'short' evaluation sets. Note that the non-augmented model gives the best RPD of \(-0.0411\) with respect to the 'within' and 'long' evaluation sets. However, when fine-tuning this same model on the gold labels, the RPD more than doubles to \(-0.0852\) and is much worse than our full framework with \(-0.0681\). A similar drop in performance can be seen when observing the F1 score on the 'long' evaluation set. It appears that fine-tuning the non-augmented model with clean data incurs a significant drop in performance. However, it is clear that our proposed augmentation strategy can leverage the older labeled data and attain significant performance gains.
## 5 Conclusion
In this paper, we introduce a competitive framework for preserving the performance of sentiment analysis models across temporal periods. We promote date-prefixing as a straightforward way to condition the output of pre-trained language models on the temporal context of the input text. Furthermore, we build on the self-labeling framework developed by Agarwal and Nenkova [9]: given our date-prefix formatting, we can generate pseudo-labels conditioned on the temporal context of the input text. We verify the performance gains of our proposed system against self-labeling without our augmentation strategy in ablation experiments. Altogether, our system yields a competitive overall score and attains the best RPD on the short evaluation set [3].
|
2309.13204 | Challenges in Quasinormal Mode Extraction: Perspectives from Numerical
solutions to the Teukolsky Equation | The intricacies of black hole ringdown analysis are amplified by the absence
of a complete set of orthogonal basis functions for quasinormal modes. Although
damped sinusoids effectively fit the ringdown signals from binary black hole
mergers, the risk of overfitting remains, due to initial transients and
nonlinear effects. In light of this challenge, we introduce two methods for
extracting quasinormal modes in numerical simulations and qualitatively study
how the transient might affect quasinormal mode fitting. In one method, we
accurately fit quasinormal modes by using their spatial functional form at
constant time hypersurfaces, while in the other method, we exploit both spatial
and temporal aspects of the quasinormal modes. Both fitting methods leverage
the spatial behavior of quasinormal eigenfunctions to enhance accuracy,
outperforming conventional time-only fitting techniques at null infinity. We
also show that we can construct an inner product for which the quasinormal
eigenfunctions form an orthonormal (but not complete) set. We then conduct
numerical experiments involving linearly perturbed Kerr black holes in horizon
penetrating, hyperboloidally compactified coordinates, as this setup enables a
more precise isolation and examination of the ringdown phenomenon. From
solutions to the Teukolsky equation, describing scattering of an ingoing
gravitational wave pulse, we find that the contributions from early-time
transients can lead to large uncertainties in the fit to the amplitudes of
higher overtones ($n\geq 3$). While the methods we discuss here cannot be
applied directly to data from merger observations, our findings underscore the
persistence of ambiguities in interpreting ringdown signals, even with access
to both temporal and spatial information. | Hengrui Zhu, Justin L. Ripley, Alejandro Cárdenas-Avendaño, Frans Pretorius | 2023-09-22T22:57:25Z | http://arxiv.org/abs/2309.13204v3 | Challenges in Quasinormal Mode Extraction: Perspectives from Numerical solutions to the Teukolsky Equation
###### Abstract
The intricacies of black hole ringdown analysis are amplified by the absence of a complete set of orthogonal basis functions for quasinormal modes. Although damped sinusoids effectively fit the ringdown signals from binary black hole mergers, the risk of overfitting remains, due to initial transients and nonlinear effects. In light of this challenge, we introduce two methods for extracting quasinormal modes in numerical simulations and qualitatively study how the transient might affect quasinormal mode fitting. In one method, we accurately fit quasinormal modes by using their spatial functional form at constant time hypersurfaces, while in the other method, we exploit both spatial and temporal aspects of the quasinormal modes. Both fitting methods leverage the spatial behavior of quasinormal eigenfunctions to enhance accuracy, outperforming conventional time-only fitting techniques at null infinity. We also show that we can construct an inner product for which the quasinormal eigenfunctions form an orthonormal (but not complete) set. We then conduct numerical experiments involving linearly perturbed Kerr black holes in horizon penetrating, hyperboloidally compactified coordinates, as this setup enables a more precise isolation and examination of the ringdown phenomenon. From solutions to the Teukolsky equation, describing scattering of an ingoing gravitational wave pulse, we find that the contributions from early-time transients can lead to large uncertainties in the fit to the amplitudes of higher overtones (\(n\geq 3\)). While the methods we discuss here cannot be applied directly to data from merger observations, our findings underscore the persistence of ambiguities in interpreting ringdown signals, even with access to both temporal and spatial information.
## I Introduction
According to general relativity, when two black holes merge they form a highly distorted black hole that then rings down to a stationary Kerr black hole. The gravitational waves emitted during the ringdown are thought to be well described by linear black hole perturbation theory (_linear theory_ for short), as governed by the spin-\(\pm 2\) Teukolsky equation [1; 2]. The Teukolsky equation is a separable partial differential equation for a single complex scalar, the real and imaginary parts of which describe the two physical gravitational wave polarizations.
The Teukolsky equation (with physical boundary conditions) does not have mode solutions; instead it has exponentially decaying _quasinormal mode_ (QNM) solutions [3; 4]. It is thought that shortly after merger the ringdown signal is well described by a superposition of QNMs [5; 6; 7], and fits to numerical relativity results seem to confirm this expectation (e.g., [8; 9; 10; 11; 12]). At much later times (\(\mathcal{O}(100M)\) after coalescence, where \(M\) is the mass of the remnant), the signal is expected to transition to decay as a power law. This is the so-called "tail" part of the waveform [13; 14; 15; 16]. An interesting property of the frequencies of the QNMs is that they are uniquely determined by the remnant black hole's mass and spin. The reverse process of estimating the mass and spin of the remnant black hole from the QNM spectrum of the ringdown is known as _black hole spectroscopy_[17; 18; 19; 20; 21; 22].
To maximize the science that can be extracted from an observed ringdown--whether for measuring properties of the merger, or for testing general relativity--one needs a prediction for what the excitation amplitude of each QNM is for a given merger. At present, computing these excitation amplitudes is an open problem for a remnant formed in a merger of black holes with comparable mass (though some information can be gleaned from properties of the Green's function for Kerr perturbations [23], or using linear theory in the extreme mass ratio limit [24; 25; 26]). In lieu of such calculations, one can attempt to measure the excitation amplitudes directly from numerical relativity solutions of merger events. At present, such approaches typically assume the ringdown can be entirely described by a sum of linear QNMs, and attempt to find the best-fit set of amplitudes that reproduce the ringdown signal (see e.g. [8; 9; 11; 27; 28; 29]). These studies have demonstrated, for example, that in an astrophysical merger of a nearly equal mass, non-precessing binary, the \(l=m=2\) mode is the maximally excited QNM, and the relative excitation amplitudes of other angular modes may point to properties of the progenitor binary system, e.g. precession, eccentricity, mass ratio, etc.
There are many difficulties in attempting to ascribe excitation amplitudes to merger events from fits to numerical relativity waveforms. The main difficulty already present at the linear level is that the QNMs do not comprise a complete basis of black hole perturbations, and the gravitational wave "perturbation" will contain a _transient part_ that can have a significant amplitude relative to the sum of QNMs, in particular in the time window of \(O(10M)\) around peak amplitude of the ringdown. Note that in this paper we use the phrase _transient part_ (or sometimes _prompt part_) to refer to the non-QNM part of a ringdown waveform [30]. Beyond the linear level a host of additional difficulties arise, including non-linear mode coupling (quadratic modes have only recently begun to be studied in full numerical relativity merger waveforms [31; 32; 33; 34; 35]), and the effects of back reaction of the gravitational wave energy. The latter complicates the questions of what the appropriate background is for computing linear perturbations about, and how good a constant amplitude approximation is for the early time linear QNM spectrum of the waveform (due in part to non-linear energy exchange between modes). Though these difficulties are not thought to have much effect on measuring the dominant fundamental \(l=m=2\) QNM, it is less clear how well higher overtones and harmonics can be extracted. As such there is still much debate within the gravitational wave community about which modes should be included in the ringdown fit (see e.g., [30; 11]).
Given the intrinsic complexity of the problem and since both non-modal and nonlinear effects could play a non-trivial role, several ways of analyzing and decomposing the ringdown signal from numerical simulations into QNMs have been proposed [36; 30; 11; 12; 32]. Most of these methods involve finding the best fit to the ringdown signal with a sum of damped sinusoids with quasi-normal mode frequencies1, using gravitational waveforms extrapolated to future null infinity, or through Cauchy-characteristic extraction (CCE). Though, as discussed above, the signal is expected to contain more than simply the set of linearly damped QNMs, and if we do not know _a priori_ what the transient part of the waveform is, it is easy to envision that this process could result in _overfitting_ : an erroneous QNM amplitude measurement due to overlap of the QNM mode with a transient part of the signal. Particularly susceptible to overfitting are the higher overtones, whose damping times are sufficiently rapid that unless they are excited with extremely high amplitude, only a small number of cycles, or even only a fraction of a cycle of the QNM will be visible above an effective noise floor set by numerical truncation error (assuming all other extraction systematics are under control). Some studies have already pointed to overfitting, by showing different fitting procedures can give different results for the QNM amplitudes of the ringdown of a black hole produced from a binary collision [30; 37].
Footnote 1: An exception to this procedure is Ref [32], where the authors eliminated the dominant modes through a linear filter. Another exception is Ref [36], where properties of spheroidal harmonics are explored to separate the prograde and retrograde contribution to the ringdown signal.
The main purpose of this paper is to gain more insight into the nature of mode fitting, and hence the problem of overfitting. Instead of studying the full nonlinear ringdown of a black hole produced from a binary collision, we attempt to reconstruct the quasinormal mode spectrum of solutions to the Teukolsky equation. This allows us to study in detail how easy it is to distinguish the transient contribution to the signal from the quasinormal modes2. In our fitting procedures we include the spatial profiles of the quasinormal mode eigenfunctions, which reduces systematic uncertainties in our fits. To aid in utilizing the spatial dependence of the QNMs in our fits we make use of horizon penetrating, hyperboloidally compactified (HPHC) coordinates, in which the QNM solutions to the Teukolsky equation are regular everywhere, from the black hole horizon to null infinity [38; 39]. We consider two fitting procedures to linear data: one that uses the spatial variation of the Weyl scalar field and its time derivative on a single constant time slice, and another that uses both spatial and temporal information. Within both procedures, fitting the quasinormal mode amplitudes reduces to a problem of linear regression, given the fixed black hole mass and spin.
Footnote 2: We expect the transient contribution to strongly depend on the initial data; here, we only focus on scattering experiment, where the initial data consist of an infalling pulse of gravitational wave onto the black hole.
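To illustrate the linear-regression character of the amplitude fit, the following sketch (our own schematic of a traditional time-only fit with the complex QNM frequencies held fixed, not the spatial-information methods of Sec. III) shows how the amplitudes follow from a least-squares problem once the frequencies are fixed by the black hole mass and spin.

```python
import numpy as np

def fit_qnm_amplitudes(times, psi, omegas):
    """Least-squares fit of complex amplitudes A_n in psi(t) ~ sum_n A_n exp(-i omega_n t),
    with the complex QNM frequencies omega_n held fixed."""
    # Design matrix: one damped-sinusoid basis function per column.
    X = np.exp(-1j * np.outer(times, omegas))
    amplitudes, *_ = np.linalg.lstsq(X, psi, rcond=None)
    return amplitudes

# Example with synthetic data: two modes with Im(omega) < 0, so both decay in time.
t = np.linspace(0.0, 80.0, 2000)
omegas = np.array([0.5 - 0.09j, 0.45 - 0.28j])
signal = 1.0 * np.exp(-1j * omegas[0] * t) + 0.3 * np.exp(-1j * omegas[1] * t)
print(fit_qnm_amplitudes(t, signal, omegas))   # recovers approximately [1.0, 0.3]
```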
We then apply these fitting procedures to a set of time domain numerical solutions to the Teukolsky equation. We demonstrate that with pure QNM initial data, we can stably recover the amplitudes of arbitrary linear combinations of QNMs. By _stable_ here we mean that the method recovers the correct amplitudes (to within truncation error) over a non-negligible window of time. When we consider scattering initial data with non-QNM contributions though, we find that we cannot stably extract the amplitude of higher (\(n\geq 3\)) QNM overtones, and the traditional time-only fit at future null infinity can only faithfully and stably extract the fundamental and first overtone over a much narrower window of fitting times. Conversely, we demonstrate the power of using spatial information to establish a best-case scenario for extracting QNMs. We note that this paper is more a "proof of principle" for the linear case, in that we have not tried to optimize the linear perturbation to "best fit" any particular merger event, and we leave a more extensive study of the issue of initial conditions to future work.
The rest of this paper is organized as follows. In section (II), we review the derivation of the Teukolsky equation in HPHC coordinates, our code for computing pure QNM initial data, and our code for evolving the Teukolsky equation in the time domain. In section (III), we introduce our two fitting procedures that make use of spatial information of the quasinormal modes. In section (IV), we show results from applying those two methods to numerical solutions to the Teukolsky equation with several different classes of initial data. Lastly, we compare our new fitting procedures with the traditional time-only fit at future null infinity in section (V). We discuss the implications of our results and conclude in section (VI). In Appendices A, B, and C we discuss some details of computing the QNM eigenfunctions, their radial structure in HPHC coordinates, and give some convergence results from our code, respectively.
## II The Teukolsky equation on a hyperboloidal slice
In this section, we briefly review the Teukolsky equation and QNMs in HPHC coordinates. We refer the reader to Refs. [38; 39; 40; 41; 42] for further details.
The Teukolsky Equation (TE) [1] was first derived in Boyer-Lindquist (BL) coordinates [43]. Constant time hypersurfaces in BL coordinates do not penetrate the black hole horizon, nor do they reach future null infinity - instead, they stretch from the bifurcation sphere of the black hole to spatial infinity. One consequence of these properties is that the radial eigenfunctions for quasinormal modes are necessarily divergent at the asymptotic radial boundaries (\(r_{*}\rightarrow\pm\infty\), where \(r_{*}\) is the tortoise coordinate) when evaluated on constant time slices [1]. This feature of the quasinormal eigenfunctions (QNEs) in BL coordinates complicates the analysis of computing QNM excitation factors of black hole ringdown. This is because constructing a well-defined inner product (from which the excitation factors of the quasinormal modes can be computed) involves an integration in the complex plane [44; 45; 23; 46]. By contrast, since constant time hypersurfaces in HPHC coordinates span from the black hole horizon to future null infinity, the QNM solutions to the TE in these coordinates remain regular everywhere exterior to the black hole [38; 39]. This opens up the possibility of a simpler inner-product matrix that could be used to determine the quasinormal mode content of a given gravitational waveform (see, for example, Ref. [47]). Furthermore, the ringdown signal behaves like damped standing waves spatially in HPHC, instead of traveling wave packets in coordinates that asymptote to spatial infinity.
In this work we use the same HPHC coordinates described in Ref. [39]. These coordinates are identical to BL coordinates, up to a redefinition of the time coordinate \(\tau\) and azimuthal coordinate \(\phi\), which are related to the BL coordinates \(\left(t,r,\vartheta,\varphi\right)\) via
\[d\tau\equiv dt+\left(\frac{2Mr}{\Delta}+\frac{dh}{dr}\right)dr,\qquad d\phi \equiv d\varphi+\frac{a}{\Delta}dr, \tag{1}\]
where \(M,a\) are the mass and spin of the black hole. Here \(h(r)\) is a "height" function designed to make the radially ingoing characteristic speed zero at infinity [48; 41; 42], that we chose to be
\[\frac{dh}{dr}=-1-\frac{4M}{r}. \tag{2}\]
To bring future null infinity (located at \(r\rightarrow\infty\)) to a finite point, we compactify the radial coordinate via
\[\rho\equiv\frac{1}{r}. \tag{3}\]
We additionally rescale the Newman-Penrose scalar \(\psi\) to make the Teukolsky equation regular at the horizon and to remove the "long-range potential" in the radial coordinate [49; 50]
\[\psi\equiv\frac{1}{r}\Delta^{-s}\Psi. \tag{4}\]
With all the above definitions, the TE reads
\[\left[16M^{2}-a^{2}\sin^{2}\theta+8M\left(4M^{2}-a^{2}\right) \rho-16a^{2}M^{2}\rho^{2}\right]\partial_{\tau}^{2}\Psi-\rho^{4}\Delta\partial _{\rho}^{2}\Psi-{}_{s}\not{\Delta}\Psi\] \[-2\left[1+\left(a^{2}-8M^{2}\right)\rho^{2}+4a^{2}M\rho^{3} \right]\partial_{\tau}\partial_{\rho}\Psi+2a\rho^{2}\partial_{\rho}\partial_ {\phi}\Psi+2a\left(1+4M\rho\right)\partial_{\tau}\partial_{\phi}\Psi\] \[+2\left[s\left(-2M+ia\cos\theta\right)+\left(4M^{2}\left\{s+2 \right\}-a^{2}\right)\rho-6Ma^{2}\rho^{2}\right]\partial_{\tau}\Psi\] \[+2\left[-1-s+\left(s+3\right)M\rho-2a^{2}\rho^{2}\right]\rho \partial_{\rho}\Psi+2a\rho\partial_{\phi}\Psi\] \[+2\left(Ms+M-a^{2}\rho\right)\rho\Psi =0, \tag{5}\]
where \(s\) is the spin-weight of the scalar \(\Psi\). For the remainder of this article, we set \(s=-2\), so that \(\Psi\) corresponds to the Weyl scalar \(\Psi_{4}\).
Lastly, to make the radial boundary independent of the black hole spin, we perform the substitution:
\[\rho\to r_{+}\rho, \tag{6}\]
where \(r_{+}=M+\sqrt{M^{2}-a^{2}}\) is the radius of the outer horizon in BL coordinates. This substitution makes the TE regular at future null infinity (\(\rho=0\)) and on the black hole horizon (\(\rho=1\)), regardless of spin.
We solve Eq. (5) in the time domain, using a modification of the code described in Refs. [42; 51], which we will now briefly describe. The numerical implementation decomposes \(\Psi\) into its azimuthal modes, \(\Psi\left(t,\rho,\theta,\phi\right)=\sum_{m}e^{im\phi}\Psi_{m}\left(t,\rho,\theta\right)\). The code then evolves each \(m\)-mode on a two-dimensional \(\rho-\theta\) grid. The angular direction is discretized using a pseudospectral method made up of
spin-weighted spherical harmonics, and the radial direction with a fourth order finite difference method, as opposed to the implementation presented in Ref. [42], which makes use of a pseudospectral Chebyshev discretization in the radial direction. To evolve in time, the code uses a method-of-lines algorithm with a 4th order Runge Kutta integrator. We consider two classes of initial data, described in more detail in Sec. (IV): (1) a linear superposition of quasinormal modes, and (2) a Gaussian pulse (which we call "scattering" initial data). We construct our quasinormal mode initial data using a slight modification, described in detail in Appendix (A), of the algorithm presented in Ref. [39] (publicly available at [51].)
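Schematically, the time evolution reduces to a standard method-of-lines update. The snippet below is not the code of Refs. [42; 51] (which is written in Julia and uses the discretizations described above); it is a minimal Python sketch of the classic 4th order Runge-Kutta step we refer to, with the hypothetical `rhs` callable standing in for the spatial discretization of Eq. (5).

```python
def rk4_step(rhs, state, t, dt):
    """One classic 4th-order Runge-Kutta step for dU/dtau = rhs(tau, U).

    `state` is a complex array holding the discretized fields of one
    azimuthal m-mode (e.g. Psi and its time derivative on the rho-theta
    grid); `rhs` is assumed to implement the spatial discretization.
    """
    k1 = rhs(t, state)
    k2 = rhs(t + 0.5 * dt, state + 0.5 * dt * k1)
    k3 = rhs(t + 0.5 * dt, state + 0.5 * dt * k2)
    k4 = rhs(t + dt, state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```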
## III Spatial and Spacetime Fitting with QNM Eigenfunctions
Let us consider a linearly perturbed black hole with fixed known mass and spin. Since the quasinormal mode decomposition of the solution can be recovered using a linear least squares algorithm if the linearized gravitational solution can be entirely described as a superposition of quasinormal modes [11; 33], we fix the quasinormal mode frequencies, and then fit for the complex amplitudes of the modes that minimize the residual error. In our fitting procedures, we minimize not just the residual error of our waveform fit at future null infinity, but also the error of the waveform over the entire computational domain, which ranges from the horizon to null infinity.
We consider two different mode extraction methods: _spatial_ and _spacetime_ fitting, which we describe in detail in Sec. (III.1) and Sec. (III.2), respectively. Spatial fitting refers to measuring the amplitudes for each QNM on a fixed time slice \(t=t_{0}\), given the data \(\{\Psi_{4}(t_{0},\mathbf{r}),\partial_{t}\Psi_{4}(t_{0},\mathbf{r})\}\)[23; 34; 52]. That is, for a fixed azimuthal number \(m\), we minimize the residual
\[\mathcal{R}= \sum_{i,j}\left(\Psi_{4}\left(t_{0},\rho_{i},\theta_{j}\right)- \sum_{[p],n,l}A_{[p]ln}R_{[p]ln}\left(\rho_{i}\right){}_{-2}S\left(a\omega_{[ p]ln},\theta_{j}\right)e^{-i\omega_{[p]ln}t_{0}}\right)^{2}\] \[+\left(\partial_{t}\Psi_{4}\left(t_{0},\rho_{i},\theta_{j}\right)+ \sum_{[p],n,l}i\omega_{[p]ln}A_{[p]ln}R_{[p]ln}\left(\rho_{i}\right){}_{-2}S \left(a\omega_{[p]ln},\theta_{j}\right)e^{-i\omega_{[p]ln}t_{0}}\right)^{2}, \tag{7}\]
for the complex constants \(A_{[p]ln}\), where \({}_{-2}S\) are the spin-weighted _spheroidal_ harmonics, and \(R_{[p]ln}(\rho)\) and \(\omega_{[p]ln}\) are the QNM radial eigenfunctions and frequencies, respectively. In the above expression, the sum is over the prograde and retrograde (\([p]=\pm\)) modes, the overtones \(n\), angular numbers \(l\), radial grid points \(\rho_{i}\), and angular grid points \(\theta_{j}\). In practice, we perform a spherical harmonic decomposition of the signal in \(\theta\) before minimizing the residual.
On the other hand, the spacetime fitting consists of finding the best quasinormal mode fit to the rescaled Weyl scalar \(\Psi_{4}\) over the entire time domain we evolve for, i.e., in _both_ space and time. Specifically, we minimize the residual
\[\mathcal{R}=\sum_{i,j,k}\left(\Psi_{4}\left(t_{k},\rho_{i},\theta_{j}\right)- \sum_{[p],n,l}A_{[p]ln}R_{[p]ln}\left(\rho_{i}\right){}_{-2}S\left(a\omega_{[ p]ln},\theta_{j}\right)e^{-i\omega_{[p]ln}t_{k}}\right)^{2}\, \tag{8}\]
where now we include a sum over the time steps \(t_{k}\). As we discussed above, both fitting methods differ from previous QNM fitting procedures as our residual includes the radial profile of the modes.
If the gravitational waveform is dominated by quasinormal modes, our fitting procedure provides a robust way to determine the quasinormal mode content of a gravitational waveform. We now provide specific details of both approaches.
### Spatial Fitting
In this approach, we find a sum of QNEs with amplitudes that best represents the data \(\{\Psi,\partial_{t}\Psi\}\) on a constant time hypersurface [54; 23]. At intermediate times \(t\), i.e. after initial data transients have decayed but before the tail contributions are evident, we expect the linear gravitational wave to be well approximated by a sum of quasinormal modes. In this regime, the field and its time derivative on a constant time slice, \(t_{0}\), can then be approximated by:
\[\Psi_{4}(\rho,\theta,t_{0})=\sum_{p\in\{\pm\}}\sum_{n}\sum_{l}{}_{-2}Y_{l}(\theta)\sum_{l^{\prime}}A_{[p]l^{\prime}n}\,c_{[p]ll^{\prime}n}\,R_{[p]l^{\prime}n}(\rho)\exp\{-i\omega_{[p]l^{\prime}n}t_{0}\}\, \tag{9}\]
\[\partial_{t}\Psi_{4}(\rho,\theta,t_{0})=\sum_{p\in\{\pm\}}\sum_{n}\sum_{l}{}_{-2}Y_{l}(\theta)\sum_{l^{\prime}}\left(-i\omega_{[p]l^{\prime}n}\right)A_{[p]l^{\prime}n}\,c_{[p]ll^{\prime}n}\,R_{[p]l^{\prime}n}(\rho)\exp\{-i\omega_{[p]l^{\prime}n}t_{0}\}\, \tag{10}\]
where \(c_{[p]ll^{\prime}n}\) are the spherical-spheroidal mixing coefficients, \({}_{-2}Y_{l}\) are the spin-weighted _spherical_ harmonics, and \(R_{[p]l^{\prime}n}(\rho)\) and \(\omega_{[p]l^{\prime}n}\) are the QNM eigenfunctions and frequencies, respectively.
We can rewrite Eqs. (9) and (10) as a matrix equation for the amplitudes \(A_{[p]l^{\prime}n}\). In terms of the spherical harmonic coefficients of \(\Psi_{4}\), we may write for each angular number \(l\)
\[M_{[p]ll^{\prime}n}(\rho_{i})A_{[p]l^{\prime}n} =\Psi_{4,l}(\rho_{i}) \tag{11a}\] \[-i\omega_{[p]l^{\prime}n}M_{[p]ll^{\prime}n}(\rho_{i})A_{[p]l^{\prime}n} =\partial_{t}\Psi_{4,l}(\rho_{i})\, \tag{11b}\]
where repeated indices are summed over3, and
Footnote 3: The \(i\)'s in parentheses and as subscripts index the radial grid points; elsewhere \(i\) denotes \(\sqrt{-1}\).
\[\Psi_{4,l}(\rho,t) :=\int_{\theta}\Psi_{4}(\rho,\theta,t)\ _{-2}Y_{l}^{*}(\theta)d\theta, \tag{12}\] \[M_{[p]ll^{\prime}n}(\rho_{i}) :=c_{[p]ll^{\prime}n}\ R_{[p]l^{\prime}n}(\rho_{i})\exp\{-i\omega_{[p]l^{\prime}n}t_{0}\}. \tag{13}\]
The QNM amplitudes \(A_{[p]l^{\prime}n}\) must simultaneously solve equations (11a) and (11b) for all \(l\), which we do numerically. Here, we can simply stack the two matrix equations in the radial direction (indexed by \(i\)) and solve the resultant equation by a minimization matrix solver. Specifically, we stack via the following rule
\[N_{[p]ll^{\prime}n}(i)=\begin{cases}M_{[p]ll^{\prime}n}(\rho_{i})&\text{if }i\leq i_{max}\\ -i\omega_{[p]l^{\prime}n}M_{[p]ll^{\prime}n}(\rho_{i-i_{max}})&\text{if }i>i_{max},\end{cases} \tag{14}\]
and, similarly, for the right hand side of Eqs. (11):
\[b_{l}(i)=\begin{cases}\Psi_{4,l}(\rho_{i})&\text{if }i\leq i_{max}\\ \partial_{t}\Psi_{4,l}(\rho_{i-i_{max}})&\text{if }i>i_{max}\,\end{cases} \tag{15}\]
where \(i_{max}\) is the number of radial grid points. Under this procedure, Eqs. (11) can now be written as
\[N_{IJ}A_{J}=b_{I}, \tag{16}\]
where \(I\) indexes the spatial components (radial \(i\) and angular \(l\)), and \(J\) indexes the modes (prograde/retrograde \([p]\), angular index \(l^{\prime}\), and overtone number \(n\)). The matrix \(N_{IJ}\), in which we pack the fitting basis functions as column vectors, is called the _design matrix_ (see, e.g., Ref. [55]).
We find that when the initial data is a pure, arbitrary superposition of QNMs, we correctly recover the amplitudes and phases of the modes when we solve the matrix equation (16). The design matrix induces an inner product via
\[P(\Psi_{[p]ln},\Psi_{[p^{\prime}]l^{\prime}n^{\prime}}) :=\frac{1}{2}\langle N^{-1}\Psi_{[p]ln},N^{-1}\Psi_{[p^{\prime}]l^ {\prime}n^{\prime}}\rangle\] \[=\delta_{pp^{\prime}}\delta_{ll^{\prime}}\delta_{nn^{\prime}}, \tag{17}\]
where \(\langle\cdot,\cdot\rangle\) denotes the usual inner product on \(\mathbb{C}^{d}\), and \(\Psi_{[p]ln}\) and \(\Psi_{[p^{\prime}]l^{\prime}n^{\prime}}\) are constructed from the quasinormal mode eigenfunctions, as in Eq. (15). We numerically find that the design matrix defined in Eq. (14) has full column rank, and hence a left inverse (pseudoinverse) \(N^{-1}\) exists, as long as the overtones are radially resolved, with \(i_{max}\gg n_{max}\), i.e., there are more radial points than overtones in our fit. For given numerical data, we can determine the QNM amplitudes by computing \(N^{-1}b\).
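As a concrete illustration of Eqs. (13)-(16), the sketch below assembles the stacked design matrix and solves for the amplitudes by linear least squares. This is a minimal NumPy sketch rather than the production code: the array names, shapes, and the assumption that the radial eigenfunctions, frequencies, and mixing coefficients have been precomputed (e.g., with the QNM code of Appendix A) are ours.

```python
import numpy as np

def spatial_fit(R, omega, c_mix, psi4_l, dt_psi4_l, t0):
    """Solve the stacked system N A = b of Eq. (16) by least squares.

    R         : (n_modes, n_rho) radial eigenfunctions R_{[p]l'n}(rho_i)
    omega     : (n_modes,)       QNM frequencies omega_{[p]l'n}
    c_mix     : (n_l, n_modes)   spherical-spheroidal mixing c_{[p]ll'n}
    psi4_l    : (n_l, n_rho)     spherical-harmonic coefficients of Psi_4 at t0
    dt_psi4_l : (n_l, n_rho)     same for the time derivative of Psi_4
    """
    n_modes = R.shape[0]
    phase = np.exp(-1j * omega * t0)
    # M[l, i, J] = c_{[p]ll'n} R_{[p]l'n}(rho_i) e^{-i omega t0}, cf. Eq. (13)
    M = c_mix[:, None, :] * R.T[None, :, :] * phase[None, None, :]
    # stack the field block and the time-derivative block, cf. Eqs. (14)-(15)
    N = np.concatenate([M, (-1j * omega)[None, None, :] * M], axis=1)
    b = np.concatenate([psi4_l, dt_psi4_l], axis=1)
    A, *_ = np.linalg.lstsq(N.reshape(-1, n_modes), b.reshape(-1), rcond=None)
    return A
```

For full-column-rank \(N\), the least-squares solution coincides with applying the pseudoinverse \(N^{-1}\) discussed above, so for pure-QNM data the returned amplitudes reproduce the input amplitudes to truncation error.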
### Spacetime Fitting
We now minimize a quadratic residual, as with the spatial fitting, but now we also sum over the different time steps. When the linear gravitational wave is dominated by QNMs, the fitting problem reduces again to solving Eq. (9) for the amplitudes \(A_{[p]l^{\prime}n}\), given numerical data \(\{\Psi_{4}(\rho,\theta,t)\}\) within \(t\in[t_{0},t_{1}]\).
As in the spatial fit, we decompose \(\Psi_{4}\) into spin-weighted spherical harmonics. Discretizing the radial (\(\rho_{i}\)) and time (\(t_{j}\)) coordinates, the design matrix now takes
the form:
\[M_{[p]ll^{\prime}n}(\rho_{i},t_{j})=c_{[p]ll^{\prime}n}\ R_{[p]l^{\prime}n}(\rho_{i})\exp\{-i\omega_{[p]l^{\prime}n}t_{j}\}\, \tag{18}\]
where now the right-hand side is set to be the field over the entire spacetime domain:
\[b_{l}(\rho_{i},t_{j})=\Psi_{4,l}(\rho_{i},t_{j}). \tag{19}\]
The spacetime fitting as a matrix equation is then
\[M_{IJ}A_{J}=b_{I}, \tag{20}\]
where \(I\) now indexes _both_ the temporal and spatial components (time \(j\), radial \(i\), and angular \(l\)), and \(J\) indexes the modes by \([p]\), \(l^{\prime}\), and \(n\). In Sec. (IV.2) and Sec. (IV.3) we demonstrate that the spacetime fit results are consistent with the spatial fit in regimes where we expect QNMs to dominate the solution.
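The spacetime fit is thus the same linear regression with the time index folded into the design matrix. The sketch below makes the bookkeeping of Eqs. (18)-(20) explicit; as before, the array layout is an assumption of ours, not the layout used in our actual code.

```python
import numpy as np

def spacetime_fit(R, omega, c_mix, psi4_lt, times):
    """Solve M A = b of Eq. (20) by least squares over space *and* time.

    psi4_lt : (n_t, n_l, n_rho) spherical-harmonic coefficients of Psi_4
              on every slice t_k in `times`; other arguments as in the
              spatial-fit sketch above.
    """
    n_modes = R.shape[0]
    phase = np.exp(-1j * np.outer(times, omega))             # (n_t, n_modes)
    # M[k, l, i, J] = c_{[p]ll'n} R_{[p]l'n}(rho_i) e^{-i omega t_k}, cf. Eq. (18)
    M = c_mix[None, :, None, :] * R.T[None, None, :, :] * phase[:, None, None, :]
    A, *_ = np.linalg.lstsq(M.reshape(-1, n_modes), psi4_lt.reshape(-1), rcond=None)
    return A
```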
Before going to the numerical examples, we first briefly discuss the incompleteness of the QNEs as a function basis for general solutions to the TE, and a resulting caveat of fitting QNMs due to the presence of non-modal solutions of the TE.
### A Caveat: Mode vs. Transient
While we can define an inner product under which the QNMs are orthonormal by making use of their radial and angular information, the modes remain incomplete as a basis for fitting black hole ringdown. By incompleteness, we mean that a generic solution to the TE cannot be represented as a sum of QNMs, when the solution violates the physical boundary conditions: no outgoing wave at the horizon and no ingoing wave at infinity4. As we already mentioned, in addition to QNMs, solutions to the TE also admit a "prompt" and a "tail" contribution [57; 58], the sum of which we refer to as the "transient" part of the solution. Prompt here relates to the kind of perturbations we expect following a black hole merger, or scattering a compact wave packet off the black hole, and refers to the early rise in the waveform before the QNMs dominate. The tail part of the solution arises from back-scattered gravitational waves on the Kerr geometry, and dominates the solution at late times (beyond the times considered in this paper).
Footnote 4: Ref [56] suggests completeness of QNMs as a basis for solutions to the TE that respect the physical boundary conditions.
At the linear level there are no prompt or tail contributions to the solution if the initial data consists of purely quasinormal modes. However, for more generic initial data that better describes a distorted black hole formed from an astrophysical merger, there will be prompt and tail contributions [23; 30]. In these more generic settings, assuming the signal is purely made up of QNMs and fitting to those can lead to biased results, in particular for the high-\(n\) overtones, as they typically decay quite rapidly and on similar time scales to the prompt part of the transients. As we mentioned earlier, we call this overfitting the signal.
The prompt response dies off rapidly in time as it is sourced over a relatively small spacetime volume around the remnant black hole at the time of merger, and the corresponding wavefronts essentially follow geodesic trajectories to either future null infinity or the black hole horizon. Starting the QNM fit at later times should reduce the bias caused by the contribution of this transient response in the signal. However, the exact form of the prompt response depends heavily on the initial data; in some cases one might expect it to be large enough, and decay slowly enough, to mask the higher overtones. By contrast, the tail contribution decays in a power-law fashion in time, slower than the QNM contribution [13]. Thus, the tail response may bias quasinormal mode fitting at late times (provided the signal-to-noise ratio is large enough to resolve the late-time signal).
To assess the quality of our fitting results when non-QNM contributions to the solution are present, we adapt the technique presented in Refs. [30; 33; 34]. Namely, we vary the start time of the spacetime fitting, or time at which we apply the spatial fitting, and check if the amplitude for each quasinormal mode remains constant. We discuss the results of this exercise in Sec. (IV.2) and Sec. (IV.3).
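In practice this stability check is a simple scan: refit at a sequence of start times, propagate each fitted amplitude back to a common reference time using the known QNM frequencies, and look for a plateau. A minimal sketch (with a hypothetical `fit_at` callable standing in for either fitting procedure) is:

```python
import numpy as np

def stability_scan(fit_at, start_times, omega, t_ref):
    """Return fitted amplitudes referred back to a common time t_ref.

    fit_at(t) -- hypothetical callable running the spatial (or spacetime)
                 fit with slice/start time t, returning complex amplitudes
    omega     -- (n_modes,) QNM frequencies
    A mode is deemed stably extracted where |A| and arg(A) are flat in t.
    """
    rows = []
    for t in start_times:
        A_t = fit_at(t)
        # undo the e^{-i omega (t - t_ref)} evolution between t_ref and t
        rows.append(A_t * np.exp(-1j * omega * (t_ref - t)))
    return np.array(rows)            # shape (n_times, n_modes)
```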
## IV Numerical examples
Here we present some examples of applying the proposed spatial and spacetime fitting to numerical solutions of the TE with different initial conditions, as described in Sec. (II). Unless otherwise mentioned, all simulation results presented here were from runs with resolution \(n_{\rho}=512\) and \(n_{\theta}=40\), where \(n_{\rho}\) and \(n_{\theta}\) are the number of radial and angular grid points respectively, (see Appendix C for convergence results).
First, in Sec. (IV.1) we evolve initial data that consists of a single QNM, to demonstrate the accuracy of our evolution and (quasinormal mode) initial data codes, described in the Appendix (A). In Sec. (IV.2), we move to a more complicated class of initial data: a superposition of QNMs. In this case, we demonstrate that we can still reliably recover the amplitudes of the QNMs up to numerical precision of the solution, using both fitting techniques.
Lastly, in Sec. (IV.3) we consider scattering initial data (that is, initial data that cannot be written as a pure sum of quasinormal modes). In this case, we also extract the QNM content from the signal, although we do not have a direct theoretical estimate for the QNM amplitudes for this class of initial data. We do demonstrate that both the spatial and spacetime fitting methods are consistent, in the sense that both yield identical estimates for the
QNM amplitudes given the same initial data, and such estimates are stable with respect to fitting time at least for the fundamental mode and the first two overtones5. We further point out that the instability of fitting to the \(n\geq 3\) overtones for scattering initial data is likely due to the presence of the transient solution masking the high-\(n\) overtone spectrum, though their initial excitation amplitudes might be lower than those from black hole mergers.
Footnote 5: Due to how long-lived the transient is in the case of a near-extremal black hole, for that example we can only extract the fundamental mode during the numerical integration time of 500M.
### Evolving a single QNM
Let us consider the evolution of a single QNM for both Schwarzschild and near extremal Kerr (\(a=0.999\)) backgrounds. We set the initial condition to be either the \(l=m=2\) fundamental mode (\(n=0\)), or the \(l=m=2,\ n=3\) overtone6. As illustrated in Fig. (1), we can accurately evolve both the fundamental mode and overtone (blue solid lines) as compared to the analytic solution (red dashed lines). The residuals at future null infinity between the analytical solution and these runs are plotted in black. As shown in Appendix (C), this residual converges to zero at \(4^{\text{th}}\) order with increasing resolution.
Footnote 6: With our implementation we can evolve even higher overtones accurately; we only show the \(n=3\) overtone as an example.
These results are a strong test of our evolution code: if an overtone is excited, the code is capable of capturing it up to numerical precision. This accuracy provides the foundation for our following analysis.
### Evolving and fitting to a superposition of QNMs
In this section, we consider initial data that consists of a superposition of QNMs. We demonstrate that the spatial and spacetime fitting procedures, proposed in Sec. (III), can also correctly extract the QNM amplitudes in this case.
Let us consider initial data constructed by superposing the \(l=m=2\) fundamental (\(n=0\)), fourth (\(n=4\)), and the fifth (\(n=5\)) overtones on a Kerr background with spin \(a=0.7\) (the expected remnant spin from the astrophysical merger of equal mass, non-spinning, quasi-circular binaries [59]). In Fig. (2), we show the amplitudes extracted by applying the spatial fit at different \(t=\) constant surfaces (colored, solid lines) which match, up to numerical error, the analytical values of the mode amplitudes (grey, dashed lines). As a check, we have also included overtones that _are not_ present in the initial data in our fit, to demonstrate that the results have amplitudes consistent with the numerical error of our initial data and time evolution code. This test demonstrates the robustness of our fitting procedure, at least when applied to linearized solutions to the Einstein equations, with purely QNM initial data.
Furthermore, as in, e.g., Ref. [30], in Fig. (3) we show the stability of the fitting by factoring out the known decay rates of the modes. By doing that, the resulting QNM amplitudes are expected to be constant when fitting at different times, i.e., we consider the extraction of a given mode to be _stable_ if we recover a roughly constant amplitude (and phase) over some relevant, non-negligible time period. We have also compared the results between the spatial fit (colored, solid lines) and the spacetime fit (colored, dashed lines). We find that both methods are capable of stably extracting all the QNMs present (even the \(5^{th}\) overtone) until their amplitudes reach a floor set by numerical truncation error. This suggests that the inner product presented in Sec. (III.1) indeed establishes orthogonality between modes, complementing recent analytical results [46; 56].
### Evolving and fitting to scattering initial data
For our final example, we apply our quasinormal mode fitting procedures to analyze scattering initial data. This type of initial data excites the black hole in a more complex manner than quasinormal mode initial data, and we anticipate that a prompt, non-QNM transient solution to the Teukolsky equation will be noticeable in the ring-down signal. Specifically, we consider an approximately ingoing Gaussian pulse7 as initial data:
Footnote 7: Gaussian in Boyer-Lindquist radial coordinate \(r_{BL}\).
\[\Psi_{4}(\rho,\theta) =\exp\left\{-\frac{(\rho^{-1}-r_{0})^{2}}{w^{2}}\right\}\ _{-2}Y_{l}(\theta) \tag{21a}\] \[\partial_{t}\Psi_{4}(\rho,\theta) =-\frac{\rho^{2}}{2+4\rho}\partial_{\rho}(\Psi_{4}(\rho,\theta))\, \tag{21b}\]
where \(r_{0}\) and \(w\) specify the initial central location and width of the pulse, respectively. In this paper, regardless of the black hole spin, we specify the angular part of the initial data to be purely \(l=m=2\) spin-weight \(-2\) spherical harmonics, and the radial part as a Gaussian centered at \(r_{0}=8\)M with width \(w=1\). For Kerr black holes, we expect the \(l>2\) modes to also be excited due to spherical-spheroidal mixing. To account for this mixing, we include up to \(l=4\) modes when constructing the design matrices for fitting, and we include up to the \(n=5\) overtones, both prograde and retrograde, for the \(l=2\) modes and up to \(n=1\) for \(l=3,4\) modes, unless otherwise specified8.
Footnote 8: We checked that the quality of the fit does not improve upon adding more modes, either higher harmonics or overtones.
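For concreteness, a minimal sketch of the initial data of Eq. (21) on a compactified radial grid is given below. Units of \(M=1\), a simple finite-difference radial derivative, and a precomputed spin-weight \(-2\), \(l=m=2\) spherical harmonic on the \(\theta\) grid are assumptions of this sketch, not details of our production code.

```python
import numpy as np

def gaussian_pulse(rho, sY22_theta, r0=8.0, w=1.0):
    """Approximately ingoing Gaussian pulse, Eq. (21), with M = 1."""
    with np.errstate(divide="ignore"):
        r = 1.0 / rho                                   # Boyer-Lindquist radius
    radial = np.exp(-((r - r0) ** 2) / w ** 2)          # vanishes as rho -> 0
    psi4 = radial[:, None] * sY22_theta[None, :]
    dpsi4_drho = np.gradient(psi4, rho, axis=0)         # finite-difference d/drho
    dt_psi4 = -(rho ** 2 / (2.0 + 4.0 * rho))[:, None] * dpsi4_drho
    return psi4, dt_psi4
```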
In Fig. (4), we show the fitting results for different modes after applying the spatial fit (solid lines) and
spacetime fitting (dashed lines) procedures to the numerical data obtained with the scattering initial data on Kerr backgrounds with spins \(a=0\), \(a=0.7\) and, \(a=0.999\). To assess the stability of the fits over time, we anchored the amplitude of each mode at a common time \(t_{2}\), chosen (somewhat arbitrarily) to be the time when \(\Psi_{4}\) peaks at future null infinity; that is, we divide the mode amplitude at the time of fitting (the horizontal axis) by the expected amplitude evolution from the time \(t_{2}\) to the fitting time. The subsequent fit is then stable if the fitted amplitude and phase remain constant over an interval of fitting start times. For clarity, we have only plotted the overtones for the prograde, \(l=m=2\) mode in Fig. (4). The fundamental retrograde modes and higher multipoles from spherical-spheroidal mixing can also be extracted stably using our fitting methods.
Both fitting methods yield consistent results, although they inherently have different truncation error (or, loosely speaking, "noise") characteristics. We speculate that as the spacetime fitting method uses the late time signal, its effective signal-to-noise ratio decreases faster than the spatial fit's. Consequently, the mode amplitude computed from this method typically becomes unstable slightly earlier than that from the spatial fit. On the other hand, spatial fitting does not incorporate information from late times, hence it is more sensitive to the early-time transient, and consequently tends to become stable slightly later than the spacetime fitted amplitudes.
Overall, for these scattering experiments we found that we can stably extract the fundamental mode and the first two overtones for a period of at least \(\sim 15\) M, around or after the time of peak \(\Psi_{4}\), for Kerr backgrounds not too close to extremal9. However, the fitting for higher overtones was generically unstable10. Given that in the previous section we demonstrated that our code and fitting
Figure 1: Evolving single QNMs (\(n=0\) and \(n=3\), left and right panels, respectively) for (top) Schwarzschild and (bottom) near-extremal Kerr with \(a=0.999\). The numerical solution at future null infinity is shown with solid blue lines, while the analytical predictions are drawn with red dashed lines. Plotted in black is the residual/difference between the two, which convergence studies (see Appendix (C)) show arises purely from numerical truncation error.
algorithms are capable of solving for and extracting superpositions of QNMs with overtones higher than the second, and that linear theory tells us the difference between these two classes of initial data resides in the transient part of the solution (as discussed above), this suggests that the source of the fitting instability for scattering initial data is the presence of transients. We note, however, that the \(n\geq 3\) overtones could be more strongly excited during mergers, and hence still be stably fitted. We defer the study of such initial data to a future work.
## V A comparison to the traditional time-only fitting at future null infinity
In this section, we test the quality of a QNM fit from a traditional (time-only) fitting method, see, e.g. Ref. [11], and compare that against our fitting procedures. The time-only fit we employ here is equivalent to our spacetime fit restricted to future null infinity; namely, it takes into account the angular eigenfunctions (spherical-spheroidal mixing) but not any radial information. In numerical relativity, it is common to estimate the value the waveform takes at future null infinity, either through extrapolating waveforms measured at several finite radii, or through a Cauchy characteristic extraction/matching [62; 63; 64; 65; 66; 67; 68], and then to find the best-fit QNMs using fits to the temporal evolution of a select set of angular harmonics of the waveform at infinity.
We will assess the quality of these temporal fits in two ways. First, we consider the stability of the fitting, namely how well one can stably extract the amplitude and phase when changing the fitting start time [30]. Second, we consider the recovery of spatial information by testing whether the extracted mode amplitudes from performing time-only fits at several different radii agree with the radial eigenfunctions for the modes.
### Stability of the time-only fitting
In Fig. (5) we compare the results from the time-only fit at future null infinity (dotted lines) to the spatial fitting (solid lines), applied to the Gaussian pulse scattering experiment described in Sec. (IV.3). We find that, for Schwarzschild and Kerr with \(a=0.7\), the time interval over which the first overtone's (\(n=1\)) amplitude and phase can be stably extracted is much shorter in duration with the time-only fit, while the second and higher overtones can never be stably extracted with the time-only fit11. Interestingly, we find that when applying the time-only fit to the \(a=0.7\) Kerr black hole, the first overtone
Figure 2: Extracted amplitudes when applying the spatial fit (colored solid lines), described in Sec. (III.1) for an initial superposition of QNMs (\(n=0\), \(n=4\) and \(n=5\)) on a Kerr background with \(a=0.7\). Modes that are present in the initial data are plotted in bold lines. The grey dashed lines show the expected mode amplitude given our QNM initial conditions. The amplitudes for all modes are recovered to numerical precision, while the modes that are not present in the initial data have extracted amplitudes consistent with truncation error.
Figure 3: Stability of fitting superpositions of QNMs (\(n=0\), \(n=4\) and \(n=5\)) on a Kerr background with \(a=0.7\) (from the same run as shown in Fig. (2)). When factoring out the decay rate, the mode amplitudes we extract become constant in time, until the numerical noise dominates. The amplitudes extracted from the spacetime fit (colored dashed lines) are consistent with those obtained from the spatial fitting (colored solid lines), and both agree with the analytical results (grey, dashed lines). As expected, when fitting the overtones that were _not_ included in the initial data, the amplitudes are always unstable.
can be stably extracted _before_\(\Psi_{4}\) peaks at future null infinity. Whether the above holds in astrophysical ringdown (that is, with initial data that smoothly matches to the gravitational wave signal after merger) needs further study, but earlier results do indicate that at least with numerical relativity waveforms one can decompose the signal into QNMs beginning at the time of peak strain [11, 8], roughly 10 M before the peak of \(\Psi_{4}\) (issues of overfitting aside).
### Recovering the radial structure with the time-only fit
We now perform a waveform fit at several different fixed radii on numerical data with the scattering initial data described in Sec. (IV.3), and test if the fitted amplitude and phase for each mode at different radii agree with the prediction from the radial eigenfunction. We vary the fitting start time \(t_{0}\) for the time-only fit and evaluate the radial mismatch \(\mathcal{M}\) as a function of \(t_{0}\). The result is shown in Fig. (6). We find that for Schwarzschild, (with scattering initial data), the time-only fit can identify the fundamental mode and first two overtones. For Kerr with spin \(a=0.7\), we can only faithfully reconstruct up to the first overtone12. Lastly, in the near-extremal limit, the radial mismatch for the fundamental mode decreases as \(t_{0}\) increases because the transient decays away faster than the fundamental mode, yet none of the overtones are correctly recovered within the time span of our numerical integration (500 M).
Footnote 12: The marginal dip in the mismatch for the \(n=2\) mode around \(t_{0}=15\sim 20M\) hints at its existence for Kerr with \(a=0.7\), which we do extract stably using the spatial fit (see Fig. 5).
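The radial-mismatch diagnostic compares the amplitudes recovered from time-only fits at a set of radii with the known radial eigenfunction of each mode. The exact quantity plotted in Fig. (6) is the mismatch referenced there as Eq. (23); the sketch below uses one standard normalized-overlap form as an illustrative stand-in for that definition, which is an assumption of the sketch.

```python
import numpy as np

def radial_mismatch(a_fit, a_model):
    """Illustrative normalized mismatch between the amplitudes recovered at
    several radii (a_fit) and the known radial eigenfunction sampled at the
    same radii (a_model): M = 1 - |<a_model, a_fit>| / (||a_fit|| ||a_model||).

    NOTE: this overlap form is assumed for illustration; it vanishes when the
    fitted radial variation matches the eigenfunction up to an overall
    complex amplitude.
    """
    overlap = np.abs(np.vdot(a_model, a_fit))
    norms = np.linalg.norm(a_fit) * np.linalg.norm(a_model)
    return 1.0 - overlap / norms
```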
To illustrate the quality of the reconstructed radial structure, in Fig. (7), we plot the mode amplitudes as a function of radius from time-only fits against the expected radial eigenfunctions (grey, dashed lines), with \(t_{0}=t_{1}\) (colored, solid lines) and \(t_{2}\) (colored, dashed lines), the times at which the waveform peaks at the horizon and null infinity respectively. For visual comparison, the amplitudes for the known radial functions are set to agree with the time-only fit at future null infinity when \(t_{0}=t_{1}\) (solid lines), by construction.
As indicated by Fig. (6) and Fig. (7), the radial eigenfunctions are better recovered by the time-only fit at a surprisingly _early_ time (overfitting aside), except for the near-extremal case. We further note that our ability to extract the QNM radial variation through the time-only fit also depends on the initial data, which, as we already discussed, heavily impacts the form of the transient signal. We defer a detailed study of the initial data and interpretation of the seemingly better-behaved fittings at early time to a future work.
## VI Discussion
In this work we have presented two new techniques for extracting the quasinormal mode content from perturbed single black holes. The main novel aspect of the
Figure 6: Spatial mismatch \(\mathcal{M}\) (Eq. 23) for the time-only fit as a function of fitting start time \(t_{0}\), applied to scattering experiments for a Kerr black hole with \(a=0\) (top), \(a=0.7\) (middle), and \(a=0.999\) (bottom). The radial amplitude variation agrees relatively well with the known radial function when \(\mathcal{M}<10^{-3}\), i.e., outside the shaded region.
Figure 7: Mode amplitude radial variation from time-only fitting (colored lines) for scattering initial data (see Eq. (21)). Note that the vertical axis for each subplot has a different scale. We plot the measured radial amplitude variation with two fitting start times \(t_{1}\) (colored, solid lines) and \(t_{2}\) (colored, dashed lines), the times at which the waveform peaks at \(\mathcal{H}^{+}\) and \(\mathcal{J}^{+}\), respectively. The known radial functions are plotted as grey, dashed lines for comparison, whose amplitudes are chosen to match the solid colored lines at \(\mathcal{J}^{+}\). The seemingly better agreement for the overtones in the case of \(a=0.999\) near null infinity is likely due to the similar frequencies of the family of zero-damped modes in the extremal limit, i.e., approaching this limit the overtones do not decay much more rapidly than the fundamental mode.
fitting procedures is that they utilize the radial structure of each QNM over the full exterior of the black hole spacetime. This is aided by our use of horizon penetrating, hyperboloidally compactified coordinates, in which the quasinormal mode eigenfunctions to the Teukolsky equation are well-behaved from the horizon to future null infinity.
We used the methods described in Refs. [39, 42] to solve the Teukolsky equation in the time domain, evolving initial data that can be a superposition of a chosen number of QNMs together with a more generic transient. We first showed that our fitting procedures are capable of stably extracting the correct amplitudes when the initial data consists of a superposition of pure QNMs, including rapidly decaying high overtones, until the corresponding amplitudes drop below a floor set by numerical truncation error. The reason that the fitting procedure works this well is that it uses more information about the waveform, namely its radial dependence. This drastically increases the number of data points available in the fit as compared to a fit at a fixed radius (or at a small number of radii). Moreover, by making use of the radial dependence of the modes, we can construct an inner product with respect to which the quasinormal modes are mutually orthogonal. This allows us to project out QNM amplitudes from a perturbation consisting of a pure sum of QNMs at a given fixed time slice, though such a projection can still be biased in the presence of transients (defined as any non-QNM component of the perturbation).
With confidence in our ability to accurately extract QNMs in the absence of transients, we examined the linear excitation of a black hole through a prompt, compact pulse of gravitational waves. This investigation aimed to shed light on the issue of _overfitting_ when attempting to extract the excitation amplitudes of QNMs from numerical simulations of binary black hole mergers, which can lead to erroneous QNM amplitude measurements if transients are not accounted for. As we are using the Teukolsky equation for the evolution of the perturbation, we can only study the effects of the linear transients. However, since it seems unlikely that non-linear effects would help with the problem of overfitting, our study can be considered a best-case scenario for the theoretical measurement of excitation amplitudes. First, we showed that even using our new fitting algorithms, the presence of linear transients (with scattering initial data) prevented stable measurement beyond the \(2^{nd}\) overtone of the dominant \(\ell=m=2\) perturbation for a Schwarzschild and an \(a=0.7\) Kerr black hole (for an \(a=0.999\) Kerr black hole we could only stably extract the fundamental mode during our integration time of \(500\)M).
We then compared our new fitting procedures to a more traditional time-only fit. This analysis showed that a time-only fit may result in erroneous amplitudes for the first overtone of the \(\ell=m=2\) mode of an \(a=0.7\) Kerr black hole, outside an interval of fitting start times of order \(15\) M. Moreover, when performing the time-only fit at different radii, we found that the amplitude and phase one obtains from the fitting at each radius do not match the predicted behavior of the second (and higher) overtones of the quasinormal mode radial eigenfunctions (again, except for the case of Schwarzschild where the second overtone does match to reasonable precision). In the case of a near-extremal hole (\(a=0.999\)), only the fundamental mode can be faithfully extracted due to a long-lasting transient near the horizon; the overtones might be present in the signal but would require a longer integration time and higher numerical resolution.
A significant caveat in extrapolating our results to existing studies, which have attempted to extract mode amplitudes in the full non-linear case, is that we have not attempted to match our Gaussian-pulse perturbation of the black hole to the initial data of any particular merger event, as we do not have a theoretical estimate for the excitation amplitudes of the overtones in the merger case. Thus, though we expect similar issues to occur in the non-linear case at some overtone number \(n\) for any given angular mode, and expect overfitting to be worse in the non-linear case, to put a threshold on the cutoff overtone number from the linear problem would require a study using better adapted initial data.
The basic idea of the techniques described here - fitting for the spatial behavior of the quasinormal eigenfunctions and their dependence on time - could, in principle, be applied to fully nonlinear numerical relativity simulations of ringdown. However, there are several complications: the method requires a fixed gauge choice in the ringdown phase (potentially achievable through an appropriate gauge driver condition [69, 70]), and a careful treatment of wave-extraction in the strong field [71]. Additionally, spatial information could only be used far away from the remnant black hole, where the gravitational waves could be well described by linear (and possibly second-order) perturbation theory.
Our ability to set up pure mode initial data should allow for further studies of nonlinear, second-order effects during black hole ringdown. Studying the time evolution of pure quasinormal mode initial data in a non-linear/second order code would allow one to systematically study the efficiency of mode mixing. Additionally, with pure quasinormal mode eigenfunction initial data, one could study the functional form of the second order source term that appears in the solution to the second order vacuum Teukolsky equation [42, 72]. Doing so would allow us to study how the source term varies with different kinds of quasinormal modes, such as the overtones. We leave a study of these effects to future work.
Our setup, namely solutions to the Teukolsky equation combined with fitting procedures that use the entire waveform rather than just its value at future null infinity, is arguably "optimal" for extracting the QNM signal. Specifically, the solutions we studied do not exhibit nonlinearities, allowing us to concentrate solely on the signal's prompt and QNM contributions. Do astrophysical mergers excite the overtones of the remnant black hole more cleanly compared to the scattering experiments proposed here? We leave this question to a future study, in which initial data describing the perturbed remnant black hole from merger calculations would be set up. Nevertheless, given the challenges of fitting for the overtones in this (simplified) setup, our results provide further evidence that fitting for the overtones in astrophysical or full numerical relativity data, as well as the interpretation thereof, is a highly sensitive process that depends significantly on the data extraction and fitting procedures employed.
## Acknowledgements
We thank Emanuele Berti, Will Farr, Elena Giorgi, Abhishek Hegade, Stefan Hollands, Lam Hui, Maximiliano Isi, Macarena Lagos, Lionel London, Nicholas Loutrel, Sizheng Ma, Keefe Mitman, Rob Owen, Harrison Siegel, and Zihan Zhou for their useful comments regarding various aspects and physical implications of this project. H.Z. especially thanks Will Farr, Maximiliano Isi, Sizheng Ma, and Harrison Siegel for discussions regarding fitting procedures. The authors are pleased to acknowledge that the work reported on in this paper was substantially performed using the Princeton Research Computing resources at Princeton University, which is a consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's Research Computing. J.L.R. acknowledges support from the Simons Foundation through Award number 896696 and the NSF through WoU tidal 1-476799-244000-191100. A.C.-A. acknowledges support from the Simons Foundation.
## Appendix A Calculating quasinormal modes and eigenfunctions for the Teukolsky equation
As we discuss in Sec. (IV), one set of initial data we use consists of a linear superposition of quasinormal eigenfunctions (QNEs). To compute the quasinormal modes, we follow the algorithm presented in [39], except for one change, which we found allows us to stably solve for the higher overtones. Here we briefly outline the algorithm, and the improvement to it (the code we used can be accessed at [51]).
The Teukolsky equation (5) separates under the following decomposition
\[\Psi\left(\tau,r,\theta,\phi\right)=e^{im\phi-i\omega\tau}R\left(\rho\right)S \left(\theta\right). \tag{10}\]
From this, we obtain two ordinary differential equations, which we schematically write as
\[A\left(\rho\right)\frac{d^{2}R}{d\rho^{2}}+B\left(\omega,m,\rho\right)\frac{dR}{d\rho}+\left(C\left(\omega,m,\rho\right)-{}_{s}\Lambda_{l}^{m}\right)R =0, \tag{11a}\] \[\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{dS}{d\theta}\right)+\left(s-\frac{\left(m+s\cos\theta\right)^{2}}{\sin^{2}\theta}-2a\omega s\cos\theta+a^{2}\omega^{2}\cos^{2}\theta+{}_{s}\Lambda_{l}^{m}\right)S =0, \tag{11b}\]
where \(A,B,C\) are functions that are given in [39]. Note that (11b) is the standard equation for the spin-weighted spheroidal harmonics [1]. Following [39], we view (11a) and (11b) as defining two eigenvalue problems with the eigenvalue \({}_{s}\Lambda_{l}^{m}\). The set of \(\left\{\omega,R,S\right\}\) for which (11a) and (11b) have the same eigenvalue \({}_{s}\Lambda_{l}^{m}\) are the quasinormal modes and eigenfunctions of the Teukolsky equation [39]. We note that (11a) and (11b) also admit total transmission and scattering mode solutions, but they would be irregular at the outer boundary. Therefore, the regularity one inherits from the set of spectral basis functions eliminates such solutions.
We numerically discretize (11a) and (11b), solve for the eigenvalues and eigenvectors of the two systems, and then vary \(\omega\) until the two discretized systems share an eigenvalue. The value of \(\omega\) is then a quasinormal mode frequency, and the corresponding eigenvector with eigenvalue \({}_{s}\Lambda_{l}^{m}\) gives the quasinormal eigenfunction. As in [39] we discretize (11b) using a spectral method first described in [73, 50]. The radial equation (11a) was discretized using a Chebyshev pseudospectral method in [39]. We found that solving for the higher overtone quasinormal modes using the radial Chebyshev pseudospectral method required using a large number of collocation points. This led to numerically ill-conditioned discretizations of (11a), which then required the use of higher-precision arithmetic. Here we describe a spectral method which leads to sparse, well-conditioned discretizations of (11a), even when we solve for the higher quasinormal mode overtones.
The spectral method makes use of the properties of the Ultraspherical (also called the Gegenbauer) polynomials [74]. For completeness, we outline the basic idea of the method here, although we refer to [74] for a more complete exposition. Ultimately we used the ApproxFun
[75]13 implementation of these methods in our code. Our conventions follow [76].
Footnote 13: [https://github.com/JuliaApproximation/ApproxFun.jl](https://github.com/JuliaApproximation/ApproxFun.jl)
We first transform the radial coordinate to \(x\in[-1,1]\). We next expand \(R\) in a series of Chebyshev polynomials of the first kind
\[R(x)=\sum_{n=0}^{N}c_{n}T_{n}(x). \tag{13}\]
The derivative of the Chebyshev polynomials of the first kind can be written in terms of the Chebyshev polynomials of the second kind
\[\frac{dT_{n}(x)}{dx}=nU_{n-1}(x). \tag{14}\]
For higher order derivatives with respect to \(x\), we use the following property of the Ultraspherical polynomials
\[\frac{dC_{n}^{(\lambda)}(x)}{dx}=2\lambda C_{n-1}^{(\lambda+1)}(x), \tag{15}\]
where \(U_{n}(x)=C_{n}^{(1)}(x)\). To conclude, we see that we can write
\[\frac{dR}{dx}=\sum_{n=1}nc_{n}U_{n-1}(x). \tag{16a}\] \[\frac{d^{2}R}{dx^{2}}=\sum_{n=2}2nc_{n}C_{n-2}^{(2)}(x). \tag{16b}\]
Consider the vectorial representation of \(R\), Eq. (13)
\[R=\mathbf{T}^{T}\mathbf{c}, \tag{17}\]
where \(\mathbf{T}=\left(T_{0}\left(x\right),T_{1}\left(x\right),...,T_{N}\left(x \right)\right)\) and \(\mathbf{c}\equiv\left(c_{0},c_{1},...,c_{N}\right)\). We see that we can write the first and second derivatives of \(R\) as
\[\frac{dR}{dx} =\mathbf{U}^{T}\mathbb{D}_{1}\mathbf{c}, \tag{18a}\] \[\frac{d^{2}R}{dx^{2}} =\mathbf{C}^{T}\mathbb{D}_{2}\mathbf{c}, \tag{18b}\]
where \(\mathbf{U}=\left(U_{0}\left(x\right),U_{1}\left(x\right),...,U_{N}\left(x \right)\right)\), \(\mathbf{C}\equiv\left(C_{0}^{\left(2\right)}\left(x\right),C_{1}^{\left(2 \right)}\left(x\right),...,C_{N}^{\left(2\right)}\left(x\right)\right)\), and \(\mathbb{D}_{1}\) and \(\mathbb{D}_{2}\) are sparse matrices, the components of which can be inferred from Eqs. (16). To complete the discretization of (12a), we need to convert \(A,B,C\), along with \(dR/d\rho\) and \(R\) to the polynomial basis \(C_{n}^{\left(2\right)}\), which can be done using sparse matrices [74]. Ultimately with this method, (12a) can be discretized to take the form
\[\left(\mathbb{A}-\lambda\mathbb{B}\right)\mathbf{c}=0. \tag{19}\]
where \(\mathbb{A}\) and \(\mathbb{B}\) are sparse matrices with a relatively small condition number. We do not need to impose boundary conditions as regularity at the boundaries imposes the ingoing radiation condition at the black hole horizon and the outgoing radiation condition at future null infinity [48; 39]. We solve the generalized eigenvalue problem (19) using standard numerical linear algebra routines (that is, with the eigen solver in the Julia standard library).
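To make the structure of these operators concrete, the short sketch below builds the sparse matrices \(\mathbb{D}_{1}\) and \(\mathbb{D}_{2}\) implied by Eqs. (16) and checks them against a known Chebyshev derivative. It uses generic numpy/scipy tools rather than the ApproxFun implementation used in our code, and the truncation order is arbitrary; in the actual solver these operators, together with the sparse basis-conversion and variable-coefficient matrices, assemble the pencil of Eq. (19).

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.sparse import diags
from scipy.special import eval_chebyu, eval_gegenbauer

N = 8  # highest retained Chebyshev degree (arbitrary for this illustration)

# Sparse differentiation operators implied by Eqs. (16): D1 maps Chebyshev-T
# coefficients of R to the U = C^(1) coefficients of dR/dx, and D2 maps them
# to the C^(2) coefficients of d^2R/dx^2.
D1 = diags([np.arange(1, N + 1)], [1], shape=(N + 1, N + 1)).toarray()
D2 = diags([2 * np.arange(2, N + 1)], [2], shape=(N + 1, N + 1)).toarray()

# Check on R(x) = T_3(x): dR/dx = 3 U_2(x) and d^2R/dx^2 = 6 C_1^(2)(x).
c = np.zeros(N + 1); c[3] = 1.0
x = np.linspace(-1.0, 1.0, 7)
dR = sum(a * eval_chebyu(n, x) for n, a in enumerate(D1 @ c))
d2R = sum(a * eval_gegenbauer(n, 2.0, x) for n, a in enumerate(D2 @ c))
print(np.allclose(dR, C.chebval(x, C.chebder(c))))      # True
print(np.allclose(d2R, C.chebval(x, C.chebder(c, 2))))  # True
```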
## Appendix B Structure of QNM Radial Eigenfunctions
Here, we briefly discuss the structure of the radial eigenfunctions for QNMs on \(\tau=const.\) HPHC hypersurfaces. In HPHC coordinates, \(\tau=const.\) hypersurfaces become tangent to null surfaces at the future horizon and null infinity (and hence tangent to the characteristic curves of the Teukolsky equation). There are two main effects that determine the far-field behavior of the radial quasinormal eigenfunctions to the Teukolsky equation. First, there is some flexibility in HPHC coordinates as to where \(\tau=const.\) hypersurfaces intersect future null infinity (while pinned to the same location at the horizon) [41]; we call this flexibility a _propagation effect_. Second, the rate at which the coordinate volume on the slice changes as a function of \(\rho\) controls the behavior of the eigenfunction at future null infinity, which gives rise to the familiar 1/r decay at large radii. Since we solve for the rescaled variable \(\Psi_{4}\) defined in Eq. (4) in both the QNM (initial data) code and the evolution code, the \(1/r\) volume effect is factored out. We discuss the propagation effect in detail below.
### Propagation effects
To understand the nature of propagation effects in HPHC coordinates, we first solve the null geodesic equation in ingoing Eddington-Finkelstein coordinates (for a related discussion see Appendix C of [42]). Setting \(\xi_{\theta}=\xi_{\phi}=0\), we find that the characteristic speeds of outgoing and ingoing null geodesics are
\[c_{+} =\frac{\xi_{v}^{+}}{\xi_{r}^{+}}=1-\frac{4Mr}{2Mr+\Sigma_{BL}} \tag{20a}\] \[c_{-} =\frac{\xi_{v}^{-}}{\xi_{r}^{-}}=-1. \tag{20b}\]
To determine the radial null characteristics on a hyperboloidal slice, we first define a radial coordinate \(\rho(r)\) and a time coordinate \(T(v,r)\). Under that coordinate change the characteristic speeds are
\[\tilde{c}_{\pm}=\frac{d\rho/dr}{\frac{1}{c_{\pm}}\partial_{v}T+\partial_{r}T}. \tag{21}\]
From this, we can determine the time that it takes for a radially outgoing wave, starting at radius \(\rho_{0}\) to reach
null infinity by integrating
\[\tau_{+}(\rho_{0})=\int_{\rho_{0}}^{\rho_{\mathcal{J}^{+}}}\frac{1}{\tilde{c}_{+}( \rho)}d\rho. \tag{10}\]
Similarly, one can compute the time in this coordinate for a radially ingoing wave to reach the black hole horizon:
\[\tau_{-}(\rho_{0})=\int_{\rho_{\mathcal{H}^{+}}}^{\rho_{0}}\frac{1}{\tilde{c}_{- }(\rho)}d\rho\, \tag{11}\]
where \(\rho_{\mathcal{J}^{+}}\) and \(\rho_{\mathcal{H}^{+}}\) are, respectively, the radius of null infinity and horizon in this coordinate which we assume to be independent of time.
One may interpret these time intervals as the amount by which the slice mismatches with a radially outgoing/ingoing spherical wavefront. As the mode is exponentially decaying, the amplitude of the mode will be affected by this time mismatch. For a quasinormal mode with frequency \(\omega\), the amplitude variation of the radial wavefunction due to this mismatch time, for the outgoing part of the wave, is given by:
\[A(\rho)\propto\exp\{\Im\{\omega\}\tau_{+}(\rho)\}. \tag{12}\]
Note that here the amplitude increases faster towards infinity for a wave with a higher decay rate.
Fig. (8) diagrams the basic intuition behind this result. Far from the black hole, a spherical wave would be approximately advected with a decay of 1/r along the null geodesics. After factoring out the 1/r decay, we expect the amplitude of the wave to be roughly constant along the outgoing null geodesics labeled by grey dashed lines at large radius. We see that on a \(T=const.\) hypersurface, the faster decaying mode would have a radial amplitude \(A(\rho)\) that decays faster as we approach the black hole.
We note that the propagation time depends on the geodesic trajectory one considers (although the natural choice is to compute the propagation time from the characteristic speed of the Teukolsky equation, as we do here). Since QNMs carry angular momentum, we expect that the propagation times (10) and (11) are lower bounds on the propagation time of a quasinormal mode wavefront. We also note that the wavefront of a mode only travels along a null geodesic in the eikonal limit; finite-wavelength effects may further complicate our above argument. Nevertheless, we show that the QNM radial functions roughly follow the above argument in Fig. (9). In that figure we plot the radial eigenfunction for the overtones following the procedure outlined in Appendix (A), and compare them to the predicted scaling of the radial amplitude from (12). In particular, we present the radial eigenfunctions for successive overtones of a Schwarzschild black hole with \(l=m=2\). We see that the radial profiles of the modes roughly follow the scaling as predicted by Eq. (12).
To determine \(\tau_{+}\left(\rho\right)\), we note that in the coordinates we are using that
\[\rho(r) =\frac{L^{2}}{r} \tag{13}\] \[T(v,r) =v+h(r)\, \tag{14}\]
Figure 8: Penrose diagram of the Kerr exterior. The black dashed curve describes a \(T=const.\) hypersurface in HPHC coordinates. The gray dotted lines represent the trajectories of outgoing null geodesics. The box in the top-right corner shows observed black hole ringdown signals at future null infinity, one with slower decay (blue) and the other faster (red). Note that the signals are illustrative only; any decaying wave would have infinite amplitude at \(i^{0}\), zero amplitude at \(i^{+}\), and infinitely many oscillations in between.
Figure 9: Radial eigenfunctions of overtones for Schwarzschild with \(l=m=2\) (solid, colored lines). We also plot predictions from radially outgoing geodesics with dashed lines, by evaluating equation (12) with the overtone frequencies. We note that the geodesic prediction yields larger slopes for the radial functions at \(\mathcal{J}^{+}\) for overtones with higher n, in accordance with the eigenfunctions we calculated from the Teukolsky equation. However, the slope for radial eigenfunctions is not precisely matched by the geodesic prediction. This is likely due to the fact that quasinormal modes do not exactly satisfy the eikonal limit from which (12) is derived.
where
\[\frac{dh}{dr}=-1-\frac{4M}{r}. \tag{10}\]
Here, \(L\) is a constant length scale that we take to be 1. The locations of the horizon and future null infinity are then \(\rho=\frac{1}{M+\sqrt{M^{2}-a^{2}}}\) and \(\rho=0\), respectively. The ingoing and outgoing characteristic speeds in these coordinates are
\[\tilde{c}_{+} =-\frac{a^{2}\rho^{2}\cos(\theta)-2M\rho+1}{8M^{2}-4a^{2}M\rho\cos (\theta)} \tag{11}\] \[\tilde{c}_{-} =\frac{\rho^{2}}{4M\rho+2}. \tag{12}\]
For illustrative purposes we calculate the propagation time defined in equations (10) and (11), for null rays for a \(M=1/2\), \(a=0\) black hole. In this case, the ingoing and outgoing propagation time become:
\[\tau_{+}(\rho) =2\log\left(1-\rho\right) \tag{13a}\] \[\tau_{-}(\rho) =2-\frac{2}{\rho}+2\log\left(\rho\right). \tag{13b}\]
Note that the outgoing time delay diverges at the horizon, and the ingoing time delay diverges at infinity; this reflects the fact that there is no outgoing radiation at the black hole horizon and no ingoing radiation at future null infinity.
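For concreteness, the predicted radial amplitude scaling of Eq. (12) with the propagation time (13a) can be evaluated directly; the decay rates in the snippet below are assumed values chosen only for illustration, not fitted overtone frequencies.

```python
import numpy as np

def tau_plus(rho):
    """Outgoing propagation time of Eq. (13a) for M = 1/2, a = 0."""
    return 2.0 * np.log(1.0 - rho)

def amplitude(rho, im_omega):
    """Radial amplitude scaling of Eq. (12), up to an overall normalisation."""
    return np.exp(im_omega * tau_plus(rho))

rho = np.linspace(0.0, 0.95, 6)       # rho = 0 is null infinity, rho -> 1 the horizon
for im_omega in (-0.18, -0.55):       # assumed decay rates; larger for higher overtones
    print(im_omega, np.round(amplitude(rho, im_omega), 3))
```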
## Appendix C Convergence Tests
In this section we show the numerical convergence of the time domain code [77] used in this work. We consider the evolution of a single QNM, and a numerical scattering experiment. The time evolution code makes use of pseudo-spectral methods in the angular (\(\theta\)) direction and \(4^{th}\) order finite difference methods in the radial (\(\rho\)) direction.
The initial data is then integrated in time using an explicit RK4 integrator. Therefore, fixing the angular resolution, one expects the code to approach the continuum solution with \(4^{th}\) order convergence. In general, we find that the numerical error is dominated by the radial direction.
For our convergence tests, we fix the number of angular collocation points to be 40, and increase the radial resolution by successive factors of 2. We see \(4^{th}\) order convergence in Fig. (10), for single QNM evolution, and in Fig. (11), for the gravitational wave scattering experiment. We show that the numerical resolution of our simulations is not the limiting factor in the precision of our QNM fits to scattering initial data in Fig. (12), where we compare the spatial fit applied to both the high and mid resolution runs.
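For reference, the self-convergence order quoted above can be estimated from three runs whose resolutions differ by successive factors of two. The snippet below is a generic sketch of that estimate; the waveform arrays are synthetic placeholders with a built-in fourth-order error, not data from our simulations.

```python
import numpy as np

def observed_order(u_low, u_mid, u_high):
    """Self-convergence order from runs at resolutions h, h/2 and h/4."""
    e_coarse = np.linalg.norm(u_low - u_mid)
    e_fine = np.linalg.norm(u_mid - u_high)
    return np.log2(e_coarse / e_fine)

# Placeholder example with a known 4th-order error, just to exercise the helper.
t = np.linspace(0.0, 1.0, 200)
exact = np.sin(t)
u = {h: exact + h**4 * np.cos(t) for h in (1.0, 0.5, 0.25)}
print(observed_order(u[1.0], u[0.5], u[0.25]))   # ~4
```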
Figure 11: Convergence in a scattering experiment simulation off a Kerr background with \(a=0.7\) (top) and \(a=0.999\) (bottom). Here, the number of angular collocation points is fixed to \(n_{\theta}=40\) for all runs, and we change the radial resolution by factors of 2 successively. The ultra low resolution is run with \(n_{\rho}=64\), low with \(n_{\rho}=128\), mid with \(n_{\rho}=256\), and high with \(n_{\rho}=512\).
Figure 10: Convergence evolving a single QNM with \(n=3\) and \(a=0.999\). Here, the number of angular collocation points is fixed to 40 for all runs, and we change the radial resolution by factors of two successively. The low resolution is run with \(n_{\rho}=128\), mid with \(n_{\rho}=256\), and high with \(n_{\rho}=512\) grid points. By the _analytic_ answer we mean the prediction for \(\Psi_{4}\) at future null infinity for an \(n=3\), \(l=m=2\) quasinormal mode.
Figure 12: Convergence of fitting for the scattering initial data on a Kerr background with \(a=0.7\) (top) and \(a=0.999\) (bottom), here illustrated with the spatial fitting algorithm (spacetime fitting shows similar convergence properties). The solid lines show the fitting result from the high resolution simulation (\(n_{\rho}=512\)), and dashed lines shows that from the mid resolution (\(n_{\rho}=256\)); for \(a=0.999\), we also plot the low resolution (\(n_{\rho}=128\)) in dotted lines. We find that for \(a=0.7\) increasing numerical resolution does not improve the stability of fitting the overtones, indicating the source of instability is due to a resolved transient. For \(a=0.999\), we find that the late time kink in the slope for the first overtone (red lines) is due to the numerical truncation error; the kink moves to a later time as resolution improves, from around 180M for the low resolution (red, dashed line) to around 400M to the high resolution (red, solid line). However, the transient dominates the first overtone for at least the first 250M of evolution, during which the mid and high resolution agree. |
2309.09248 | The Director: A Composable Behaviour System with Soft Transitions | Software frameworks for behaviour are critical in robotics as they enable the
correct and efficient execution of functions. While modern behaviour systems
have improved their composability, they do not focus on smooth transitions and
often lack functionality. In this work, we present the Director, a novel
behaviour framework that addresses these problems. It has functionality for
soft transitions, multiple implementations of the same action chosen based on
conditionals, and strict resource control. The system was successfully used in
the 2022/2023 Virtual Season and RoboCup 2023 Bordeaux, in the Humanoid Kid
Size League. It is implemented at https://github.com/NUbots/DirectorSoccer,
which also contains over thirty automated tests and technical documentation on
its implementation in NUClear. | Ysobel Sims, Trent Houliston, Thomas O'Brien, Alexandre Mendes, Stephan Chalup | 2023-09-17T11:56:59Z | http://arxiv.org/abs/2309.09248v2 | # The Director: A Composable Behaviour System with Soft Transitions
###### Abstract
Software frameworks for behaviour are critical in robotics as they enable the correct and efficient execution of functions. While modern behaviour systems have improved their composability, they do not focus on smooth transitions and often lack functionality. In this work, we present the Director, a novel behaviour framework and algorithm that addresses these problems. It has functionality for soft transitions, multiple implementations of the same action chosen based on conditionals, and strict resource control. This system has shown success in the Humanoid Kid Size 2022/2023 Virtual Season and the Humanoid Kid Size RoboCup 2023 Bordeaux competition.
Keywords: decision making, robotics, behaviour trees
## 1 Introduction
Autonomous robotics is a large field where robots perform diverse tasks. The actions performed at any given moment depend on state and environment information. Behaviour frameworks allow developers to define rules for behaviour algorithms to manage a robot's actions. There are many desirable features for a behaviour system:
* It should facilitate a smooth transition between actions
* It should account for differences over time in information quality and environment knowledge
* It should be flexible and versatile
* Modules competing for resources should not be able to use a resource at the same time, such as motors
* The behaviour should be composable for quick development
* State information should be available for debugging
* The system should run in real-time
Looking back at previous research on behaviour systems, the classical subsumption system [3] provided a modular approach but lacked composability and functionality. Behaviour trees are modular and composable but are computationally expensive. They lack some functionality, such as changing actions based on the existence or absence of environment knowledge, which research aims to address [1, 10, 7]. The \(ABC^{2}\) behaviour system previously used in RoboCup used
a queue-based agenda focusing on multi-agent gameplay. It did not facilitate smooth transitions and lacked complexity beyond multi-agent functionality. The Dynamic Stack Decider (DSD) [6] is composable and maintainable but lacks desired functionality for soft transitions. The Humanoid Control Module [2] was incorporated with the DSD to abstract lower-level hardware modules from higher-level strategy modules and to avoid conflicts in motor control. It suggests that while the DSD has benefits, it alone does not provide all desired functionality for a complete behaviour system.
The literature has improved over time towards composable, functional and transparent systems. However, each still lacks some desired functionality. Research often focuses on one or a few key components, such as fidelity of information [9], without considering all aspects needed for a general behaviour system, including transitions. This research aims to address these gaps.
In this work, we present The Director, a behaviour framework and algorithm for autonomous systems that emphasises modularity and transitions. It incorporates functionality for soft transitions to ensure safe robot motions as tasks change. Its modular architecture enables programmers to focus on small and specific functionality for the robot rather than the entire system. The algorithm's versatility facilitates complex behaviours, including defining multiple ways to complete one task chosen based on conditionals. A library of automated tests supports the creation of the Director's backend algorithm. The Director's design integrates many desirable features into a simple, coherent framework.
The Director is general-purpose and is implementable in most robotic software architectures, such as ROS. We have implemented it within NUClear [4] for the RoboCup Humanoid Soccer League, shown in both the 2022/2023 Virtual Season and RoboCup 2023 Bordeaux, with a game won with the Director in each competition. NUClear is a modular, low-latency message-passing architecture for resource-constrained robotics systems. We converted from a subsumption-based system to the Director, with successful testing in the Webots RoboCup simulation. The Director framework aims to simplify the implementation of new behaviours, and we found this to be true when converting our system to the Director.
## 2 Algorithm
The Director is an algorithm for controlling the flow of behaviour in a system. It uses Providers and Tasks to build a tree structure, where Tasks are messages requesting functionality, and Providers provide the functionality for those behaviours. Providers can call subtasks, which are considered children of the Provider. The Director has a wide range of features to solve limitations in existing behaviour systems. Definitions of key terms are in Table 1. Figure 1 shows an example of a Director graph for a soccer-playing humanoid robot.
We use a basic soccer-playing scenario to describe the concepts in the Director. The scenario is that a humanoid robot approaches a ball, kicks the ball, falls over mid-kick and gets back up. The reason for falling over may be an unstable
kick engine or the interference of another player. The transition from walking to kicking should be stable. When the robot falls over, it should stop kicking and get up. Figure 1 shows the Director graph at the start of this scenario, where the robot has seen the ball and is walking toward it.
A behaviour system can be split into five main sections, as shown at the top of Figure 1. The Director algorithm does not have an inherent knowledge of these layers, but they can be incorporated into the program's structure to more easily conceptualise and modularise the behaviour. The literature uses different terms interchangeably, which can lead to confusion.
We propose the following terminology. The actuation layer involves controlling the robot's hardware. The skill layer is responsible for performing complex physical movements of the robot. The planning layer determines when and in what way the skill layer is called based on the current state of the environment. The strategy layer makes specific high-level decisions by stringing together planners that run based on priority and state information. Strategy modules are often small and do not receive specific data from the layer above. The purpose layer is at the top of the Director tree and determines the overall goal of the robot.
Figure 1: This figure shows a Director graph for a humanoid robot playing soccer. It shows active (solid blue), blocked (dashed red), and inactive (dotted black) Providers and Tasks. The values of Tasks are their priority.
The skill and actuation layers can abstract platform-specific functionality from the higher-level behaviour layers. The Humanoid Control Module introduced by Bestmann et al. [2] abstracts the robot's motions from the higher behaviour layers and avoids conflicts in motor control. They focus on using the same higher-level modules across legged and wheeled robots, with the lower-level control module implementing the platform-specific skills and actuations. The Director inherently facilitates this abstraction through loose coupling.
### Providers
Providers are functions that perform actions to satisfy the requirements of one particular Task, either directly or by running subtasks. The function runs when the Task it provides for is requested from anywhere in the system. Standard Providers use the 'Provide' DSL keyword. These Providers may have conditions on when they can run, defined in their declaration.
Providers run when a new Task is received or when a subtask is Done. In addition, they may run on other triggers defined within the general software architecture, such as running at a particular frequency. Other triggers will only run the Provider if it is active in the tree. Functions within the software system that aren't Providers are non-Providers.
In our soccer-playing scenario, the boxes in the diagram are Providers. In the beginning, the robot should walk to the ball. The WalkToBall box is a Provider
| **Concept** | **Description** |
| --- | --- |
| Provider | A function that provides the functionality for a Task. |
| Non-Provider | A function that doesn't provide for a Task. |
| Provider Group | A collection of Providers that service the same Task. |
| Task | A request for a single piece of functionality. |
| Subtask | Task from a Provider. |
| Root Task | A Task from a non-Provider. |
| Priority | A value that determines if a Task can take control. |
| Optional | The Task is not required to run and will defer to non-optional Tasks. |
| Done | A Task that indicates that the Provider has completed its Task. |
| Idle | A Task that will keep running the last set of Tasks for a Provider. |

Table 1: Definitions for key terms for the Director algorithm.
| **DSL Keywords** | **Description** |
| --- | --- |
| Provide | The normal Provider that provides the functionality for a Task. |
| Start | The Provider that is run when a Provider group gains control. |
| Stop | The Provider that is run when a Provider group loses control. |
| Needs | The Provider needs control of the specified subtask to run. |
| When | Conditions on when the Provider can be active. |
| Causing | The state that will be achieved by running this Provider. |
| Uses | Retrieves information on a Provider's subtask state. |
| RunReason | Gives information on why the Provider ran. |

Table 2: DSL keywords used in the Director.
that provides the functionality for the WalkToBall Task. It finds the ball position and requests a WalkTo Task with this ball position data. The WalkTo Task manages what velocity to send to the walk engine based on the target position. The graph continues with the walk engine and limb Providers, with the leaf Providers controlling the motors.
#### Provider Groups
A group of Providers for one Task type is called a Provider group. Only one Provider in a group can run at a time, with the running Provider determined by DSL keywords or declaration order. If a Provider can no longer run, the Task is reassigned to another Provider in the group. Subtasks from a Provider in a group are from the group, not the specific Provider.
Striker and Walk are Provider groups in Figure 1. Only one Provider in a Provider group is active at any given time. The Striker is using the playing Provider since that condition is true. If the game phase were to transition to the ready state, the active Provider would be the ready Provider.
#### Provider Types
There are three types of Providers, each specified by a DSL keyword. The 'Provide' type executes functionality for a Task, as discussed previously. If a 'Start' Provider type exists, the 'Provide' Provider will run after it.
The 'Start' type sets up the state of the Provider when a Provider group gains control of its Task after not having control previously. It runs once.
The 'Stop' type is used to clean up code and only runs when a Provider group is losing control of its Task and will no longer run. It cannot run without a 'Start' or 'Provide' running before it. It also runs once.
In our scenario, the Walk Provider group will have a Start Provider that sets up the walk engine. When the Kick Provider takes over, the Walk Provider group will have a Stop Provider that cleans up the walk engine.
#### Needs
The DSL keyword 'Needs' is used to ensure that a Provider can only run if it has priority to take control of its subtasks.
In our scenario, the skill Providers Need the relevant limbs for their Task. For example, the Walk and Kick Providers will specify that they Need the LeftLeg and RightLeg Tasks. This conflict means the Walk and Kick Providers cannot run simultaneously. Because the KickToGoal Task has higher priority than the WalkToBall Task, if both are requested to run then the Kick will take control and the Walk will be blocked from running as it does not have control of its Needs subtasks.
If a Provider does not have the Needs keyword, it will run and request subtasks, but those subtasks will not run until Providers are available to run them. In our example, the planning modules do not Need their subtask. They continuously run without taking control of the motors until they need to act. The RelaxWhenFalling does not Need the Relax, since it only requests the Relax subtask when the robot is falling. It will not take over the motors until the robot
falls over while kicking. When this happens, the Kick will lose control. Once the robot has settled on the ground, the GetUpWhenFalling will request the GetUp, taking control of the motors from the Relax.
#### When
'When' is a DSL keyword that only allows a Provider to execute when the system satisfies a specified condition. The condition has three parts - a state type, a comparison operator and a state value.
In our scenario, the Kick only runs When the stability state of the robot is equal to standing. This prevents the robot from transitioning from the walk mid-step and falling over.
This functionality, combined with Provider groups, solves the problem described by Veloso et al. [9], where existing systems don't consider the precision of environment knowledge. They proposed a behaviour system that emphasised the accuracy and fidelity of information in determining how to perform a task. A single action, such as localising, could have multiple methods for execution depending on the quality of sensor data. In the Director, each method can have an associated Provider, with the precision level for that method defined in the When conditional.
#### Causing
The DSL keyword 'Causing' in the Director allows for soft transitions between Providers. It declares that the Provider will cause a condition to be satisfied. Like 'When', it has a state type, a comparison operator and a state value. If a Provider has a 'When' condition, the Director will prioritise lower priority Providers with a matching 'Causing' if the 'When' fails.
In our scenario, transitioning between walking and kicking would be difficult without Causing. The Kick requires the robot to be standing, but it is walking and therefore does not satisfy this condition. A version of the Walk in the Walk Provider group causes the stability state to become equal to standing. This standing Walk Provider makes the walk stop cleanly and stand still, allowing the Kick to execute safely.
The 'Causing' keyword enables smoother transitions between Providers, allowing a walk to cause the robot to enter a state where it can transition to kicking.
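The interplay of 'When' and 'Causing' can be summarised in a small sketch. The classes and the `resolve` function below are illustrative stand-ins rather than the Director's actual C++ DSL: conditions are reduced to simple key/value pairs, and the names are chosen only to mirror the walking-to-kicking example above.

```python
from dataclasses import dataclass, field

@dataclass
class Provider:
    name: str
    when: dict = field(default_factory=dict)     # e.g. {"stability": "standing"}
    causing: dict = field(default_factory=dict)  # e.g. {"stability": "standing"}

@dataclass
class Group:
    providers: list

def runnable(p, state):
    return all(state.get(k) == v for k, v in p.when.items())

def resolve(requesting: Group, holding: Group, state: dict):
    """Pick which Provider actually runs when `requesting` wants the resource."""
    for p in requesting.providers:               # declaration order
        if runnable(p, state):
            return p                             # normal hand-over
    # Every candidate's 'When' failed: push a Provider of the group currently
    # holding the resource whose 'Causing' will establish the required state.
    required = {k: v for p in requesting.providers for k, v in p.when.items()}
    for p in holding.providers:
        if any(required.get(k) == v for k, v in p.causing.items()):
            return p
    return None

# Walking -> kicking example from the text: the Kick requires stability ==
# standing, so the Director pushes the Walk Provider that causes that state.
walk = Group([Provider("walk"),
              Provider("walk_to_stand", causing={"stability": "standing"})])
kick = Group([Provider("kick", when={"stability": "standing"})])
print(resolve(kick, walk, {"stability": "walking"}).name)   # walk_to_stand
print(resolve(kick, walk, {"stability": "standing"}).name)  # kick
```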
### Tasks
Tasks are jobs that Providers execute and can contain data. They typically have three pieces of information: Task data, priority level, and an optional flag. Prioritisation of Tasks determines which should run, with non-optional Tasks always taking priority over optional ones. If a Task is optional, other subtasks can run if it cannot, but this is not true for non-optional subtasks. The Director will implement all-or-nothing for non-optional subtasks. Both Providers and non-Providers can request Tasks.
In our scenario, the Walk Provider requests LeftLeg and RightLeg subtasks. The Provider must have control over both Tasks for them to run. The Walk
Provider also has optional LeftArm and RightArm subtasks, which will run if possible but will not block the other subtasks from running.
#### Root Tasks
Tasks requested from non-Providers are called root tasks and are the starting points of the Director graph. These tasks are siblings within the tree. Root tasks are different from Tasks requested from Providers because they cannot be removed by running a Provider without the Task. Instead, the Task needs to be manually flagged in a way so that the Director will remove it.
In our scenario, FallManagement and Striker are Tasks requested from non-Providers. These requests start the Director graph.
#### Priority
Priority in the Director is determined based on the closest common ancestor of the two competing Tasks. For root tasks, the closest common ancestor will be the root element. Once the closest common ancestor is determined, the priority of each Task's branch will determine which Task has higher priority. The winner takes control and becomes active, while the evicted Task will watch for an opportunity to take back control.
When a Task's ancestor tree has an optional Task between itself and the common ancestor, it is considered optional. If one Task has an optional parentage and the other does not, then the optional Task will automatically lose. The Tasks are compared normally if both have optional Tasks in their parentage.
In Figure 1, the Striker Provider group requests four subtasks with different priorities. The KickToGoal Task has higher priority than the WalkToBall Task. When the KickToGoal Provider requests Tasks to execute the Kick, it will take over the limbs from the Walk because the KickToGoal branch has higher priority.
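The priority rule can be illustrated in a few lines of Python. The sketch below is only an illustration of the description above, not the NUClear implementation; the task names and priority values in the example are assumed, since Figure 1 encodes priorities graphically rather than in the text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    priority: int = 0
    optional: bool = False
    parent: Optional["Task"] = None   # None for root tasks

def branch_to(task, ancestor):
    """Tasks from `task` up to (but excluding) `ancestor`."""
    chain = []
    while task is not ancestor:
        chain.append(task)
        task = task.parent
    return chain

def challenger_wins(challenger: Task, incumbent: Task) -> bool:
    """Does `challenger` take control from `incumbent`?
    (Assumes neither Task is an ancestor of the other.)"""
    seen = set()
    t = incumbent
    while t is not None:
        seen.add(id(t)); t = t.parent
    common = challenger                     # closest common ancestor (None = root)
    while common is not None and id(common) not in seen:
        common = common.parent
    b_new, b_old = branch_to(challenger, common), branch_to(incumbent, common)
    new_opt, old_opt = any(t.optional for t in b_new), any(t.optional for t in b_old)
    if new_opt != old_opt:
        return old_opt                      # a purely optional branch always loses
    # Otherwise compare the priorities of the two branches at the common ancestor.
    return b_new[-1].priority > b_old[-1].priority

# Figure 1 example (priority values assumed for illustration):
striker = Task("Striker", priority=1)
kick = Task("KickToGoal", priority=4, parent=striker)
walk = Task("WalkToBall", priority=3, parent=striker)
print(challenger_wins(kick, walk))          # True: the Kick branch takes the limbs
```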
#### Done Tasks
A Done Task is a particular Task requested by a Provider to signal to its parent that it has completed its Task. The Provider group that created this Task will then be re-executed with the knowledge that it was triggered by a Done event from one of its children. The Done Provider's Task will not be removed from the Director tree unless it is a root Task.
In Figure 1, when the LeftHipYaw Provider moves to the requested motor position, it will send a Done Task to its parent. When the LeftLeg has received Done Tasks from all of its motors, it will send a Done Task itself. It can then run the next sequence of motor positions.
#### Idle Tasks
An Idle Task is a particular Task a Provider requests to signal to continue running its previous Tasks. For example, in our scenario, the GetUpWhenFallen can send an Idle Task when it is getting up and does not want to re-request the GetUp Task. If it is re-requested, the GetUp may run from the beginning and not reach the end of its motion.
### Data
Providers can access data about the local state of the Director to assist in making decisions and understanding the system state.
#### Uses
'Uses' is a DSL keyword used to obtain information about subtasks. It provides information about the run state of the subtask and whether it is Done. The run states include whether the subtask is currently running, queued to run, or not yet requested by the current Provider. The information comes from the subtask corresponding to the template type of the Uses keyword.
In the scenario with the LeftLeg and its motors, the Uses keyword tells the LeftLeg when all of its motors are Done. The GetUpWhenFallen can determine if the GetUp is currently running by using the Uses keyword. It can use this to determine whether to send an Idle Task rather than re-requesting the GetUp Task.
#### Run Reason
The RunReason DSL word retrieves information about why a Provider is running.
There are five possible run reasons - a new Task was requested, the Provider has become active, the Provider has become inactive, a subtask is Done, and the Provider is pushed because a higher priority module wants its Causing state to be true.
```
def Provide<GetUpWhenFallen, Uses<GetUp>, Trigger<Sensors>>
        (RunReason run_reason, Uses<GetUp> getup, Sensors sensors):
    # Sensors message received
    if run_reason is OTHER_TRIGGER:
        # Calculate if we should get up using sensors
        is_fallen = sensors.gyro > config.gyro_limit
        # Request a new GetUp
        if is_fallen and getup.run_state is NO_TASK:
            request_task(GetUp())
        # Idle on the GetUp if still getting up or queued to get up
        else if (is_fallen and getup.run_state is QUEUED)
                or (getup.run_state and not getup.done):
            request_task(Idle())
        # Else no Task is requested,
        # the GetUp is removed from the Director graph
```
Listing 1: Pseudocode for the GetUpWhenFallen Provider using the Uses and RunReason DSL keywords.
The RelaxWhenFalling and GetUpWhenFallen Providers in our scenario run when the robot falls over and once it is fallen, respectively. These Providers check the sensor data to determine if it needs to request its subtask. The run reason information tells the Providers if they are running because of a new sensor message or because their subtask is Done. An example of code for the GetUpWhenFallen Provider is in Listing 1.
## 3 Evaluation
### Composability
The Director makes it easy to combine modules. Requesting relevant Tasks combines desired modules into the program. Calling a 'ChaseBall' Task from a non-Provider can easily demonstrate the robot chasing a ball. The Task call will trigger all the necessary modules to chase a ball through subtasks. The easy modification and addition of behaviours are critical for debugging and developing in high-pressure scenarios, such as RoboCup. Subsumption-based systems [3] have more difficulty achieving this, as the layers build upon each other and are more tightly coupled.
### Extensibility
In the Director system, adding new components is straightforward and does not require a global reevaluation of priorities, unlike the subsumption architecture. New Providers and Tasks can be created and requested without significant changes to the rest of the system. Subsections of the system can be easily isolated. This flexibility allows for easy experimentation, modification and debugging of the system.
In our experience, the development of behaviour modules within the Director framework is significantly quicker than in our previous subsumption-like system. After converting our system, the high modularity made it easy to see ways to extend the functionality of the robots.
### Transitions
In humanoid robotics, transitions are a factor in preventing the robot from falling over and causing damage. The Director supports clean and safe transitions through conditionals on Providers to prevent them from running unless the system is in a particular state. Additionally, the Director includes functionality for soft transitions. Rather than immediately transitioning from one action to another, a Provider can handle the transition using the pushing functionality.
In the previous section, we used a scenario to explain the soft transition functionality with the 'When' and 'Causing' keywords. The Kick Provider's 'When' condition required the system stability state to be equal to 'standing'. A matching 'Causing' Walk Provider satisfied this state by making the walk engine
stop cleanly. When the Kick Provider tried to take control, it pushed the Walk Provider group to use the 'Causing' Walk Provider. The Walk then stops cleanly before transitioning to the Kick Provider.
This smooth transition from walking to kicking is critical for stability. Conditions on Providers make the robot act safely and move between motions smoothly. Existing literature rarely addresses transitions, and we are not aware of other systems with soft transitioning functionality.
### Hardware Control
The Director has a strict control system where a Provider can only provide for one Task at a time, and only one Provider of a Task type can be active at a time. Other Tasks and Providers are blocked and must wait for an opportunity to take control. This core rule in the algorithm prevents multiple sources from controlling one resource, such as the motors. By grouping motors, the system can ensure that only one module controls a kinematic chain.
In our previous system, all motor commands moved through one module that enforced a similar strict control system. Another solution proposed by Bestmann and Zhang uses the Humanoid Control Module [2] to implement a mutex on the motors. These approaches lack the modularity and composability that the Director inherently creates throughout the system, from high-level strategy modules down to low-level motor modules.
### Versatility
The Director has a versatile algorithm with extensive functionality. Research often focuses on adding one specific functionality. The Director aims to incorporate all needed functionality. Functionality for transitions and strict hardware control, as described previously, are critical parts of this versatility.
Another important aspect is the ability to create multiple implementations for one action. Provider groups facilitate this, where one implementation runs based on the system state. Numerous research articles address this concept in the context of the quality and existence of environment information [9, 7, 10]. Provider groups provide this functionality in a generalised way, where conditions determine the active Provider.
Modern implementations of behaviour trees for autonomous robotics request tasks and act based on the state of the subtask [8, 7, 10]. The Uses information and Done Tasks within the Director provide this functionality and could extend to conform to the behaviour tree structure if desired by adding fail and success responses from subtasks.
\(ABC^{2}\)[5] has a similar Provider-Task relationship and can apply conditionals to nodes. They do not address transitions, resource control and multiple implementations for tasks. These are not as important within the two-dimensional simulation competition scenario used in the article.
The complexity of the Director requires careful implementation of the back-end algorithm. We provide over thirty automated tests for the Director to aid
in the development of the algorithm 1. The computational complexity of the algorithm is comparable to other behaviour tree systems.
Footnote 1: [https://github.com/NUbots/DirectorSoccer/tree/main/module/extension/Director/tests](https://github.com/NUbots/DirectorSoccer/tree/main/module/extension/Director/tests)
### Transparency
While transparency is useful for debugging, it can also increase complexity when implementing behaviours. Providers in the Director have limited access to the current system state, with only local information accessible. They do not know the origin of their Task, although the Director algorithm could include this information if needed. While messages within the system can provide more context to Providers about the system state and environment, the Director is designed to be decoupled, allowing for shorter and more composable modules without dependencies on other aspects of the system.
The Director algorithm has a complete view of the system state, with all Providers and their active tasks and watchers visible at any given time. A graphical representation of the behaviour system from this information would enhance debugging. Additionally, each module could manage its history individually.
## 4 Conclusion
We presented the Director framework and algorithm and placed it within the context of existing behaviour systems. It is modular, composable, extensible and has functionality critical for autonomous robotic systems. The Director supports soft transitions, multiple implementations for the same task chosen based on conditionals, conditional requirements on Providers, and strict resource control extending to motor control.
#### Acknowledgements
This research is supported by 4Tel Pty Ltd and an Australian Government Research Training Program Scholarship to the first author. We acknowledge contributors of the NUbots Robotics Research Group, whose work this publication builds upon. Thank you to Alexandre Mendes and Stephan Chalup for their review of this work.
|
2309.16368 | Kinetic Simulation of He radio frequency capacitive coupled plasma | Radiofrequency capacitively coupled plasma is studied theoretically using a
Particle-in-Cell code. For He discharge, the time-averaged sheaths are in the
range of few centimeters. The sheath potential, ion, and electron energy and
angular distributions, discharge current, and dissipated power depend on the
driven potentials and frequencies. Increasing the amplitude of the high radio
frequencies increases the bulk density and the sheath potential and,
consequently, increases the plasma processing rate. Increasing the intermediate
radio frequency amplitude allows a wider sheath with a broad ion energy
distribution and a narrower ion angular distribution. Changing the amplitude
and the phase shift between driven frequencies provide different energies and
angular distribution allowing performing various processes. The interplay
between the sheath and bulk dynamics in the intermediate radiofrequency regime
and the high-frequency regime may excite harmonics in the discharge current. | M. Shihab, A. Elbadawy, M. S. Afify, N. El-Siragy | 2023-09-28T12:12:25Z | http://arxiv.org/abs/2309.16368v1 | # Kinetic Simulation of He radio frequency capacitive coupled plasma
###### Abstract
Keywords: Radio frequency sheaths, He discharge, Ion energy and angular distribution, Electron energy distribution, Power dissipation.
## 1 Introduction
Low-temperature plasma has a great potential for numerous applications in the growth and processing of nanomaterials and the fabrication of microelectronics, e.g., carbon nanotubes, nanowires, thin-film depositions, and anisotropic etching of metallic, semiconductor, and dielectric materials. The energy of incident ions on substrates determines the process type and the flux of ions determines the rate of the process. In this contribution, we study Helium (He) discharge utilizing the Particle-In-Cell technique. He is an inert gas. Its chemistry is simple and could be used
to host different gas compositions - such as O\({}_{2}\), N\({}_{2}\), CF\({}_{4}\), CH\({}_{4}\), H\({}_{2}\)O-without affecting their chemistry. Here, we try to reveal the effect of the amplitude of driven radio frequencies and their phase shift on the discharge dynamics, the ion energies and the ion angular distribution at electrodes, the electron distribution, and the dissipated power in the plasma. The driven frequencies are 60 MHz and 1 MHz. Tailoring the driven potential or driven electrodes with different radio frequencies is one of the hot topics of research nowadays [7; 8; 9].
In the next section, we give a short overview of the Particle-in-Cell technique, then in section 3 we introduce our results and close the manuscript with a conclusion in section 4.
## 2 Particle-In-Cell
The Particle-in-Cell (PIC) approach has been widely utilized as a computational method to understand and forecast the behavior of plasma [1; 2]. An interpolation approach is used to collect the charge and current density on a spatial mesh. The simulation domain is discretized into k\({}_{\rm th}\) grids as shown in figure (1), and the field equations are solved on the grid points. Interpolation is used to determine the force exerted on the super-particles. Each super-particle represents 10\({}^{3}\) to 10\({}^{6}\) real particles. In Fig. (2), the PIC simulation's computational cycle is depicted. The indices i and k are used to designate quantities evaluated for particles and grid points, respectively. There are four steps in the computing cycle (excluding the Monte-Carlo collision).
The particles' locations are computed initially. In the first step, the force on each particle and its acceleration are computed, and then the particles' locations and velocities are updated. In the second step, known as weighting, the charge density and the current are calculated at the grid points using the charge of each particle. The third stage integrates Poisson's equation to obtain the electric potential and uses a finite difference technique to compute the electric field E on the grid. In the final stage, the electric field at surrounding grid points is used to calculate the force on each particle. Because PIC algorithms can use Monte-Carlo methods to simulate collisions between charged particles and neutral atoms, an extra stage in the computing cycle is added, see Fig. (2).
Figure (1): The discretization of the simulation domain into k\({}_{\rm th}\) grids.
Figure (2): A schematic flow chart of PIC modules. The dashed lines are to short cut Monte-Carlo calculations for collisionless plasma.
Considering collisions between particles is time consuming. The null collision method is employed to perform the simulation in a reasonable time. This method involves selecting a particle and comparing the relative chance of a collision to a random number to see if one really occurs. In low-temperature plasma, the degree of ionization is small; hence, only electron-neutral and ion-neutral collision models were used for these simulations. Scalability is crucial because it connects simulations to the real world of plasma. The values given to parameters such as grid spacing, time step, and super-particle density are critical since they determine the simulation's speed and accuracy. A few aspects must be considered before performing a PIC simulation of the plasma in order to avoid unphysical outcomes.
In the simulation, the number of super-particles Np should be substantially more than the number of grid cells ng, i.e. Np \(\gg\) ng. This is to ensure that, on average, each grid cell includes multiple particles during the simulation. The simulation will be noisy if the number of particles is too low. The grid cell size \(\Delta\)x should be on the order of the Debye length. The Debye length is the longest distance over which individual particle Coulomb forces are significant. If the grid spacing is reduced, we will be unable to eliminate short-range particle interactions, which are irrelevant to the plasma's overall behavior. Important electric field gradients can be neglected if the spacing is made bigger than Debye length, because the fields are only determined at the grid points. Furthermore, because grid cells affect particle size, larger grid cells will produce unphysical results. In addition, the time step \(\Delta\)t must be less than the period of plasma oscillations and \(\Delta\)t should be small enough to allow stable and precise integration of the particle equations of motion, as well as correct reproduction of particle oscillations.
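As a quick illustration of these constraints, the check below evaluates the electron Debye length and plasma frequency and compares them with the discretization quoted later in Sec. 3 (259 grid points over 15 cm and a time step of 1/600 of the 60 MHz period). The bulk density and electron temperature used here are assumed placeholder values, not results of this work.

```python
import numpy as np

e, eps0, m_e = 1.602e-19, 8.854e-12, 9.109e-31
dx = 0.15 / 259                    # grid spacing from Sec. 3 [m]
dt = 1.0 / (600 * 60e6)            # time step from Sec. 3 [s]
n_e, Te_eV = 1e14, 3.0             # assumed bulk density [m^-3] and electron temperature [eV]

lambda_D = np.sqrt(eps0 * Te_eV * e / (n_e * e**2))   # electron Debye length [m]
omega_pe = np.sqrt(n_e * e**2 / (eps0 * m_e))         # electron plasma frequency [rad/s]

print(f"dx = {dx:.2e} m, Debye length = {lambda_D:.2e} m, resolved: {dx < lambda_D}")
print(f"dt = {dt:.2e} s, omega_pe*dt = {omega_pe * dt:.3f}, resolved: {omega_pe * dt < 0.2}")
```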
Particles passing across cell borders generate density fluctuations, which are then transmitted to the potentials and electric fields. If the fluctuations are lower than the magnitude of the applied potentials and do not cause unstable behavior, they can be ignored. Electrons are extremely sensitive to field fluctuations, which can lead to an unphysical increase in electron energy. Ions are slower to respond to fields; therefore, transitory fluctuations have no effect on them. If the artificial rise in electron energy grows great enough, it might produce excess ionization, which raises plasma density, which magnifies the heating, resulting in an exponential increase in plasma density, and the simulation eventually breaks down. Another issue is the loss of resolution in low-density areas, such as sheaths. As a result, the number of particles in a super-particle must be reduced, i.e. smaller super-particles must be used. For more details about PIC simulation, please read [10; 11; 12; 13].
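To make the four-step cycle of Fig. (2) concrete, the sketch below implements a minimal one-dimensional electrostatic PIC loop with cloud-in-cell weighting and a spectral Poisson solve. It is an illustration of the scheme only, not the code used in this work: it treats a single species with periodic boundaries, assumes a uniform neutralising background, omits the Monte-Carlo collision step, and all numerical values are placeholders.

```python
import numpy as np

ng, L, N = 64, 0.15, 10_000          # grid points, domain length [m], super-particles
dx = L / ng
dt = 1.0 / (600 * 60e6)
qm = -1.602e-19 / 9.109e-31          # electron charge-to-mass ratio
q_super = -1.602e-19 * 1e5           # each super-particle bundles ~1e5 electrons (assumed)
eps0 = 8.854e-12

rng = np.random.default_rng(0)
x = rng.uniform(0, L, N)             # positions [m]
v = rng.normal(0, 1e6, N)            # velocities [m/s]

def weight_charge(x):
    """Step 2: linear (cloud-in-cell) weighting of charge to the grid."""
    s = x / dx
    i = np.floor(s).astype(int) % ng
    w = s - np.floor(s)
    rho = np.zeros(ng)
    np.add.at(rho, i, (1 - w) * q_super / dx)
    np.add.at(rho, (i + 1) % ng, w * q_super / dx)
    return rho

def solve_field(rho):
    """Step 3: solve Poisson's equation spectrally and return E on the grid."""
    rho_k = np.fft.rfft(rho - rho.mean())        # mean removal: neutralising background
    k = 2 * np.pi * np.fft.rfftfreq(ng, dx)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / (eps0 * k[1:] ** 2)
    return -np.gradient(np.fft.irfft(phi_k, ng), dx)

def gather_force(E, x):
    """Step 4: interpolate E from neighbouring grid points to each particle."""
    s = x / dx
    i = np.floor(s).astype(int) % ng
    w = s - np.floor(s)
    return (1 - w) * E[i] + w * E[(i + 1) % ng]

for step in range(100):
    rho = weight_charge(x)
    E = solve_field(rho)
    v += qm * gather_force(E, x) * dt            # step 1: accelerate ...
    x = (x + v * dt) % L                         # ... and move the particles
    # A Monte-Carlo (null) collision step would be inserted here.
```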
## 3 Results and discussion
Here, we employ Particle-in-Cell (PIC) simulations to study the ion and electron distributions in radio frequency capacitively coupled plasmas (RF-CCPs), where the electrodes are biased with two radio frequencies, 1 MHz and 60 MHz. The code is benchmarked, so its predictions are trustworthy [14]. For the simulation, a geometrically symmetric reactor is chosen. The distance between the electrodes is 15 cm. Time and space are discretized in a way to avoid
numerical instabilities. The simulation is repeated for 500 RF cycles of the 60 MHz. The time step is 1/600 of the periodic time of the 60 MHz. The distance between the two electrodes is discretized into 259 grids. The 1 MHz is comparable to or smaller than the typical ion plasma frequency in RF-CCPs, while the 60 MHz is much higher than the ion plasma frequency. The 1 MHz allows ions to respond partially to the instantaneous electric fields. On the contrary, the 60 MHz compels ions to respond to the time-averaged field. The driven potential is V = V\({}_{1}\)sin(2\(\pi\) 60MHz t) + V\({}_{2}\) sin(2\(\pi\) 1MHz t + \(\theta\)). Based on the amplitude of the driven potentials, three cases are considered: In case (1), V\({}_{1}\) = V\({}_{2}\) = 250V, where the effect of both frequencies is supposed to be equal. In case (2), V\({}_{1}\) = 100V and V\({}_{2}\) = 400V; the plasma is mainly driven by the intermediate frequency. In case (3), V\({}_{1}\) = 400V and V\({}_{2}\) = 100V; the plasma is ignited by the high frequency. The simulations are carried out twice for the three cases: when the phase shift (\(\theta\)) is zero, the corresponding results are displayed as solid lines; when the phase shift \(\theta\) is \(\pi\)/2, results are presented via dashed lines.
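For reference, the tailored driving waveform and the three amplitude cases can be written down directly; the snippet below simply evaluates the driving voltage defined above for each case and both phase shifts and reports the peak-to-peak value.

```python
import numpy as np

def drive(t, V1, V2, theta=0.0):
    """V(t) = V1 sin(2*pi*60 MHz*t) + V2 sin(2*pi*1 MHz*t + theta)."""
    return V1 * np.sin(2 * np.pi * 60e6 * t) + V2 * np.sin(2 * np.pi * 1e6 * t + theta)

t = np.linspace(0.0, 1e-6, 60_000)                # one full 1 MHz period
cases = {"case (1)": (250, 250), "case (2)": (100, 400), "case (3)": (400, 100)}
for name, (V1, V2) in cases.items():
    for theta in (0.0, np.pi / 2):
        V = drive(t, V1, V2, theta)
        print(name, f"theta={theta:.2f}", f"Vpp={V.max() - V.min():.0f} V")
```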
Let us first discuss the results when there is no phase shift. Close to the electrodes, quasineutrality breaks down and RF sheaths are formed due to the escape of electrons from the discharge volume into the electrodes.
The time-averaged density of the left sheath for the three cases is shown in Figure 3. Case (1), case (2), and case (3) are represented with black, blue, and red solid lines, respectively. For each case, the upper line presents the ion density and the lower gives the electron density. The minimum sheath width belongs to case (3), where the amplitude of the high-frequency signal is larger than that of the intermediate frequency. The power of the high-frequency signal is mainly dissipated in the plasma bulk; therefore, the bulk density increases, providing a larger ion flux to the plasma sheath. When the amplitude of the high frequency is larger than the amplitude of the intermediate frequency, i.e., case 2, the time-averaged sheath is on the order of 1 cm. For case 3, the time-averaged sheath width is roughly 2 cm. These large sheaths suggest that typical Ar capacitively coupled plasma reactors with a gap size of 5 cm or less may not be suitable for He discharges. For plasma etching, deposition, and sputtering at low pressures, the mean free path is large, and the thickness of the bulk may not be enough to ignite the plasma via electron-neutral background collisions.
Figure 4: The sheath potential for different plasma simulation cases shown in Fig. 3.
Figure 3: The plasma density between the two electrodes.
The corresponding sheath potentials are presented in Figure 4 with the same legend as in Fig. 3. If we look only at the solid lines, case (1), which is dominated by the potential of the high frequency, has the largest peak-to-peak sheath potential. The ion energy distributions (IED) are shown in Fig. 5. Case (3) has the narrowest distribution, shown as a red solid line. The broadest IED is obtained when the intermediate frequency potential is dominant, as in case (2). The corresponding ion angular distribution function (IADF) is shown in Figure 6. The highest peak of the IADF belongs to case (3) and the lowest one is due to case (2). From previous calculations, increasing the ion flux and the sheath potential, and decreasing the sheath width, allow a higher peak of the ion angular distribution [8]. This matches very well with case (3); please investigate Figure 1 and Fig. 2. Considering a phase shift (\(\theta\)) of \(\pi/2\), a slight increase in the sheath width of case (1) is observed. However, the two sheaths for case (2) and case (3) remain roughly the same. The bulk densities are not sensitive to the phase shift for all cases. The sheath potential for case (3) is the same with and without a phase shift; the red solid and dashed lines are almost identical. Also, for case (1) and case (2), the sheath potentials with and without the phase shift are comparable. Therefore, the IADF and IED for all cases are almost identical.
Also, as shown in Figs. 7 and 8, the electron energy distribution is affected by changing the driven potentials. Increasing the amplitude of the high-frequency component increases the height and the width of the electron energy distribution. On the contrary, intermediate frequencies allow a narrower electron energy distribution. At lower radio frequencies, electrons may be able to enter nanostructures etched in
Figure 5: The ion energy distribution function at the left electrode for different plasma simulation cases shown in Fig. 3.
Figure 6: The ion angular distribution function at the left electrode for different plasma simulation cases shown in Fig. 3.
Figure 7: The electron energy distribution function at the center of the discharge for different plasma simulation cases shown in Fig. 3.
substrates to neutralize positive charges on the etched trench surfaces [7]. The electron energy distribution at the center of the discharge is not affected by the phase shift. Only for cases (1) and (2) at the electrode, the height and the width of the distribution increased by adding a phase shift of \(\pi\)/2.
Also, to reveal possible resonances between the plasma bulk and the sheath, the current is shown in Fig. 9. For case (2) and case (3), the phase shift has no effect on the passing current. But for case (1), the amplitude of the current increases with increasing phase shift. When there is no nonlinear interaction between the sheath and bulk dynamics, the Fourier analysis of the current should only display two components in the frequency domain, i.e., 1 MHz and 60 MHz. As can be seen in Fig. 10, other components are generated in the plasma. The amplitudes of these components are a function of the driven potentials and phase shifts.
The accumulated power is depicted in Fig. 11. The accumulated power increases by increasing the amplitude of the high frequency. It is not sensitive to the phase shift when the discharge is dominantly driven by high or intermediate frequency. However, when both amplitudes are equal, the phase shift has an effect. Plasma series resonance is responsible for the
Figure 8: The electron energy distribution function at the left electrode for different plasma simulation cases shown in Fig. 3.
Figure 11: Accumulated power as a function of time for different plasma simulation cases shown in Fig. 3.
Figure 10: The Fourier component of the current passing through the discharge for different plasma simulation cases shown in Fig. 3.
Figure 9: The current passing through the discharge for different plasma simulation cases shown in Fig. 3.
generation of new harmonics which affect the dissipated power [15, 16].
## 4 Summary
The discharge dynamics have been found to be controlled via tailoring the driven potential. For He discharge, the time-averaged sheaths are large, and typical Ar discharge RF-CCP reactors may not be appropriate for He discharge. Increasing the amplitude of the high radio frequency has been found to increase the bulk density and the sheath potential. Also, it has been found to accelerate ions in the plasma sheath with energies around the time-averaged values. On the contrary, increasing the amplitude of the intermediate radio frequency has been found to provide a wider sheath with a wider ion energy distribution and a narrower ion angular distribution. The height and the width of the electron energy distribution function are a function of the amplitude of the driven potentials. The plasma series resonance has been found to generate new harmonics in the discharge current and to enhance the power dissipated to generate the plasma.
## 5 Acknowledgment
The authors thank T. Mussenbrock (Ruhr-University Bochum) for valuable discussions and acknowledge the use of his YAPIC code. This project was supported financially by the Academy of Scientific Research and Technology (ASRT), Egypt, Grant No. 6742. ASRT is the second affiliation of this research.
|
2301.13432 | Manipulation of polarization topology using a Fabry-Pérot fiber cavity
with a higher-order mode optical nanofiber | Optical nanofiber cavity research has mainly focused on the fundamental mode.
Here, a Fabry-P\'erot fiber cavity with an optical nanofiber supporting the
higher-order modes, TE01, TM01, HE21o, and HE21e, is demonstrated. Using cavity
spectroscopy, with mode imaging and analysis, we observe cavity resonances that
exhibit complex, inhomogeneous states of polarization with topological features
containing Stokes singularities such as C-points, Poincar\'e vortices, and
L-lines. In situ tuning of the intracavity birefringence enables the desired
profile and polarization of the cavity mode to be obtained. These findings open
new research possibilities for cold atom manipulation and multimode cavity
quantum electrodynamics using the evanescent fields of higher-order mode
optical nanofibers. | Maki Maeda, Jameesh Keloth, Síle Nic Chormaic | 2023-01-31T05:57:48Z | http://arxiv.org/abs/2301.13432v1 | Manipulation of polarization topology using a Fabry-Perot fiber cavity with a higher-order mode optical nanofiber
###### Abstract
Optical nanofiber cavity research has mainly focused on the fundamental mode. Here, a Fabry-Perot fiber cavity with an optical nanofiber supporting the higher-order modes, TE\({}_{01}\), TM\({}_{01}\), HE\({}_{21}^{o}\), and HE\({}_{21}^{e}\), is demonstrated. Using cavity spectroscopy, with mode imaging and analysis, we observe cavity resonances that exhibit complex, inhomogeneous states of polarization with topological features containing Stokes singularities such as C-points, Poincare vortices, and L-lines. _In situ_ tuning of the intracavity birefringence enables the desired profile and polarization of the cavity mode to be obtained. These findings open new research possibilities for cold atom manipulation and multimode cavity quantum electrodynamics using the evanescent fields of higher-order mode optical nanofibers.
## 1 Introduction
Novel phenomena that can be revealed in non-paraxial light, such as transverse spin and spin-orbit coupling, have led to increasing interest in the tightly confined light observed in nano-optical devices [1]. Optical nanofibers (ONFs), where the waist is subwavelength in size, are useful in this context because they provide very tight radial confinement of the electric field and facilitate diffraction-free propagation over several centimeters [2]. Most ONF research focuses on single-mode ONFs (SM-ONFs) that only support the fundamental mode, HE\({}_{11}\). In contrast, higher-order mode ONFs (HOM-ONFs), fabricated from a few-mode optical fiber, can guide HOMs, such as TE\({}_{01}\), TM\({}_{01}\), HE\({}_{21}^{e}\), and HE\({}_{21}^{o}\)[3]. In the weakly guided regime, which is generally used to describe light propagation in standard optical fiber, this group of modes can be viewed to form the linearly polarized mode, LP\({}_{11}\). To date, there has been a lot more attention paid to HOM-ONFs in theoretical work [4, 5, 6, 7, 8, 9, 10] than experimental work due to the difficulty in precisely controlling the fiber waist size and obtaining selective mode excitation at the waist [3, 11, 12].
In principle, there are many interesting phenomena which can be explored with a HOM-ONF. For example, it has been proposed that the relationship between spin angular momentum (SAM) and orbital angular momentum (OAM) can be studied [5, 10, 13, 14]. Additionally, it was proposed that a HOM-ONF could be used to trap and manipulate cold atoms [4, 15, 16]. Fabrication of an ONF that supports the HOMs was achieved [3, 17, 18] and subsequently shown to more efficiently manipulate dielectric microbeads in the evanescent field than SM-ONFs [19, 20]. Other experimental work has shown that when cold atoms also interact with HOMs, detected signals are stronger than when one uses a SM-ONF only [21].
Introducing a cavity system to the ONF could further increase light-matter interactions due to cavity quantum electrodynamics (cQED) effects [22, 23, 24]. To date, numerous types of SM-ONF-based cavities have been proposed [25, 26, 27, 28, 29, 30] and the interactions of their resonance modes with various quantum emitters have been studied [31, 32, 33]. Strong light-atom coupling using SM-ONF-based Fabry-Perot and ring resonators has already been achieved [34, 35]. Superstrong coupling of cold atoms and multiple longitudinal modes of a long fiber-ring resonator consisting of a SM-ONF section was demonstrated [36]. Utilizing multiple degenerate higher-order transverse modes in free-space has shown to exhibit strong coupling [37, 38], further illustrating the importance of realizing a HOM-ONF-based cavity system at this point. The advantages are
not only for enhanced interactions via cQED effects, but also for a better overall understanding of the behavior of the modes in such a cavity.
Studying the behavior of the HOM-ONF cavity spectrum and the cavity mode profiles gives additional insight into the nature of the HOMs themselves, as well as how they interfere with each other and interact with the external environment. The generation of TE\({}_{01}\) and TM\({}_{01}\) modes in a laser cavity consisting of a microfiber directional coupler-based mode converter was demonstrated previously [39]. However, earlier attempts to realize a passive HOM optical microfiber cavity did not yield any resonant peaks in the cavity spectrum apart from the fundamental modes; in other words, the typical donut- or lobe-shaped intensity profiles associated with HOMs were not observed [40], primarily due to challenges when engineering the taper profile to minimize losses at the taper transitions.
The inhomogeneous polarization structure of HOMs needs to be taken into account when studying a fiber cavity system with a HOM-ONF. In recent years, complex polarization distributions and the generation of polarization singularities have been investigated using various methods, giving rise to the relatively new field of singular optics [41]. Polarization singularities are a subset of Stokes singularities, _i.e._, phase singularity points in Stokes phases [42, 43]. In fact, higher-order fiber eigenmodes are vector optical fields with a polarization singularity called a V-point, where the state of polarization (SOP), _i.e._, how the polarization is distributed in the cross-section of a given mode, is undefined [41]. Other types of Stokes singularities can be formed in elliptical optical fields, such as the polarization singularity of C-points, where the polarization orientation is undefined [41, 42], and Poincare vortices, where the polarization handedness is undefined [43, 44, 45]. Moreover, points of linear polarization can form continuous lines, which are classified as L-lines [41].
The generation of all Stokes singularities within a single beam has been demonstrated using a free-space interferometer [43, 46]. Modal interference in a birefringent crystal can facilitate the creation of polarization singularities [47, 48]. As a result, the SOP can significantly vary along the propagation length, with C-points and L-lines propagating as C-lines, _i.e._, continuous lines of circular polarization, and L-surfaces, _i.e._, surfaces of linear polarization, respectively [47, 48, 49]. Moreover, polarization singularities can appear, move or disappear from a given cross-sectional region with a smooth and continuous change of birefringence [50]. Birefringent media were used to create laser cavity modes containing a polarization singularity [51, 52]. These experiments were limited to the generation of low-order V-points due to a lack of control in the amplitude, phase, and SOP, all of which would be required to create other types of polarization singularities [41]. A few-mode optical fiber cavity has the potential to generate complex laser modes by its highly variable degree of birefringence.
Interference and birefringence are generally inseparable properties in fibers. The modal interference pattern in a fiber changes continually with a periodicity of \(2\pi\) when the relative phase between modes is changed from \(0\) to \(2\pi\) as the eigenmodes propagate along the fiber [53]. This effect was used in a few-mode optical fiber to generate ellipse fields containing a C-point [54, 55]. Due to the increasing complexities of modal interference in few-mode fibers, filtering for the desired set of HOMs, and selectively exciting them to generate and manipulate polarization singularities, are necessary. Realizing a fiber cavity containing an ONF should enable both spatial and frequency filtering for selective excitation of HOMs, as well as enhancement of the resonant mode coupling effect [56, 57].
In this paper, we experimentally demonstrate a HOM-ONF-based Fabry-Perot fiber cavity. The transverse polarization topology of any given resonant mode is determined by selecting modes from the cavity spectra and analyzing the images of the transmitted mode profile. We also demonstrate _in situ_ intracavity manipulation of the modal birefringence to change the amplitude, frequency position, and the SOP of the modes. This work is a significant step towards gaining full control of the evanescent field at the HOM-ONF waist and extends the range of applications
for which such nanodevices could be used.
## 2 Methods
### Experiments
For the HOMs described in Section 1 to propagate throughout the cavity with a HOM-ONF, the nanofiber must be low loss for the entire LP\({}_{11}\) set of modes. Tapered fibers were drawn from SM1250 (9/80) fiber (Fibercore) using an oxy-hydrogen flame pulling rig. The untapered fiber supports the LP\({}_{01}\), LP\({}_{11}\), LP\({}_{21}\), and LP\({}_{02}\) modes at a wavelength, \(\lambda\) = 776 nm. The modes supported by the tapered fiber depend on the tapering profile and the waist diameter. We used two different tapered fibers with waist diameters of (i) \(\sim\) 450 nm for SM behavior (HE\({}_{11}^{o}\) and HE\({}_{11}^{e}\)) and (ii) \(\sim\) 840 nm for the HOM-ONF, which supports HE\({}_{11}^{o}\), HE\({}_{11}^{e}\), TE\({}_{01}\), TM\({}_{01}\), HE\({}_{21}^{o}\), and HE\({}_{21}^{e}\). The shape of the tapered fibers was chosen to be trilinear, see Fig. 1(a), with angles of \(\Omega_{1}\) = 2 mrad, \(\Omega_{2}\) = 0.5 mrad and \(\Omega_{3}\) = 1 mrad in order to be adiabatic for the LP\({}_{11}\) and LP\({}_{01}\) modes. Fiber transmission following the tapering process was >95% for the fundamental mode.
A sketch of the experimental setup is given in Fig. 1(b). The cavity was fabricated by splicing each pigtail of the tapered fiber to a commercial fiber Bragg grating (FBG) mirror (Omega Optical). The two FBG mirrors consisted of stacked dielectric mirrors coated on the end faces of fiber patchcords (SM1250 (9/80), Fibercore) and had a reflectivity of 97% at \(\lambda\) = 776 nm. Both mirrors had almost the same reflectivity over all input polarization angles (< 1% variation). The cavity also contained an in-line polarization controller (IPC, see Fig.1(b)) to manipulate the birefringence inside the cavity. Moving the paddles of the IPC induced stress and strain in the fiber, thereby changing the effective cavity length. A typical cavity length was \(\sim\) 2 m, which was physically measured and estimated from the cavity free-spectral range (FSR).
Figure 1: (a) Sketch of tapered optical fiber with trilinear shape, d\({}_{waist}\): waist diameter. (b) Schematic of experimental setup. L: lens, HWP: half-wave plate, PBS: polarizing beam splitter, M: mirror, M\({}_{C}\): cavity mirror, IPC: in-line polarization controller, BS: beam splitter, QWP: quarter-wave plate, which was inserted to calculate S\({}_{3}\), LP: linear polarizer, CCD: camera, MMF: multimode fiber, PD: photodiode.
A linearly polarized Gaussian beam from a laser at \(\lambda=776\) nm (Toptica DL100 pro) was launched into the fiber cavity. The laser frequency was either scanned or locked to a mode of interest using a Pound-Drever-Hall locking module (Toptica Digilock110). The cavity output beam was split into three paths: one for the laser feedback controller to observe the cavity spectra and to lock to specific modes, one for imaging the spatial profile of the modes with a CCD camera, and one for analyzing the transverse SOP of each mode using a removable quarter wave plate (QWP), a rotating linear polarizer, and a CCD camera, see Fig. 1(b). Six intensity profile images were taken in total for each mode. Four images were taken without the QWP and with the linear polarizer angle set to \(0^{\circ}\) (I\({}_{H}\)), \(45^{\circ}\) (I\({}_{D}\)), \(90^{\circ}\) (I\({}_{V}\)), and \(135^{\circ}\) (I\({}_{A}\)), and two images were taken by inserting the QWP set to \(90^{\circ}\) while the polarizer was set to \(45^{\circ}\) (I\({}_{R}\)) and \(135^{\circ}\) (I\({}_{L}\)). The SOPs were determined by analyzing the six profile images using Stokes polarimetry. Furthermore, the Stokes phase and Stokes index were determined [41], see Section 2.3.
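The Stokes polarimetry step can be summarized by the standard six-image relations. The sketch below assumes the six intensity images are registered, background-subtracted 2D arrays; the normalization by the total intensity and the small regularization constant are illustrative choices, not details taken from the experiment.

```python
import numpy as np

def stokes_from_images(i_h, i_d, i_v, i_a, i_r, i_l, eps=1e-12):
    """Pixel-wise Stokes parameters from six polarimetry images.

    i_h, i_v : intensities behind the polarizer at 0 and 90 degrees
    i_d, i_a : intensities at 45 and 135 degrees
    i_r, i_l : intensities of right/left circular components (QWP inserted)
    Returns S0 and the normalized S1, S2, S3 maps.
    """
    s0 = i_h + i_v
    s1 = (i_h - i_v) / (s0 + eps)
    s2 = (i_d - i_a) / (s0 + eps)
    s3 = (i_r - i_l) / (s0 + eps)
    return s0, s1, s2, s3
```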
### Simulations
Each mode experiences arbitrary birefringence as it propagates along the fiber. The total field in the fiber at any point is the sum of the propagating modes with a corresponding phase shift. The addition of FBG mirrors to the fiber induces an additional birefringence [56, 57], which can be incorporated in a single birefringence matrix. Note, this model does not include cavity boundary conditions since we only aim to simulate the spatial profiles of the fiber modes. We can calculate an arbitrary fiber field, \(\mathbf{E}\), due to interference and birefringence by taking a summation over different fiber modes, such that
\[\mathbf{E}=\sum_{M=1}^{n}J_{M}A_{M}\mathbf{E}_{M}e^{i\phi_{M}}, \tag{1}\]
where \(n\) is the number of eigenmodes to be interfered, \(\mathbf{E}_{M}\) is the electric field of a fiber eigenmode \(M\in\text{TE}_{0,m}\), \(\text{TM}_{0,m}\), \(\text{HE}_{\ell,m}\) and \(\text{EH}_{\ell,m}\), with \(\ell\in\mathbb{Z}^{+}\) being the azimuthal mode order, which defines the helical phase front and the associated phase gradient in the fiber transverse plane. \(m\in\mathbb{Z}^{+}\) is the radial mode order, which indicates the \(m^{th}\) solution of the corresponding eigenvalue equation [5]. \(A_{M}\) is the amplitude, \(\phi_{M}\) is the phase between modes, and \(J_{M}\) represents the arbitrary birefringence Jones matrix of each eigenmode \(\mathbf{E}_{M}\), such that
\[J_{M}=e^{i\eta_{M}/2}\begin{pmatrix}cos^{2}\theta_{M}+e^{i\eta_{M}}sin^{2} \theta_{M}&(1-e^{i\eta_{M}})cos\theta_{M}sin\theta_{M}\\ (1-e^{i\eta_{M}})cos\theta_{M}sin\theta_{M}&sin^{2}\theta_{M}+e^{i\eta_{M}} cos^{2}\theta_{M}\end{pmatrix}, \tag{2}\]
where \(\eta_{M}\) is the relative phase retardation induced between the fast axis and the slow axis, and \(\theta_{M}\) is the orientation of the fast axis with respect to the horizontal-axis, _i.e._, perpendicular to mode propagation.
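A minimal numerical sketch of Eqs. (1)-(2) is given below, assuming each eigenmode is available as a sampled transverse field with (Ex, Ey) components on a grid; the function names and the way the modes are stored are illustrative and not part of the original simulation code.

```python
import numpy as np

def jones_birefringence(eta, theta):
    """Jones matrix of Eq. (2): retardation eta, fast-axis orientation theta."""
    c, s = np.cos(theta), np.sin(theta)
    e = np.exp(1j * eta)
    j = np.array([[c**2 + e * s**2, (1 - e) * c * s],
                  [(1 - e) * c * s, s**2 + e * c**2]])
    return np.exp(1j * eta / 2) * j

def superpose_modes(fields, amps, phases, etas, thetas):
    """Total transverse field of Eq. (1).

    fields : list of eigenmode fields, each of shape (2, Ny, Nx)
             holding the (Ex, Ey) components on a transverse grid
    amps, phases, etas, thetas : per-mode scalar parameters
    """
    total = np.zeros_like(fields[0], dtype=complex)
    for field, a, phi, eta, theta in zip(fields, amps, phases, etas, thetas):
        j = jones_birefringence(eta, theta)
        # apply the 2x2 Jones matrix to the (Ex, Ey) components pixel by pixel
        total += a * np.exp(1j * phi) * np.tensordot(j, field, axes=(1, 0))
    return total
```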
Let us now consider the system with an ONF supporting \(\text{HE}_{11}^{o}\), \(\text{HE}_{11}^{e}\), \(\text{TE}_{01}\), \(\text{TM}_{01}\), \(\text{HE}_{21}^{o}\) and \(\text{HE}_{21}^{e}\), so that the number of modes that can be interfered is \(n\leq 6\). The cross-sectional profiles and SOPs of \(\text{TE}_{01}\) and \(\text{HE}_{21}^{e}\) are shown in Fig. 2(a, b), respectively. The \(\text{TM}_{01}\) and \(\text{HE}_{21}^{o}\) modes are not shown here, but their vector fields are orthogonal to those of \(\text{TE}_{01}\) and \(\text{HE}_{21}^{e}\) at every point, respectively. These modes have donut-shaped mode profiles with linearly polarized vector fields at any point in the mode cross-section. As an example of possible fiber modes using Eq. 1, Fig. 2(c) illustrates in-phase interference of the \(\text{TE}_{01}\) and \(\text{HE}_{21}^{e}\) modes with equal amplitudes. The resulting mode has a lobe-shaped intensity pattern with scalar fields. Fig. 2(d) is an example of a mode resulting from the interference of the circularly polarized \(\text{HE}_{11}\) and an out-of-phase (a \(\pi\)/2 phase difference) \(\text{TE}_{01}\) and \(\text{TM}_{01}\) with equal amplitudes. The SOPs, which are overlaid on the intensity profile images, are marked as red and blue ellipses, corresponding to right- and left-handed orientations, respectively. This mode is the so-called lemon [55], which contains not only linear polarization but also elliptical and circular polarization components in one mode.
Figure 2: Simulations of (a) TE\({}_{01}\), (b) HE\({}_{21}^{e}\), (c) TE\({}_{01}\) + HE\({}_{21}^{e}\) and (d) lemon. The red and blue SOPs indicate right-handed and left-handed ellipticities, respectively. The scale bars show the normalized intensity (from 0 to 1) and the Stokes phase (from 0 to 2\(\pi\)). Stokes singularity points of \(\sigma_{12}\), \(\sigma_{23}\), and \(\sigma_{31}\) are indicated as pink, orange, and blue dots, respectively. An L-line is indicated in green.
When using Eq. 1 to simulate mode profiles, a number of eigenmodes with similar intensity patterns and SOPs to an experimentally observed cavity mode were selected as the initial conditions. Next, the variables \(A_{M}\), \(\phi_{M}\), \(\eta_{M}\), and \(\theta_{M}\) were tuned to match the experimentally observed cavity mode intensities, SOPs, and Stokes phases. Polarization topological defects in the simulated modes were then identified, using the method described in the following Section 2.3.
### Analysis
The polarization gradient was calculated in order to identify Stokes singularities in the cross-section of the mode. The gradient map is known as the Stokes phase, \(\phi_{ij}\), which is given by [42, 45]
\[\phi_{ij}=Arg(S_{i}+iS_{j}), \tag{3}\]
where \(S_{i}\) and \(S_{j}\) are Stokes parameters with \(\{i,j\}\in\{1,2,3\}\) in order, and \(i\neq j\). The phase uncertainty points, _i.e._, Stokes singularities, were identified by obtaining the Stokes indices, \(\sigma_{ij}\), which are defined as [42, 45]
\[\sigma_{ij}=\frac{1}{2\pi}\oint_{c}\phi_{ij}\cdot dc, \tag{4}\]
where \(\oint_{c}\phi_{ij}\cdot dc\) = \(\Delta\phi_{ij}\) is the counterclockwise azimuthal change of the Stokes phase around the Stokes singularity. Singularities of \(\sigma_{12}\) are known as V-points and C-points, in vector and ellipse fields, respectively [42]. Singularities of \(\sigma_{23}\) and \(\sigma_{31}\) are known as Poincare vortices [43, 44, 45]. L-lines are located where \(\phi_{23}\) = \(\{0,\pi,2\pi\}\). Table 1 is a summary of the classification of the Stokes singularity types in terms of the Stokes phases and singularity indices with the corresponding polarizations in the vector and ellipse fields [43, 45, 46, 58].
The Stokes singularity points and L-lines were found from the Stokes phases, then superimposed and marked on the mode profiles. As examples, from Figs. 2(a, b), the center of the mode profiles for both TE\({}_{01}\) and HE\({}_{21}^{e}\) contain a V-point, with \(\sigma_{12}\) = -2 and +2 (pink dot), respectively. These points were found from their Stokes phases \(\phi_{12}\) (lower panels in Figs. 2(a, b)). In contrast, the lemon mode in Fig. 2(d) has a closed loop representing an L-line (green) and all three types of Stokes singularities: a C-point with \(\sigma_{12}\) = -1 (pink dot), Poincare vortices with \(\sigma_{23}\) = -1 and +1 (orange dots), and \(\sigma_{31}\) = -1 and +1 (blue dots) were found from \(\phi_{12}\), \(\phi_{23}\), and \(\phi_{31}\), respectively. The lobe-shaped scalar mode in Fig. 2(c) does not have a \(2\pi\) gradient in any associated Stoke phases, since topological defects can only exist in non-scalar fields [41].
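Equations (3) and (4) translate directly into a simple numerical procedure: compute the Stokes phase map and evaluate the winding of that phase around each candidate pixel. The sketch below is an illustrative implementation on a pixel grid; the 8-neighbor loop and the sign convention (which depends on the image orientation) are choices made here, not details from the original analysis.

```python
import numpy as np

def stokes_phase(s_i, s_j):
    """Stokes phase of Eq. (3): phi_ij = Arg(S_i + i S_j), mapped to [0, 2*pi)."""
    return np.angle(s_i + 1j * s_j) % (2 * np.pi)

def stokes_index(phi, row, col):
    """Stokes index of Eq. (4) from the winding of phi_ij around pixel (row, col).

    The phase is sampled on a closed loop over the 8 surrounding pixels;
    the accumulated (wrapped) phase change divided by 2*pi gives sigma_ij.
    """
    loop = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    values = np.array([phi[row + dr, col + dc] for dr, dc in loop])
    # wrap each step to (-pi, pi] before summing to get the net winding
    winding = np.sum(np.angle(np.exp(1j * np.diff(values))))
    return int(np.round(winding / (2 * np.pi)))
```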
| Stokes singularity | Stokes phase | Stokes index / phase values | Polarization |
| --- | --- | --- | --- |
| V-point (v) | \(\phi_{12}\) | \(\sigma_{12}\) | Null |
| C-point (e) | \(\phi_{12}\) | \(\sigma_{12}\) | R/L |
| Poincaré vortex (e) | \(\phi_{23}\) | \(\sigma_{23}\) | H/V |
| Poincaré vortex (e) | \(\phi_{31}\) | \(\sigma_{31}\) | D/A |
| L-line (e) | \(\phi_{23}\) | 0, \(\pi\), \(2\pi\) | Linear |

Table 1: **List of Stokes singularities in vector fields (v) and ellipse fields (e) by the singularity index, \(\sigma_{ij}\), using the Stokes phase, \(\phi_{ij}\), with \(\{i,j\}\in\{1,2,3\}\) in order.**
## 3 Results and discussion
### Cavity with a single-mode optical nanofiber
As an initial experimental test, the spectrum for a HOM cavity containing an ONF of waist diameter \(\sim\) 450 nm was obtained, see Fig. 3(a). This ONF waist can only support the fundamental modes. The IPC paddle angles were set so that two distinct, well-separated modes with minimal spectral overlap were observed. The finesses of Modes 1 and 2 in Fig. 3(a) were 12 and 15, respectively. The laser was locked to each of these two cavity modes consecutively and the mode profiles were observed at the output end face of the fiber cavity. The corresponding mode intensity profiles, SOPs, and Stokes phases are shown in Figs. 3(b)(i, ii). The intensity profiles for both Modes 1 and 2 were slightly skewed Gaussian shapes. The HE\({}_{11}\) eigenmode intensity shape is Gaussian, so the slight deviation from the expected shape may be attributed to aberrations in the optical beam path. In terms of polarization distribution, the Stokes phases of Modes 1 and 2 were uniform; in other words, their SOPs were scalar fields, regardless of the IPC paddle angles chosen, as expected for the HE\({}_{11}\) mode.
Although the pretapered fiber supported the full set of eigenmodes in LP\({}_{11}\), LP\({}_{02}\), and LP\({}_{21}\), when the ONF with a diameter \(\sim\) 450 nm was inserted between the two sets of mirrors, only one or two modes with quasi-Gaussian profiles were observed, no matter which IPC paddle angles were chosen. The HOMs were filtered out due to the tapered fiber waist being SM, analogous to an intracavity pinhole spatial filter. Mode filtering as a function of the ONF waist diameter was observed experimentally [17]. However, here, we could additionally observe the mode filtering effect on the cavity spectrum and SOP of each mode.
In an ideal SM-ONF cavity with no birefringence, there are two degenerate orthogonal modes. However, due to random birefringence of the fiber and the cavity mirrors, the two modes become non-degenerate, _i.e._, separated in frequency, leading to coupling between the modes [59]. Mode coupling of orthogonal modes can occur in a birefringent medium and this effect can increase in a cavity configuration [60]. Mode coupling in an ONF cavity due to asymmetrical mirrors has been discussed previously [56] and experimental evidence of mode coupling due to intrinsic birefringence in a SM-ONF cavity has already been reported [57]. In our experiments, non-orthogonal combinations of SOPs were observed, as seen in Figs. 3(b)(i, ii). Mode 1 was horizontally polarized (red/blue lines in Fig. 3(b)(i)), while Mode 2 was left elliptically polarized (blue ellipse in Fig. 3(b)(ii)). By adjusting the IPC angles, it was possible to change the phase relationship and coupling between the HE\({}_{11}^{o}\) and HE\({}_{11}^{e}\) modes, and shift between orthogonal and non-orthogonal combinations of SOPs.
### Cavity with a higher-order mode optical nanofiber
Next, the spectrum for a HOM cavity containing an ONF of waist diameter \(\sim\) 840 nm was obtained, see Fig. 4(a). This ONF can support the HE\({}_{11}\), TE\({}_{01}\), TM\({}_{01}\), HE\({}_{21}^{o}\), and HE\({}_{21}^{e}\) modes. The IPC paddle angles were set to obtain the maximum number of well-resolved modes in a single FSR, see Fig. 4(a). One can clearly see five distinct peaks indicating that the HOM-ONF does not degrade the modes in the cavity and the finesses of the cavity modes are high enough to resolve them. The finesses of Modes 1 to 5 were 12, 16, 13, 22, and 13, respectively. The mode finesse values of the cavity with a HOM-ONF were in the same range as those for the cavity with a SM-ONF (Fig. 3(a)), implying that the HOM-ONF was adiabatic for the LP\({}_{11}\) group of modes. The laser was locked to each of the cavity modes consecutively and the mode profiles were observed at the output of the fiber cavity. The corresponding mode intensity profiles, SOPs, and Stokes phases are shown in Figs. 4(b)(i-iv). In the spectrum shown in Fig. 4(a), there were five distinctive modes, but locking to Mode 3 was not possible because of its close proximity to the dominant Mode 4.
Two flat-top intensity profiles were observed in Modes 1 and 4, Figs. 4(b)(i, iii) respectively.
Figure 3: (a) A typical spectrum for a HOM cavity with a SM-ONF as the laser is scanned over 150 MHz. The spectrum over a single FSR is indicated by the red box. (b) Mode intensity profiles showing the SOPs (top) and corresponding Stokes phases (bottom) for (i) Mode 1 and (ii) Mode 2. The red and blue SOPs indicate right-handed and left-handed ellipticities, respectively. The scale bars show the normalized intensity (from 0 to 1) and the Stokes phase (from 0 to 2\(\pi\)).
Figure 4: (a) A typical spectrum for a cavity with a HOM-ONF as the laser is scanned over 150 MHz. The spectrum over a single FSR is indicated by the red box. (b) Mode intensity profiles showing the SOP (top) and the corresponding Stokes phases (bottom) for (i) Mode 1, (ii) Mode 2, (iii) Mode 4, and (iv) Mode 5. The red and blue SOPs indicate right-handed and left-handed ellipticities, respectively. The scale bars show the normalized intensity (from 0 to 1) and the Stokes phase (from 0 to 2\(\pi\)). Stokes singularity points of \(\sigma_{12}\), \(\sigma_{23}\), and \(\sigma_{31}\) are indicated as pink, orange, and blue dots, respectively. L-lines are indicated in green. (c) Corresponding simulated results.
The SOPs of these modes are markedly different to those for the Gaussian-type modes in Figs. 3(b)(i, ii), which have simple scalar SOPs. Modes 1 and 4 were inhomogeneously polarized ellipse fields, showing regions of left and right circular polarizations divided by an L-line (Figs. 4(b)(i, iii)). The center of these two modes exhibited diagonal and anti-diagonal polarizations, respectively, _i.e._, the SOPs at the center of the modes were orthogonal to each other. Going towards the edges of the modes, the polarization changes from linear to circular, with opposite handedness either side of the L-lines. Notice also in Fig. 4(a) that Modes 1 and 4 are not well frequency separated from neighboring modes. This suggests that the mode profiles and SOPs of these modes were not only affected by birefringence and degenerate modal interference, but also some non-degenerate modal interference with neighboring cavity modes [60]. Additionally, for Mode 4, we identified two C-points (\(\sigma_{12}\) = -1), indicated by the pink dots in Fig. 4(b)(iii), where the value of \(\phi_{12}\) changed by \(2\pi\) (see Table 1). Interference of HE\({}_{11}\) with modes from the LP\({}_{11}\) group can generate C-points in a few-mode fiber [55], see Fig. 2(d).
We performed basic simulations to determine if combinations of HE\({}_{11}\) and some mode(s) in the LP\({}_{11}\) family could generate similar mode profiles and SOP structures as those in Figs. 4(b)(i, iii). The simulated results are shown in Figs. 4(c)(i, iii). The HE\({}_{11}\) and TM\({}_{01}\) modes were selected as possible contributors and their amplitudes, phase, and birefringence fitting parameters were tuned to match the experimental results. Modes 1 and 4, see Figs. 4(b)(i, iii), could have been formed from different mode combinations rather than our assumed HE\({}_{11}\) and TM\({}_{01}\); however, these modes were very likely formed by interference between HE\({}_{11}\) and some mode(s) of the LP\({}_{11}\) group, resulting in their inhomogeneous SOPs and flat-top shapes.
We also observed two distorted lobe-shaped modes, Modes 2 and 5, see Figs. 4(b)(ii, iv). The lobe-shaped pattern also arises from modal interference between modes in the LP\({}_{11}\) family (as an example, see Fig. 2(c)). With reference to Table 1, Mode 2, Fig. 4(b)(ii), showed all three types of Stokes singularities, indicated by pink dots for C-points (\(\sigma_{12}\) = +1) and orange/blue dots for Poincare vortices (\(\sigma_{23}\) = -1 /\(\sigma_{31}\) = +1), as presented in \(\phi_{12}\), \(\phi_{23}\), and \(\phi_{31}\), respectively. A single mode containing all Stokes singularities has been demonstrated using free-space interferometers [46, 43]; here, we generated them within a single mode using a fiber cavity system. Mode 5, Fig. 4(b)(iv), also had two C-points (\(\sigma_{12}\) = +1) and a Poincare vortex (\(\sigma_{23}\) = +1), as seen in \(\phi_{12}\), and \(\phi_{23}\), respectively. Fig. 4(a) shows that Modes 2 and 5 are not well frequency separated from Modes 1 and 4, respectively. Therefore, there is a likely contribution from the HE\({}_{11}\) mode resulting in distortion of the lobe shape.
To simulate Mode 2 in Fig. 4(b)(ii), we combined TE\({}_{01}\), HE\({}_{21}^{e}\), and HE\({}_{11}\), and to simulate Mode 5 in Fig. 4(b)(iv), we used TM\({}_{01}\), HE\({}_{21}^{e}\), and HE\({}_{11}\). The amplitude of each mode, phase shift, and birefringence parameters were adjusted to achieve a close fit. The simulated results are shown in Figs. 4(c)(ii, iv). These plots are not exact replications of the experimental results since the parameter space is large and the exact initial conditions are not known; nevertheless, the match is reasonably close.
Interestingly, many of the cavity modes obtained in different sets of spectra, which were generated using different IPC angles, exhibited Stokes singularities. Polarization singularities are known to propagate through a birefringent medium as C-lines and L-surfaces and their evolution is affected by the homogeneity of the birefringence along the propagation path [47, 48, 49]. This phenomenon is due to the conservation of the topological charge [49, 58, 61], and the Stokes index value, \(\sigma_{ij}\), remains constant [58]. However, our cavity is an inhomogeneous birefringent medium as it contains a number of different birefringent elements such as the FBG mirrors and the IPC, as such, the degree of birefringence varies along the propagation direction. Therefore, the presence of Stokes singularities in the imaged field at the cavity output does not necessarily guarantee the existence of such topological defects in the ONF region. Nonetheless, singularity points can enter, move and exit with a smooth and continuous variation of birefringence [50]. Therefore, the SOP is expected to evolve along the length of the cavity, with singularity points shifting and
making numerous entries and exits in the cross-section profile of the modes. However, since the ONF waist is relatively straight and uniform, the birefringence variation at the waist should be minimal [62] and topological features appearing at the start of the waist should be preserved every \(2\pi\) along the waist.
Theoretically, the HOM-ONF can support a total of six eigenmodes as mentioned earlier. Therefore, one might expect that the spectrum should show six distinct modes. However, we typically observed three to five distinct peaks in a single FSR depending on the IPC paddle angles. This could be explained by the lack of sufficient finesse to resolve all modes, some of which are closely overlapped [60]. However, it may be feasible to increase the mode finesses by increasing the mirror reflectivity and using an ONF with lower transmission loss than the one used (the estimated loss of Mode 4, the highest finesse in Fig. 4(a), was \(\sim 20\%\)). Nonetheless, the finesse values of our \(\sim 2\) m long cavity with a HOM-ONF should be sufficient for cQED experiments with narrow line-width emitters such as cold atoms.
### In situ higher-order cavity mode tuning
A key feature of this setup is the ability to tune the spectrum and SOP to create the desired mode in the cavity. We aimed to observe modes with donut-shaped intensity patterns and SOPs similar to the fiber eigenmodes TE\({}_{01}\) (Fig. 2(a)), TM\({}_{01}\), HE\({}_{21}^{o}\), and HE\({}_{21}^{e}\) (Fig. 2(b)). To achieve this, the laser was locked to a well-resolved lobe-shaped mode. The paddle angles of the IPC were then adjusted, and the mode shape was monitored with a CCD camera until a donut mode profile was observed. Unlocking and scanning the laser revealed a new spectrum with each mode containing
Figure 5: (a) Mode intensity profiles for quasi-donut-shaped cavity modes from the cavity containing a HOM-ONF with their SOPs (top) and Stokes phases (bottom) similar to the fiber eigenmodes of (i) HE\({}_{21}^{e}\), (ii) HE\({}_{21}^{o}\), (iii) TE\({}_{01}\), and (iv) TM\({}_{01}\). The red and blue SOPs indicate right-handed and left-handed ellipticities, respectively. Scale bars show intensity (from 0 to 1) and Stokes phase (from 0 to \(2\pi\)). Stokes singularities of \(\sigma_{12}\), \(\sigma_{23}\), and \(\sigma_{31}\) are indicated as pink, orange, and blue dots, respectively. L-lines are illustrated as green lines. (b) Corresponding simulated results.
a new profile. The IPC was adjusted again to maximize another mode and the laser was locked to this new mode. The IPC paddle angles were tuned to once more convert the mode profile to a donut shape. This procedure was repeated for four different modes, see Figs. 5(a)(i-iv), and these modes look similar to the true fiber eigenmodes HE\({}_{21}^{e}\) (Fig. 2(b)), HE\({}_{21}^{o}\), TE\({}_{01}\) (Fig. 2(a)), and TM\({}_{01}\), respectively. There was a slight deformation from a perfect donut shape and their SOPs were not vector fields, but rather ellipse fields with alternating regions of opposite handedness. While the donut eigenmodes possessed a V-point at the center, as indicated by pink dots in Figs. 2(a, b), the observed quasi-donut modes in Figs. 5(a)(i-iv) had some nominal intensity at the center. These modes had two C-points of \(\sigma_{12}\) = -1 or +1 near the center (see pink dots in Figs. 5(a)(i-iv)), as opposed to a single point of \(\sigma_{12}\) = -2 or +2 in the true eigenmodes (Figs. 2(a, b)). Indeed, perturbation of vector field polarization singularities can occur when scalar linearly polarized beams are interfered [63].
These donut-shaped cavity modes were also simulated, as shown in Figs. 5(b)(i-iv). To obtain a good fit for the experimentally observed intensities, SOPs, and Stokes phases in Figs. 5(a)(i-iv), the simulated modes included a slight deformation of the donut shape by adding some components of the HE\({}_{11}\) mode to modes in the LP\({}_{11}\) group. Moreover, the simulated results show that the Stokes phases are very similar to those obtained experimentally. The number of possible combinations of modal interference with varying birefringence is large and this leads to discrepancies between the experiment and simulation. However, these findings indicate that the experimentally observed quasi-donut modes are likely the result of residual interference between the HE\({}_{11}\) mode and modes in the LP\({}_{11}\) group. Degeneracy of multiple modes may be avoided by increasing the cavity mode finesses so that each mode can be well separated. The system demonstrated here shows that, even in a complex system, the HOMs and their SOPs can be controlled to create exotic topological states.
## 4 Conclusion
We have experimentally demonstrated a Fabry-Perot fiber cavity with a HOM-ONF and performed cavity spectroscopy. The cavity mode profiles and transverse polarization topology were also determined by imaging and analyzing the individual cavity modes at the output. These modes had inhomogeneous polarization distributions with a number of Stokes singularities. We also simulated fiber modes which closely match those observed at the output of the cavity. Moreover, _in situ_ intracavity manipulation of the modal birefringence and interference to select a specific mode of interest was demonstrated. This indicates that the evanescent field of a HOM-ONF could be tuned by adjusting the IPC paddle angles.
These findings are a step toward investigating the interactions between SAM and OAM of a HOM-ONF. Research into the interference of HOMs at the waist of an ONF is an exciting opportunity to uncover the nature of light-matter interactions in tightly confining geometries with topological singularities. Additionally, the realization of a (de)multiplexing system using degenerate HOMs in an ONF-based cavity may be possible by improving the tunability of the modal birefringence and interference. Such a system is attractive for future quantum information platforms requiring efficient and secure storage.
The interference of higher-order cavity modes with fixed ratios in the evanescent field of an ONF may also be used to trap and manipulate cold atoms. Adjusting the overlap and SOP of the HOMs should result in movement of the trapping sites relative to each other, enabling some trap dynamics to be studied [4, 15, 16]. This cavity could be also used with quantum emitters to study multimode cQED effects using degenerate HOMs. The HOM cavity studied here had moderate finesse to enter the cQED experiments for interactions with cold atoms. In free-space optics, strong coupling of multiple transverse HOMs with atoms has been achieved [38], whereas this has not been achieved using an ONF-type cavity. Our work is a significant step towards this realization.
Moreover, the ability of our cavity to generate all three types of Stokes singularities may be useful to realize not only a C-point laser but also an all-Stokes singularity laser using a few-mode fiber. The combinations of fiber modes that we used in the simulations were found via manual trial-and-error estimates to obtain a visual match with the experimentally observed modes. More accurate control could be achieved by using machine learning techniques to fully cover the parameter space of permitted modes in the cavity. This may enable us to determine the correct combination of modes that lead to the observed cavity outputs and facilitate feedback to optimize the input to the system to generate desired modes in the cavity.
## Funding
Okinawa Institute of Science and Technology Graduate University.
## Acknowledgments
The authors acknowledge F. Le Kien, L. Ruks, V. G. Truong, and J. M. Ward for discussions and K. Karlsson for technical assistance.
## Disclosures
The authors declare no conflicts of interest.
## Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2309.13210 | Large-area polycrystalline $α$-MoO3 thin films for IR photonics | In recent years, excitation of surface phonon polaritons (SPhPs) in van der
Waals materials received wide attention from the nanophotonics community.
Alpha-phase Molybdenum trioxide ($\alpha$-MoO3), a naturally occurring biaxial
hyperbolic crystal, emerged as a promising polaritonic material due to its
ability to support SPhPs for three orthogonal directions at different
wavelength bands (range 10-20 $\mu$m). Here, we report on the fabrication and
IR characterization of large-area (over 1 cm$^2$ size) $\alpha$-MoO3
polycrystalline films deposited on fused silica substrates by pulsed laser
deposition. Single alpha-phase MoO3 films exhibiting a polarization-dependent
reflection peak at 1006 cm$^{-1}$ with a resonance Q-factor as high as 53 were
achieved. Reflection can be tuned via changing incident polarization with a
dynamic range of $\Delta$R=0.3 at 45 deg. incidence angle. We also report a
polarization-independent almost perfect absorption condition (R<0.01) at 972
cm$^{-1}$ which is preserved for a broad angle of incidence. The development of
a low-cost polaritonic platform with high-Q resonances in the mid-infrared
(mid-IR) range is crucial for a wide number of functionalities including
sensors, filters, thermal emitters, and label-free biochemical sensing devices.
In this framework our findings appear extremely promising for the further
development of lithography-free, scalable films, for efficient and large-scale
devices operating in the free space, using far-field detection setups. | Maria Cristina Larciprete, Daniele Ceneda, Chiyu Yang, Sina Abedini Dereshgi, Federico Vittorio Lupo, Maria Pia Casaletto, Roberto Macaluso, Mauro Antezza, Zhuomin M. Zhang, Marco Centini, Koray Aydin | 2023-09-22T23:14:49Z | http://arxiv.org/abs/2309.13210v1 | # Large-area polycrystalline \(\alpha\)-MoO3 thin films for IR photonics
###### Abstract
In recent years, excitation of surface phonon polaritons (SPhPs) in van der Waals materials received wide attention from the nanophotonics community. Alpha-phase Molybdenum trioxide (\(\alpha\)-MoO3), a naturally occurring biaxial hyperbolic crystal, emerged as a promising polaritonic material due to its ability to support SPhPs for three orthogonal directions at different wavelength bands (range 10-20 \(\upmu\)m). Here, we report on the fabrication and IR characterization of large-area (over 1 cm\({}^{2}\) size) \(\alpha\)-MoO\({}_{3}\) polycrystalline films deposited on fused silica substrates by pulsed laser deposition. Single \(\alpha\)-phase MoO\({}_{3}\) films exhibiting a polarization-dependent reflection peak at 1006 cm\({}^{-1}\) with a resonance Q-factor as high as 53 were achieved. Reflection can be tuned via changing incident polarization with a dynamic range of \(\Delta\)R=0.3 at 45\({}^{\circ}\) incidence angle. We also report a polarization-independent almost perfect absorption condition (R\(<\)0.01) at 972 cm\({}^{-1}\) which is preserved for a broad angle of incidence. The development of a low-cost polaritonic platform with high-Q resonances in the mid-infrared (mid-IR) range is crucial for a wide number of functionalities including sensors, filters, thermal emitters, and label-free biochemical sensing devices. In this framework our findings appear extremely promising for the further development of lithography-free, scalable films, for efficient and large-scale devices operating in the free space, using far-field detection setups.
Keywords: optical phonons, vdW materials, polarization tuning, Reststrahlen band, hyperbolic materials.
## 1 Introduction
Advances in nanophotonics have enabled the miniaturization of optical components due to the exploitation of surface plasmon polaritons (SPPs) [1] in the visible range that can strongly localize electromagnetic fields to small volumes. Recently, doped semiconductors [2; 3] and graphene [4] have been proposed to extend SPPs to the 2-8 \(\upmu\)m range. Also, nanoantennas have been employed to achieve electric field localization in the mid-IR range for sensing applications.
However, in order to take full advantage of surface-enhanced infrared absorption (SEIRA) techniques, a precise positioning of the analyte is required [5].
Moving toward the 8-20 \(\upmu\)m wavelength range, where vibrational absorption peaks provide relevant information on molecular bonds, the SPP approach is less effective due to the poor field confinement at longer wavelengths [6]. Moreover, for the development of a complete IR photonic platform, miniaturization and integration of optical components with chip-scale platforms using facile fabrication techniques [7, 8] are highly desired. Conventional IR polarizers are nowadays fabricated using state-of-the-art holographic techniques [9] on IR-transparent supports (CaF\({}_{2}\), ZnSe, BaF\({}_{2}\)), with typical transmission losses of about 30%. Furthermore, the surface of such holographic grid polarizers is extremely delicate, and touching is absolutely to be avoided. Similarly, optical components such as polarization rotators have been realized using artificial metasurfaces [10] in the wavelength range up to 10 \(\upmu\)m. Functionality at longer wavelengths is achieved using a combination of two parallel polarizers and by tilting the plates with respect to each other. Given the complexity of these components, their integration with chip-scale platforms can, therefore, be prohibitive, and the challenge of an efficient, integrated, and robust IR photonic platform persists.
Recent promising solutions are based on the exploitation of polar materials [11] including ultra-thin van der Waals (vdW) materials such as MoO\({}_{3}\), MoS\({}_{2}\), Ga\({}_{2}\)O\({}_{3}\), hBN [12, 13]. Besides their strong anisotropy related to optical phonons (ideal for polarization rotation and control), they allow strong field localization by the excitation of surface waves called surface phonon polaritons (SPhPs), achieved through the coupling of the electromagnetic field with lattice vibrations. Several works reported on the great potential of polar materials for mid-IR sensing applications up to the terahertz (THz) regime [14, 15] and for the realization of compact IR photonic devices [16, 17].
Among vdW materials, Molybdenum trioxide (\(\alpha\)-MoO\({}_{3}\)) is attracting a great deal of attention [18] as it supports SPhPs in three different wavelength bands for the three orthogonal directions (range 10-20 \(\upmu\)m), rendering this material a naturally hyperbolic and biaxial material [19, 20]. Increased versatility can be obtained by combining it with other materials. Recent results show that \(\alpha\)-MoO\({}_{3}\) can be combined with vanadium dioxide, (VO\({}_{2}\), a phase change material that undergoes insulator to metal phase transition at a temperature of 68\({}^{\circ}\) C) [21, 22] in order to dynamically tune the polariton resonances. A metamaterial approach has also been proposed based on the random nanostructuring of \(\alpha\)-MoO\({}_{3}\) with subwavelength dielectric elements (i.e., air ellipsoidal inclusions). This scheme could increase design versatility as well as tuning and hybridization of polariton modes [23].
Despite huge potential of this promising material, the development of a novel, highly versatile and compact \(\alpha\)-MoO\({}_{3}\)-based IR photonics platforms is hampered by the lack of availability of high-quality scalable films and/or multilayer stacks. \(\alpha\)-MoO\({}_{3}\) for IR photonics and polaritonics is mostly used in the form of physical vapor deposition (PVD)-grown crystalline flakes. Although flakes allow exciting results in terms of hyperbolic phonon polariton excitation along x- and y-directions, there are several drawbacks that might limit the wide adaption of flakes geometries: the existing alignment techniques for flakes with a few tens of nanometers thickness are challenging; the flakes often have irregular shape preventing a good propagation of SPhPs; the dimensions of the flakes are usually limited to few hundreds of \(\upmu\)m at most, therefore the large area or integrated/multifunctional devices are not practical. The fabrication process for obtaining such flakes is very complex, requiring investigation of strategies to create efficient conditions for films growth such as high temperatures (e.g., 780\({}^{\circ}\)C [24]). Furthermore, a successive mechanical exfoliation process for transferring the desired MoO\({}_{3}\) 2D film onto a substrate of interest is needed [25]. In reference [26] a high confinement of near field signal corresponding to a Q factor of 40 has been reported in \(\alpha\)-MoO\({}_{3}\) covering submicron-width trenches. These flakes are, however, difficult to handle and integrate in a practical device while keeping low fabrication costs. Moreover, they are relatively small for far-field applications since their dimensions often reach the diffraction limit of 10-20 \(\upmu\)m range IR radiation thus requiring expensive and state of the art near-field detecting schemes.
The realization of a single \(\alpha\)-phase, oriented, large-area MoO\({}_{3}\) film is still an open technological challenge. Atomic layer deposition (ALD) has been used to obtain good quality \(\alpha\)-phase MoO\({}_{3}\) films, but only after 500 \({}^{\circ}\)C post-growth annealing [27]. This was necessary because deposition temperatures higher than 200 \({}^{\circ}\)C interfered with the stability of the employed precursor. Furthermore, a long annealing time (\(>\) 1 h) was required when the annealing was performed at temperatures lower than 500 \({}^{\circ}\)C. This means that, according to [28], ALD cannot be used to deposit \(\alpha\)-phase MoO\({}_{3}\) films in a single step, making the integration of the MoO\({}_{3}\) film within a multilayer structure more difficult. ALD is, furthermore, an expensive tool employing hazardous precursor gases, and even higher temperatures are needed for depositing MoO\({}_{3}\) by sublimation [29].
Conventional sputtering techniques were also employed to deposit MoO\({}_{3}\) films at room temperature. This led to multiphase crystalline MoO\({}_{3}\) films only after a post-growth annealing process [28-30]. When the annealing is performed at temperatures greater than 400 \({}^{\circ}\)C, it produces monoclinic \(\beta\)-phase MoO\({}_{3}\) films, which are not useful for exploiting an optical phonon response [31]. Pulsed laser deposition (PLD) is a versatile and low-cost deposition technique which has already been
employed for the deposition of \(\alpha\)-phase MoO\({}_{3}\) films at 500 \({}^{\circ}\)C [32] and other metal oxides such as VO\({}_{2}\)[33], and ZnO [34]. Compared with the exfoliation technique, it allows depositing large area MoO\({}_{3}\) films, which can be much more easily handled and integrated into a multilayer structure. However, to the best of our knowledge, a detailed IR characterization aimed at the identification of possible applications of the obtained films has not been reported so far for large area MoO\({}_{3}\) films deposited either by PLD or ALD. In the following, we show that PLD can be employed to obtain \(\alpha\)-MoO\({}_{3}\) films at lower temperatures (e.g. 400 \({}^{\circ}\)C), without using harmful precursor gases normally employed by ALD and without the need of any post-growth annealing. Optical IR reflection spectra reveal a remarkable enhanced tunability of the reflection peak related to the z-axis phonon response as a function of the incident electric field polarization. Moreover, a polarization-independent perfect absorption condition is achieved for a broad angle of incidence. These features are not displayed from a single crystal flake. Our results show, for the first time, interesting possibilities for large-scale, lithography-free, polycrystalline MoO\({}_{3}\) film to be employed for IR signal management.
## 2 Sample Fabrication
### MoO\({}_{3}\) Deposition.
The main structure investigated in this study is composed of a 2200 nm (average thickness) MoO\({}_{3}\) film deposited on a fused silica substrate (Figure 1a) using pulsed laser deposition at 400\({}^{\circ}\)C and 0.1 mbar of oxygen pressure. The PLD system employs a Q-switched, frequency-tripled Nd:YAG laser (Quantel mod. YG78C20, \(\lambda\) = 355 nm) generating 6 ns pulses with an energy of 80 mJ per pulse [32; 33; 35]. The energy density was maintained at 1.2 J cm\({}^{-2}\), and the repetition rate was 4 Hz. The MoO\({}_{3}\) target was a 1-inch diameter, 0.25-inch-thick disk (purity 99.9%).
Before each deposition, the substrates were cleaned in an ultrasonic bath with acetone, subsequently rinsed with isopropanol and then dried with compressed air. After cleaning, each substrate was clamped onto an electrical heater, which allows achieving temperatures as high as 800 \({}^{\circ}\)C. The heater was then placed inside a vacuum bell jar where oxygen gas can be introduced through an electromechanical valve to maintain the desired pressure.
The PLD deposition yields better crystallinity than sputtering due to its higher kinetic energy of ablated species. Furthermore, the PLD setup allows extremely versatile deposition conditions. Thus, the proper choice of deposition parameters and the resulting fabrication constraints is of crucial importance. In the present work, we focus attention on the best choice of parameters for the narrow band IR polarization filter functionality.
### Structural and Morphological Characterization.
X-ray diffraction (XRD) measurements were performed at room temperature to evaluate the crystalline structure of the deposited layers. XRD analysis was performed by using a D5005 diffractometer (Bruker AXS, Karlsruhe, Germany) equipped with a Cu K\(\alpha\) (1.5406 A) source and operating at 40 kV and 30 mA. The following experimental conditions were used: 5s acquisition time, 0.05\({}^{\circ}\) step in a 5\({}^{\circ}\) - 90\({}^{\circ}\) 2\(\Theta\) angular range. XRD patterns showed that the high-quality MoO\({}_{3}\) films deposited at 400\({}^{\circ}\)C exhibited the stable orthorhombic a-phase of MoO\({}_{3}\), as shown in Figure 1(b). For the sake of completeness, we include in the supporting material the XRD pattern for a similar sample, deposited at lower temperature (i.e., 200 \({}^{\circ}\)C), showing a monoclinic-only phase of the MoO\({}_{3}\) film (Figure S1).
The surface morphology of the MoO\({}_{3}\) thin films has been characterized by using an Anfatech high speed atomic force microscope (AFM). Consistently with X-ray diffraction measurements, AFM images of surface morphology shown in Figures 1c and 1d revealed a grain distribution in the thin film. The average grain size is around 400 nm and root-mean-square (RMS) roughness is about 100 nm. After the deposition, the film thickness was assessed by profilometry using a Dektak 150 profilometer. The average thickness was found to be approximately 2200 nm. Details on the profilometer measurements and a picture of the Sample are reported in Figure S2 of the supporting material file.
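For reference, the RMS roughness quoted above is the standard deviation of the AFM height map after removing its mean and any background tilt. The sketch below shows this simple estimate on a 2D height array; the least-squares plane subtraction is included as a minimal leveling step and is an assumption about the post-processing, not a documented detail of the measurement.

```python
import numpy as np

def rms_roughness(height):
    """RMS roughness of an AFM height map (2D array, same units as height)."""
    ny, nx = height.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    # least-squares plane fit to remove sample tilt before computing roughness
    coeffs, *_ = np.linalg.lstsq(
        np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)]),
        height.ravel(), rcond=None)
    leveled = height - (coeffs[0] * x + coeffs[1] * y + coeffs[2])
    return np.sqrt(np.mean(leveled**2))
```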
## 3 Results and Discussion
### Polarization-dependent reflection measurements.
IR reflection measurements have been performed using an FT-IR interferometer (Invenio-R, Bruker) in the spectral range 6000-400 cm\({}^{-1}\). The IR source was a glow-bar, while the detector was a deuterated triglycine sulfate (DTGS) pyroelectric detector.
A total of 64 interferograms were acquired for each measurement, with a spectral resolution of 1 cm\({}^{-1}\). A sample area of 3x3 mm\({}^{2}\) was selected during IR data acquisition using knife-edge apertures. The FT-IR platform is equipped with a reflectance unit that allows setting the angles of incidence and reflectance, from almost normal incidence (about 13\({}^{\circ}\)) to grazing angles (85\({}^{\circ}\)), as illustrated in Figure (2a). The polarization state of incident light was selected using a holographic polarizing filter with a motorized mounter. Two different sets of measurements were performed with incidence angles of 15\({}^{\circ}\) and 45\({}^{\circ}\), respectively. Specifically, the reflectance spectra were recorded as a function of different linear polarization states of the incoming light. The measured spectral reflectance curves for different incidence angles and polarization of the incoming beam are shown in Figure (2b) and Figure (2c) for 15\({}^{\circ}\) and 45\({}^{\circ}\) incidence angles, respectively. Here 0\({}^{\circ}\) polarization angle stands for p-polarized light while 90\({}^{\circ}\) stands for s-polarized light.
From Figure (2b) we note that the polycrystalline nature of the laser-deposited MoO\({}_{3}\) simultaneously reveals, even at quasi-normal incidence, the three Reststrahlen bands associated with the alpha phase of MoO\({}_{3}\): the \(x\)-Reststrahlen band, corresponding to the frequency range from 820 cm\({}^{-1}\) to 972 cm\({}^{-1}\); the \(y\)-Reststrahlen band, extending to lower frequencies, between 545 cm\({}^{-1}\) and 851 cm\({}^{-1}\); and the \(z\)-Reststrahlen band, which is located between 962 cm\({}^{-1}\) and 1010 cm\({}^{-1}\) and partially overlaps with the Reststrahlen band of the glass substrate (fused silica) between 1000 cm\({}^{-1}\) and 1300 cm\({}^{-1}\) [18]. Moreover, the polarization-resolved set of measurements shows that, at quasi-normal incidence, the sample exhibits negligible in-plane anisotropy.
Figure 1: (a) Sketch of investigated sample. (b) X-Ray diffraction (XRD) pattern of a MoO\({}_{3}\) film deposited onto fused silica by PLD at 400 \({}^{\circ}\)C and 0.1 mbar oxygen pressure. The assigned peaks correspond to the orthorhombic phase of MoO\({}_{3}\) (ICDD 01-078-4612 card). (c-d) AFM images of a MoO\({}_{3}\) film deposited onto fused silica by PLD: (c) image area 10 \(\times\) 10 \(\upmu\)m\({}^{2}\); (d) image area 5\(\times\) 5 \(\upmu\)m\({}^{2}\).
An ABB Bomen FTLA 2000 FT-IR spectrometer [36] was also used to measure the reflectance of the samples twelve months after sample growth. The two sets of results, presented in Figure S3, agree very well with each other.
We note that the three RBs are contiguous in frequency and their sequential overlaps give rise to interesting spectral features: a) a polarization-independent perfect absorption condition at 972 cm\({}^{-1}\) (measured reflectivity less than 1%); b) a polarization-tunable narrow band reflection peak at 1006 cm\({}^{-1}\). The behaviors of these two features are completely different when we consider the 45\({}^{\circ}\) incidence angle. Results are displayed in Figure (2c). We note that the perfect absorption condition is almost preserved for both p- and s-polarizations; at 45\({}^{\circ}\) the experimental minimum reflectivity for both s- and p-polarization ranges between 1% and 2%. On the other hand, a strong modulation of the MoO\({}_{3}\) film infrared spectral features with the polarization of the incoming light (Figure 2d) has been experimentally observed at 1006 cm\({}^{-1}\). Rotating the polarization state of the incoming light (as highlighted in the legend) modifies both the resonance intensity and width. It is worth noting that the polarization-dependent reflection peak at \(\omega_{\text{max}}\)=1006 cm\({}^{-1}\) with a full width at half maximum (FWHM) \(\Delta\omega\)=17 cm\({}^{-1}\), corresponding to a quality factor as high as Q=\(\omega_{\text{max}}\)/\(\Delta\omega\) \(\sim\)60, is obtained in a lithography-free polar film. We note that the reflection peak is not a pure Lorentzian resonance. In order to provide a more accurate evaluation of the resonance linewidth we considered two Lorentzian-shaped curves respectively fitting the inner and the outer part of the experimental data. Results are reported in the supporting material (Figure S4): the FWHM of the experimental
Figure 2: (a) Sketches of investigated experimental configuration; polarization-dependent (0\({}^{\circ}\)=p-pol, 90\({}^{\circ}\)=s-pol) reflection FT-IR spectra measured at (b) 15\({}^{\circ}\) and (c) 45\({}^{\circ}\) incidence angle from a \(\alpha\)-MoO\({}_{3}\) film, grown on fused silica substrate using pulsed laser deposition; (d) surface plot of FT-IR reflection signal as a function of frequency and different polarization states of the incoming beam, measured at 45\({}^{\circ}\) incidence angle.
resonance has then been retrieved by taking the average of the FWHMs of the two Lorentzian curves, with the maximum semi-dispersion as the uncertainty: FWHM \(\pm\)\(\Delta\)(FWHM) = (19 \(\pm\)3) cm\({}^{-1}\). The Q factor has thus been evaluated as Q\(\pm\)\(\Delta\)Q=53\(\pm\)8. We finally include in the supporting material (Figure S5) the reflection spectra of the previously mentioned monoclinic \(\beta\)-MoO\({}_{3}\) film (XRD pattern depicted in Figure S1) for an incidence angle of 45\({}^{\circ}\) and several polarization angles. We note that the high-Q polarization-dependent reflection peak at 1006 cm\({}^{-1}\) does not appear. Indeed, this feature is specifically related to the \(\alpha\)-MoO\({}_{3}\) optical phonon along the crystal z-axis (OPh\({}_{z}\)) [18].
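The linewidth extraction described above can be reproduced with a standard least-squares fit. The following is a minimal sketch, not the analysis code used for the paper: it fits a single Lorentzian line shape to a synthetic, placeholder reflection peak and derives the quality factor Q = \(\omega_{\text{max}}/\Delta\omega\); the function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, r0, a, w0, fwhm):
    """Lorentzian peak on a constant background; fwhm is the full width at half maximum."""
    return r0 + a * (fwhm / 2) ** 2 / ((w - w0) ** 2 + (fwhm / 2) ** 2)

# Placeholder spectrum: wavenumber axis (cm^-1) and a synthetic noisy reflectance peak.
w = np.linspace(960, 1050, 400)
r_meas = lorentzian(w, 0.05, 0.30, 1006.0, 17.0) + 0.005 * np.random.randn(w.size)

popt, _ = curve_fit(lorentzian, w, r_meas, p0=[0.05, 0.3, 1006.0, 20.0])
_, _, w0_fit, fwhm_fit = popt
print(f"peak at {w0_fit:.1f} cm^-1, FWHM = {fwhm_fit:.1f} cm^-1, Q ~ {w0_fit / fwhm_fit:.0f}")
```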
### Theoretical models.
Theoretical modeling of the optical properties of the polycrystalline \(a\)-MoO\({}_{3}\) film is performed considering the following observations. (1) The AFM images of surface morphology reported in Figures (1c) and (1d) show that the grain sizes are much smaller than the infrared wavelengths used in the measurements. (2) The almost negligible in-plane anisotropy demonstrated by the reflectance measurements at 15\({}^{\circ}\) angle of incidence (Figure 2b) implies that the crystallite grains in the material are nearly randomly oriented. (3) The thickness of the \(a\)-MoO\({}_{3}\) film varies from about 2000 nm to 2300 nm, as provided by the profilometer (see Figure S2 of the supporting material file). Thus the material should be considered isotropic and homogeneous, and an average thickness of 2200 nm is used in the simulation for this sample.
Conventional dispersion analysis of homogeneous composites seeks the effective dielectric function based on the individual constituents, an approach also known as effective medium theory (EMT) [37]. For example, the response of polycrystalline materials with various crystallite sizes can be predicted using a modified EMT proposed by Mayerhofer [38, 39].
We also note that phonon frequencies in the polycrystalline sample can be shifted away from the TO and toward the LO position when compared with a perfect crystal. For instance, the resonance at 1006 cm\({}^{-1}\) in Figures 2b and 2c is shifted with respect to the bulk \(a\)-MoO\({}_{3}\)\(\omega_{\rm z,TO}=957\) cm\({}^{-1}\)[18]. This may be caused by the random orientation of crystallites in the polycrystalline material, or by the effect of air inclusions with an unknown volume fraction, since a rough surface of the \(a\)-MoO\({}_{3}\) film is observed. Due to such unknown parameters, EMT approaches such as the Maxwell-Garnett theory or an arithmetic average of the principal dielectric functions did not provide a satisfactory agreement with the measured spectrum.
Therefore, an isotropic Lorentz model with three oscillators, whose frequencies roughly correspond to those of the oscillators along the \(x\)-, \(y\)-, and \(z\)-directions of a perfect \(a\)-MoO\({}_{3}\) crystal, is used to model the effective dispersion of the film:
\[\varepsilon(\omega)=\varepsilon_{inf}+\sum_{i=1}^{3}\frac{S_{i}\,\omega_{i}^{2}}{\omega_{i}^{2}-i\gamma_{i}\omega-\omega^{2}} \tag{1}\]
The resonance frequencies \(\omega_{i}\), oscillator strengths \(S_{i}\), and damping coefficients \(\gamma_{i}\) in Eq. (1), with \(i\) = 1,2,3, together with \(\varepsilon_{inf}\), are determined as fitting parameters by minimizing the RMS deviation between the calculated and measured reflection spectra. The fused silica substrate is modeled using the optical constants from Ref. [40].
Initially, we used the method of Ref. [41] to calculate the reflectance of anisotropic stratified media. However, since the \(a\)-MoO\({}_{3}\) film behaves as an isotropic medium, we used the standard transfer matrix method for multilayer structures [37] in order to improve the speed and efficiency of the fitting algorithm. The results obtained with the two methods are in perfect agreement. A thickness of 2200 nm is used in the calculation. The described fitting procedure, applied to the experimental reflection spectra at 15\({}^{\circ}\) and 45\({}^{\circ}\) angles of incidence, allowed us to retrieve the parameters for the polycrystalline film with an RMS deviation of 0.054. The obtained parameters are listed in Table 1 with a typical error bound of 20%, considering uncertainties in the measurements and fitting.
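As an illustration of this modeling chain, the sketch below implements the isotropic Lorentz model of Eq. (1) and a basic 2x2 characteristic-matrix (transfer matrix) calculation of the reflectance of a single absorbing film on a semi-infinite substrate. It is a simplified stand-in for the actual fitting code: in particular, the fused-silica permittivity is replaced here by a frequency-independent placeholder value, whereas the paper uses the tabulated optical constants of Ref. [40].

```python
import numpy as np

def lorentz_eps(nu, eps_inf, strengths, freqs, dampings):
    """Isotropic Lorentz model of Eq. (1); nu, freqs and dampings in cm^-1."""
    nu = np.asarray(nu, dtype=complex)
    eps = np.full_like(nu, eps_inf)
    for S, nu0, g in zip(strengths, freqs, dampings):
        eps = eps + S * nu0**2 / (nu0**2 - 1j * g * nu - nu**2)
    return eps

def film_reflectance(nu, eps_film, eps_sub, d_cm, theta_deg, pol="s"):
    """Air/film/substrate reflectance via the 2x2 characteristic-matrix method."""
    s2 = np.sin(np.radians(theta_deg)) ** 2
    kz = lambda eps: 2 * np.pi * np.asarray(nu) * np.sqrt(eps - s2 + 0j)
    adm = lambda eps: kz(eps) / eps if pol == "p" else kz(eps)  # admittance up to a common constant
    p0, pf, ps = adm(1.0 + 0j), adm(eps_film), adm(eps_sub)
    beta = kz(eps_film) * d_cm                                  # phase thickness of the film
    m11, m12 = np.cos(beta), -1j * np.sin(beta) / pf
    m21, m22 = -1j * pf * np.sin(beta), np.cos(beta)
    r = (p0 * (m11 + m12 * ps) - (m21 + m22 * ps)) / (p0 * (m11 + m12 * ps) + (m21 + m22 * ps))
    return np.abs(r) ** 2

# Fitted parameters of Table 1; the fused-silica permittivity is a rough placeholder.
nu = np.linspace(400, 1500, 1101)
eps_f = lorentz_eps(nu, 2.69, strengths=[1.224, 0.100, 0.023],
                    freqs=[560, 841, 1005], dampings=[151, 33.0, 3.74])
R_s = film_reflectance(nu, eps_f, eps_sub=2.1 + 0j, d_cm=2200e-7, theta_deg=45, pol="s")
R_p = film_reflectance(nu, eps_f, eps_sub=2.1 + 0j, d_cm=2200e-7, theta_deg=45, pol="p")
```

Wrapping `film_reflectance` in a least-squares minimization of the difference with the measured spectra, over the \(S_{i}\), \(\omega_{i}\), \(\gamma_{i}\) and \(\varepsilon_{inf}\) parameters, mirrors in simplified form the fitting procedure used to obtain Table 1.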
As mentioned, there exists a shift of the phonon frequencies towards the LO positions for the resonators when compared with the values for a perfect \(a\)-MoO\({}_{3}\) crystal reported in [18]: \(\omega_{\rm y,TO}=545\) cm\({}^{-1}\), \(\omega_{\rm x,TO}=821\) cm\({}^{-1}\) and \(\omega_{\rm z,TO}=957\) cm\({}^{-1}\).
Figure 3 shows the comparison between the modeled and the measured reflectance spectra for s- and p-polarized incident fields at 45\({}^{\circ}\) angle of incidence. In Figure 4 we compare the measured reflectance spectra at 15\({}^{\circ}\) of incidence with the theoretical predictions obtained with the fitted parameters evaluated from the previous data. In both cases, the model calculation is in reasonable agreement with the experiment, except for 600 cm\({}^{-1}\)\(<\omega<\) 1000 cm\({}^{-1}\). The fit of the sample deposited at 400 \({}^{\circ}\)C is not as good as that of the samples deposited at other temperatures. A better agreement could be obtained if more oscillators were used. However, this was not done due to the lack of information, and it is not the
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & \(S_{i}\) & \(\omega_{\rm i}\) [cm\({}^{-1}\)] & \(\gamma_{\rm i}\) [cm\({}^{-1}\)] \\ \hline
1 & 1.224 & 560 & 151 \\ \hline
2 & 0.100 & 841 & 33.0 \\ \hline
3 & 0.023 & 1005 & 3.74 \\ \hline \multicolumn{4}{|c|}{\(\varepsilon_{inf}\) = 2.69} \\ \hline \end{tabular}
\end{table}
Table 1: Fitted Lorentz oscillator parameters for the polycrystalline \(a\)-MoO\({}_{3}\) film.
focus of this work. Overall, the RMS deviation between the model and experiments is within 0.06 for the entire spectrum. Details on the modeling of different samples and the effect of thickness are reported in Figures S6 and S7 of the supporting material, respectively.
As a final discussion, we focus on the small discrepancies between the theoretical and experimental curves. It is worth mentioning that the reflectance measurement is performed with a measuring spot diameter of the order of a few millimeters. The inhomogeneity of the sampled area, with its varying surface roughness, causes the phonon frequencies to shift to multiple positions. This effect leads to the deviation between the experimental data and the theoretical curves, which are evaluated from a fixed-frequency oscillator model.
## 4 Conclusions
Flat optics requires the design of optical components as thin, planar films to be integrated into photonic platforms. The use of vdW materials leads to 2D flat optics with ultra-compact and tunable devices. Nevertheless, atomically thin optical elements suffer from alignment issues and offer limited possibilities for far-field applications. To overcome this limitation and achieve control and tuning of the spectral features over a large area, we prepared and investigated \(a\)-MoO\({}_{3}\) films using pulsed laser deposition. Although deposition parameter optimization is still required to define a process for synthesizing MoO\({}_{3}\) films with a high degree of crystallinity, our experimental findings show remarkable spectral features of the obtained polycrystalline films. Specifically, we reported both a polarization-independent perfect absorption behavior at 962 cm\({}^{-1}\), well preserved over a broad angular incidence range starting from normal incidence, and an enhanced tunability with light polarization angle of a narrow band reflection peak at 1006 cm\({}^{-1}\) with a Q factor of 53\(\pm\)8. The obtained high dynamic range of \(\Delta\)R=0.3 with off-normal excitation (which can be improved with increased incidence angle, see Figure S8 in the supporting material) is reliable and repeatable and results in a wide tunability that may have great potential for label-free biochemical sensing applications and narrow-band detection in the IR range without the use of time- and cost-consuming lithographic processes. In particular, the investigated sharp resonance can find applications in identifying spectral markers associated with specific moieties such as phenylalanine [42] or tryptophan [43], to give some examples. We stress that the low fabrication cost and the Q-factor are not the only two relevant parameters to take into account. The possibility to operate with a large sensing area without the need for microscopy or near-field techniques is an important benefit. Our large-area samples only require a basic far-field source/detector scheme, making them suitable for low-cost, mass distribution devices.
## Acknowledgements
K.A. acknowledges support from the Air Force Office of Scientific Research under Award Number FA9550-22-1-0300. K.A. and M.C.L. also acknowledge the support from University La Sapienza for the Visiting Professor Program 2020 (Bando Professori Visitatori 2020). M.C., M.C.L., M.A. and Z.M.Z. acknowledge the KITP program 'Emerging Regimes and Implications of Quantum and Thermal Fluctuational Electrodynamics' 2022, where part of this work has been done. This research was supported in part by the National Science Foundation under Grant No. PHY-1748958.
Figure 4: Comparison between the modeled (dash-dotted line) and the measured (solid line) reflectance spectra for s- and p-polarized incident fields at 15\({}^{\circ}\) angle of incidence.
Figure 3: Comparison between the modeled (dash-dotted line) and the measured (solid line) reflectance spectra for s- and p-polarized incident fields at 45\({}^{\circ}\) angle of incidence.
C.Y. was supported by the National Science Foundation (CBET-2029892).
|
2310.00469 | State of In Situ Visualization in Simulations: We are fast. But are we
inspiring? | Visualization of dynamic processes in scientific high-performance computing
is an immensely data intensive endeavor. Application codes have recently
demonstrated scaling to full-size Exascale machines, and generating
high-quality data for visualization is consequently on the machine-scale,
easily spanning 100s of TBytes of input to generate a single video frame. In
situ visualization, the technique to consume the many-node decomposed data
in-memory, as exposed by applications, is the dominant workflow. Although in
situ visualization has achieved tremendous progress in the last decade, scaling
to system-size together with the application codes that produce its data, there
is one important question that we cannot skip: is what we produce insightful
and inspiring? | Axel Huebl, Arianna Formenti, Marco Garten, Jean-Luc Vay | 2023-09-30T19:11:23Z | http://arxiv.org/abs/2310.00469v1 | # State of In Situ Visualization in Simulations:
###### Abstract
Visualization of dynamic processes in scientific high-performance computing is an immensely data intensive endeavor. Application codes have recently demonstrated scaling to full-size Exascale machines, and generating high-quality data for visualization is consequently on the machine-scale, easily spanning 100s of TBytes of input to generate a single video frame. In situ visualization, the technique to consume the many-node decomposed data in-memory, as exposed by applications, is the dominant workflow. Although in situ visualization has achieved tremendous progress in the last decade, scaling to system-size together with the application codes that produce its data, there is one important question that we cannot skip: is what we produce insightful and inspiring?
in situ visualization, high-performance computing, particle-in-cell, reflections, directions, lightning presentation submissions
## 1. Introduction
In situ visualization is a tremendously powerful workflow to generate insight into the largest simulations run today. Recently, the 2022 Gordon Bell Prize-winning application WarpX (Bollman, 2022) was used to run in situ visualization on 552 nodes of the Frontier supercomputer (Bollman, 2022).
Immediate visualization of simulation dynamics at scale, from various camera angles, is powerful and helpful, providing answers to domain-science questions such as: Is a simulation evolving as planned? Are numerical options and resolution sufficiently set? Are many hardware or software issues/bugs appearing at scale? Yet, the scientifically most important question is: Does the visualization develop insight?
Gaining scientific insight from simulations is a complex and iterative process, with domain scientists connecting existing theory, empirical evidence and data from experiments and simulations. Visualizations can produce qualitative and quantitative representations of the dynamics at play. These representations can solidify understanding, guide theoretical model building, and help to test approximations and assumptions. An attractive visualization does help to communicate results and might inspire new scientific ideas.
Particularly for the latter part, domain scientists and audiences will compare the quality of their visualization with the state of the art seen in everyday life: movies, games, advertising, etc. That is a high bar, given the photo-realistic capabilities of these industries at high frame rates. Based on these expectations, can we produce in situ visualizations of scientific data that are awe-inspiring and stimulate our minds? And - how much cost and/or scalability penalty are we willing to trade for this in high-performance computing?
## 2. Scalable Methods Wanted
Many algorithms offered in contemporary visualization frameworks (Bollman, 2022; Graf et al., 2018; Graf et al., 2019; Graf et al., 2019) are able to exploit some locality, e.g., by domain decomposing ray traces and iso-contour searches and composing the results later on (Graf et al., 2019). Yet, advanced visualization techniques for casting shadows, tracing reflections, sorting collisions with objects, etc. are notoriously non-local and are thus challenging for multi-GPU implementations. Even volume-rendering more than one spatially overlapping source is non-trivial to do _in situ_, since established methods depend on a sampling technique that is hard to scale (Bollman, 2022). Additionally, many visualization techniques that scientists can use in single-node implementations would be highly desirable as distributed implementations for in situ frameworks: taking Figure 1 as an example, if this were not generated _in situ_, the authors would add multiple light sources, cast hard and soft shadows, select some isocontours for semi-transparent representation, and would smooth the generated iso-contours by adding additional triangles that interpolate beyond the original resolution of the data source.
Consequently, there is a continued need for new, innovative, scalable in situ visualization methods. Both fast, low-overhead and higher-overhead (yet scalable), high-quality methods are needed. With respect to scalability, maybe there are tricks one can borrow from other communities to generate artificial locality: occlude far-focus parts with mist as in gaming, simplify shadow masks and reflections, or aggressively exploit the adaptive resolution of mesh-refined data sources. Additionally, successful in situ implementations and workflows can likely be enhanced and benefit from evolution through standardization of APIs, vendor abstractions, render scene control and data descriptions, e.g., (Bollman, 2022; Graf et al., 2019; Graf et al., 2019).
## 3. Selected in Situ Visualization Needs
Adding to the challenges of addressing expectations set from offline rendering for in situ visualization, we surveyed the Beam, Plasma & Accelerator Simulation Toolkit (BLAST) (Bollman, 2022; Graf et al., 2018; Graf et al., 2019) codes and identified three selected needs specific to in situ visualization.
First, we noticed that domain scientists have to relearn how to express rendering scene descriptions for each in situ tool. Standardization is needed (Graf et al., 2019). Another approach might be domain-specific options in the simulation input language, automating the creation of visualization-configuration templates with mostly defaulted options - ready to be configured further for details by the inclined scientists when needed.
Second, video generation of iso-contours, glyphs (e.g., vectors placed in space), etc. often creates "flicker" effects for surfaces and the pointing of objects, simply based on the roughness of the simulation data and the steps selected for visualization. Research into transitions (or animations) between key/data frames with low memory overhead for HPC could be beneficial to reduce such effects.
Third, we also identified a commonly used algorithmic and simulation pattern for which in situ visualization would be ideally suited, but are not aware of any implemented solution yet: rendering of spatially-sliced data pipelines. In a large class of modeling codes, efficient solutions can be calculated by splitting the 3D domain over one axis. Instead of advancing the whole domain by an update, algorithms update a slice of the domain, e.g., from the back to the front of the 3D domain, and parallelize for the third spatial axis in _time_. Without spatially sliced rendering tools, a large number of algorithms and codes currently need to fall back to costly data output to "reconstruct" the spatial data domain that is required at once in offline visualization. Examples in laser-plasma and accelerator physics are the boosted frame technique (Graf et al., 2019; Graf et al., 2019; Graf et al., 2019) as shown in figure 1 (a more meaningful representation would transform slice-wise to the laboratory frame), the quasi-static method (Graf et al., 2018), or representations in reference trajectory space instead of time and space (Graf et al., 2019).
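To make the sliced pattern concrete, the snippet below sketches, with purely illustrative names and synthetic data, how an in situ reduction can consume one z-slice at a time and fold it into a running 2D projection, so that the full 3D volume never has to be materialized for offline rendering.

```python
import numpy as np

def simulate_slice(iz, ny, nx):
    """Stand-in for the z-slice produced by a sliced solver at step iz."""
    y, x = np.mgrid[0:ny, 0:nx]
    return np.exp(-((x - nx / 2) ** 2 + (y - ny / 2) ** 2) / (50.0 + iz))

nz, ny, nx = 256, 128, 128
projection = np.zeros((ny, nx))
for iz in range(nz):                                    # sweep the domain slice by slice
    data_slice = simulate_slice(iz, ny, nx)
    np.maximum(projection, data_slice, out=projection)  # in situ reduction: max-intensity projection
# 'projection' can now be colored and written to an image without ever
# holding the full nz * ny * nx volume in memory.
```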
We believe addressing these challenges is timely, and the resulting in situ visualizations will provide insight and inspiration for scientists.
## Acknowledgments
This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. This research was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and Office of High Energy Physics, Scientific Discovery through Advanced Computing (SciDAC) program. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
|
2309.05472 | LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for
Self-supervised Representations of French Speech | Self-supervised learning (SSL) is at the origin of unprecedented improvements
in many different domains including computer vision and natural language
processing. Speech processing drastically benefitted from SSL as most of the
current domain-related tasks are now being approached with pre-trained models.
This work introduces LeBenchmark 2.0 an open-source framework for assessing and
building SSL-equipped French speech technologies. It includes documented,
large-scale and heterogeneous corpora with up to 14,000 hours of heterogeneous
speech, ten pre-trained SSL wav2vec 2.0 models containing from 26 million to
one billion learnable parameters shared with the community, and an evaluation
protocol made of six downstream tasks to complement existing benchmarks.
LeBenchmark 2.0 also presents unique perspectives on pre-trained SSL models for
speech with the investigation of frozen versus fine-tuned downstream models,
task-agnostic versus task-specific pre-trained models as well as a discussion
on the carbon footprint of large-scale model training. Overall, the newly
introduced models trained on 14,000 hours of French speech outperform
multilingual and previous LeBenchmark SSL models across the benchmark but also
required up to four times more energy for pre-training. | Titouan Parcollet, Ha Nguyen, Solene Evain, Marcely Zanon Boito, Adrien Pupier, Salima Mdhaffar, Hang Le, Sina Alisamir, Natalia Tomashenko, Marco Dinarelli, Shucong Zhang, Alexandre Allauzen, Maximin Coavoux, Yannick Esteve, Mickael Rouvier, Jerome Goulian, Benjamin Lecouteux, Francois Portet, Solange Rossato, Fabien Ringeval, Didier Schwab, Laurent Besacier | 2023-09-11T14:13:09Z | http://arxiv.org/abs/2309.05472v2 | LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech
###### Abstract
Self-supervised learning (SSL) is at the origin of unprecedented improvements in many different domains including computer vision and natural language processing. Speech processing drastically benefitted from SSL as most of the current domain-related tasks are now being approached with pre-trained models. This work introduces _LeBenchmark 2.0_ an open-source framework for assessing and building SSL-equipped French speech technologies. It includes documented, large-scale and heterogeneous corpora with up to 14,000 hours of heterogeneous speech, ten pre-trained SSL wav2vec 2.0 models containing from 26 million to one billion learnable parameters shared with the community, and an evaluation protocol made of six downstream tasks to complement existing benchmarks. _LeBenchmark 2.0_ also presents unique perspectives on pre-trained SSL models for speech with the investigation of frozen versus fine-tuned downstream models, task-agnostic versus task-specific pre-trained models as well as a discussion on the carbon footprint of large-scale model training.
keywords: Self-supervised learning, speech processing, dataset, speech benchmark, French language Pacs: 89.20.Ff, 07.05.Mh Pacs: 68T07, 68-04
Footnote †: journal: Computer Speech & Language
## 1 Introduction
Through solving pretext tasks automatically extracted from massive unlabeled data, Self-Supervised Learning (SSL) powered deep learning systems deliver groundbreaking performance across a wide range of domains including audio, speech, and language processing [1; 2; 3], computer vision [4; 5], robotics [6], embedded devices and sensors [7], and medicine [8; 9]. In the specific context of speech processing, almost every sub-field has been largely impacted by newly available large pre-trained SSL models. Indeed, impressive improvements and state-of-the-art performance on competitive datasets have been reported for Automatic Speech Recognition (ASR) [10; 11; 12], Automatic Emotion Recognition (AER) [13; 14; 15], Automatic Speaker Verification (ASV) [16; 17; 15], Automatic Speech Translation (AST) [18; 19], Spoken Language Understanding (SLU) [15; 20; 21; 22], Speech Enhancement (SE) [23; 24; 25], Speech Separation (SS) [23; 26], and many others. Despite most leaderboards being conceived around the English language, SSL has also been reported to be remarkably useful for under-resourced languages, as demonstrated by A. Babu et al. [19] and H. Nguyen et al. [18], drastically increasing the accessibility of cutting-edge speech technologies across many languages.
Naturally, the flourishing field of SSL for speech calls for fair comparisons and standardized evaluation protocols properly assessing the added value of each newly introduced architecture. Following other early-adopter domains including Natural Language Processing (NLP) with, for instance, the GLUE [27] and SuperGLUE benchmarks [28], a first English-only evaluation suite appeared: SUPERB [15]. In the latter, 13 different tasks based on well-known
datasets have been bundled together to benchmark novel SSL models following a common fine-tuning evaluation protocol. Nonetheless, SUPERB does not standardize the pre-training process and hyperparameters, and models trained on hundreds of thousands of hours of speech appear in the same leaderboard as those learned with a few hundred hours or even different languages. SUPERB has been extended to generative tasks such as speech enhancement following the same standardized evaluation protocol in SUPERB-SG [26]. _LeBenchmark_ approached the issue of SSL benchmarking in French from a unified perspective by freezing the available pre-training data as well as the fine-tuning procedure [29; 14]. It also introduced a set of pre-trained SSL models available to the community including the largest and best-performing French SSL systems. Aside from these two attempts, most SSL models currently are being compared in an arbitrary and heterogeneous fashion across different pre-training and fine-tuning datasets, evaluation protocols, and hyperparameters tuning. As a matter of fact, the standardization and available resources revolving around SSL evaluation remain scarce, and it is of crucial interest to the community that further efforts are put in those directions. Indeed, the scientific value of a released model may only be validated if proven against a rigorous, fair and replicable evaluation protocol.
With the first version of _LeBenchmark_[29; 14], and following the definition of D. Schlangen [30], we aimed at providing the necessary foundations for the investigation and comparison of SSL models towards French-based downstream tasks. _LeBenchmark 2.0_ builds upon the latter accomplishment to provide a standardized, fully replicable, and extended framework for the assessment and development of SSL representations of French speech. In particular, we release a well-curated pre-training set containing up to fourteen thousand hours of heterogeneous French speech (Section 3), three novel pre-trained SSL models ranging from thirty million to one billion parameters (Section 4), as well as two new evaluation tasks for ASV and syntactic analysis of spoken French (Section 5). _LeBenchmark 2.0_ also widens the discussions on the topics of the energy footprint of SSL models (Section 6), the difference between language-specific and language-agnostic pre-training (Section 5). In short, _LeBenchmark 2.0_ is a collective attempt at unifying the community of SSL for the French language around common models, datasets and evaluation protocols.
## 2 Evaluating SSL models with LeBenchmark 2.0: background and motivations
SSL for audio and speech is the process of defining and solving unsupervised proxy tasks, also referred to as pretext tasks or workers, motivated by the nature of the input signal itself. Proxy tasks define both an objective and a transformation applied to the training samples to extract the training targets. In practice, SSL models are first pre-trained following the latter strategy before turning into frozen or fine-tuned feature extractors for common supervised learning tasks. A major benefit of SSL approaches is the ability to leverage the ever-growing mass of available unlabeled data to drastically increase the performance observed on much more expensive and complex to obtain human-labeled tasks. In the context of audio and speech, SSL-based systems occupy the top ranks of most leaderboards and are widely adopted by the community with up to a tenth of recent proceedings from top-tier speech conferences (i.e. year 2023) containing at least one reference to SSL.
SSL strategies for audio and speech may be divided into four different families: generative, predictive, contrastive, and multi-task. Generative methods aim at reconstructing the input audio signal after various corruptions ranging from masking to additive noises. For instance, Mockingjay [31], Tera [20], DecoAR 2.0 [32], Speech-XLNet [33], MPC, pMPC [34] and data2vec [35] optimize their parameters towards the reconstruction or reordering of masked/shuffled input frames while Autoregressive Predictive Coding (APC) [36] reconstructs the input signal. Predictive systems, including WavLM [12], HuBERT [10], or BEST-RQ [37] aim at predicting unsupervised discrete labels (e.g. clustering) obtained from the input samples. Contrastive approaches such as wav2vec 2.0 [11] or Contrastive Predicting Coding (CPC) [38], on the other hand, optimize their latent representation to facilitate the distinction between positive and negative candidates originating from the given signal. Finally, multi-task SSL proposes to combine different objectives or modalities to build a rich feature extractor. For example, PASE+ [39] merges up to ten different workers ranging from signal reconstruction to contrastive learning during the pre-training process.
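To make the contrastive family more concrete, the snippet below gives a minimal, illustrative sketch of an InfoNCE-style objective of the kind used by wav2vec 2.0: at each masked frame, the context vector must be more similar to its own quantized target than to a set of distractors. This is not the Fairseq or SpeechBrain implementation; the tensor shapes and the sampling of distractors are deliberately simplified.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(context, targets, distractors, temperature=0.1):
    """context, targets: (B, T, D); distractors: (B, T, K, D) sampled from other frames."""
    candidates = torch.cat([targets.unsqueeze(2), distractors], dim=2)    # (B, T, K+1, D)
    sims = F.cosine_similarity(context.unsqueeze(2), candidates, dim=-1)  # (B, T, K+1)
    logits = sims / temperature
    labels = torch.zeros(logits.shape[:2], dtype=torch.long)              # true target at index 0
    return F.cross_entropy(logits.flatten(0, 1), labels.flatten())

B, T, D, K = 2, 50, 256, 100
loss = contrastive_loss(torch.randn(B, T, D), torch.randn(B, T, D), torch.randn(B, T, K, D))
```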
Such a rich landscape of models may be seen both as a curse and a blessing by the scientific community. It offers a wide range of possibilities and research directions but also suffers from a strong lack of evaluation standards. In fact, even the simple task of identifying the best-performing paradigm for a specific downstream task remains impossible with the current state of the art in SSL for audio and speech. Indeed, the construction and evaluation of SSL models may vary along numerous axes, hence drastically hindering the ease of comparison between novel
solutions. _LeBenchmark 2.0_ specifically aims at standardizing those axes to speed up, facilitate, and democratize research around SSL pre-training in French.
More precisely, the life cycle of any SSL model is comprised of three major events: pre-training data gathering, training, and downstream evaluation. Ideally, the two latter steps should be merged, enabling the evaluation and comparison of SSL models at pre-training, hence alleviating a time-consuming downstream assessment. In practice, however, this idea appears as a major scientific and technical challenge as the literature relies entirely on the above-described three-step process. Unfortunately, each step may introduce important variations leading to heterogeneous and unreplicable evaluation protocols. For instance, PASE+ was trained on 100 hours of speech while HuBERT processed 60,000 hours, making it easy to define the best-performing model but practically impossible to distinguish the best pre-training strategy. Other variations include, but are not limited to: differences in pre-training languages and data type during step one (e.g. spontaneous against read speech), compute resources at step two (e.g., a single Nvidia GTX 1080 Ti for Mockingjay against 128 Nvidia Tesla V100 for wav2vec 2.0) or the lack of standards during downstream fine-tuning at step three (e.g., fine-tuning against frozen feature extractors, pre-training dataset included or excluded from the downstream evaluation, or simply the list of downstream tasks to include). Ultimately, such requirements, and particularly the need for large compute resources limit the access to SSL pre-training research to a tiny subset of well-equipped institutions and companies, drastically limiting the exploration and emergence of novel paradigms.
Aside from pre-training efficiency, the community naturally attempted to standardize the third step while developing and comparing their models. For instance, ASR evaluation using the Librispeech dataset can be found for MockingJay, wav2vec 2.0, HuBERT, or WavLM, while speaker recognition with VoxCeleb has been reported in PASE+ and MockingJay. Nonetheless, in most cases, the employed downstream architectures, evaluation protocols, or hyper-parameters are entirely different, making it impossible to distinguish models that differ strongly in their pre-training process (e.g. PASE+ and HuBERT). This also prevents a strict comparison between close-performing models (e.g. WavLM and HuBERT).
The increasingly adopted SUPERB benchmark [15] defines a set of English downstream tasks to compare SSL models, hence facilitating step three. Despite a long list of 13 tasks, SUPERB suffers from a constrained fine-tuning procedure that forces all pre-trained SSL feature extractors to remain frozen and use a fixed decoder to solve the task of interest. Unfortunately, state-of-the-art SSL results and real-life use cases mostly, if not only, come with a joint fine-tuning of the SSL extractor and the decoder. S. Zaiem et al. [40] have also demonstrated that freezing all the downstream architectures and reducing them to a tiny subset could lead to misleading leaderboard rankings. Since the data preparation of step one is not standardized within SUPERB, it remains challenging to compare different SSL pre-training methodologies as the amount and quality of the data often vary between available SSL models. _LeBenchmark_ is the first attempt at standardizing both steps one and three as well as providing replicable and competitive baselines for step two for further investigation from the community interested in the French language.
Finally, the current trend in large-scale SSL is to associate hundreds of languages [19] during pre-training without any regard to potential biases or degradation in performance induced by such a mixing. However, it remains unclear if combining unrelated and potentially distant dialects may harm the performance observed compared to a model trained on a single and well-resourced language (e.g. English). In particular, with _LeBenchmark_, we decided to benefit from the available unsupervised and heterogeneous French speech corpora available to train multiple language-specific SSL models [29, 14], and we have demonstrated that such well-designed models usually outperform massively multilingual alternatives. Interestingly enough, and despite French being the fifth most spoken language, _LeBenchmark_ is the only attempt at standardizing the data collection, pre-training, and evaluation phases of French SSL models. With _LeBenchmark 2.0_ we wish to further enhance our already adopted unified SSL framework for the French community, as both industry and academic institutions delivering state-of-the-art speech technologies are now building SSL-powered solutions.
More precisely, _LeBenchmark 2.0_ extends [29, 14] in every aspect composing the framework and the three steps of the SSL life-cycle:
* _SSL data collection_. [29] and [14] offered carefully curated and documented corpora with 3,000 and 7,000 hours of French respectively. _LeBenchmark 2.0_ extends the openly available pretraining resources to 14,000 hours with the same quality of documentation.
* _SSL models pre-training_. [29] and [14] delivered up to seven pre-trained SSL models to the community based on
the well-known Fairseq toolkit [41]. Following our newly introduced 14,000 hours of data, _LeBenchmark 2.0_ brings three more models, of which two are the largest ones available, to the community. Pre-training and model sharing is conducted with HuggingFace and SpeechBrain [42], two frameworks renowned for their open-science-minded approach. We also propose to extend the analysis and discussion on the energy footprint of large SSL models.
* _SSL Benchmarking._[29] and [14] released four standardized tasks to evaluate and compare SSL models in French: ASR, AST, SLU, and AER. _LeBenchmark 2.0_ extends this evaluation protocol to six tasks with the introduction of automatic speaker verification and syntactic analysis. We also widened the comparison with the state-of-the-art, and language-specific against language-agnostic models.
## 3 Gathering large collections of datasets
Up until recently, it was difficult to find publicly available large datasets of French speech (with the exception of EPAC). Recently, large multilingual corpora that include French have been made available, such as MLS [43] (1,096 h), or voxpopuli [44] (+4,500 h). However, these are restricted to either read or well-prepared speech, failing to provide diversity in the speech samples, such as accented, spontaneous and/or affective speech. In this work, we gathered a large variety of speech corpora in French that cover different accents (MLS, African Accented Speech, CaFE), acted emotions (GEMEP, CaFE, Att-Hack), telephone dialogues (PORTMEDIA), read (MLS, African Accented French, MaSS) and spontaneous sentences (CFPP2000, ESLO2, MPF, TCOF, NCCFr), broadcast speech (EPAC) and professional speech (Voxpopuli). Furthermore, to extend the amount of speech data used for pre-training by around 7k hours we also collected the audiocite.net dataset of non-professional read speech. Compared to MLS and Voxpopuli, our dataset is more diverse, carefully sourced and contains detailed metadata (speech type, and speaker gender). Moreover, it has a more realistic representation of speech turns in real life, compared to MLS and VoxPopuli. Each dataset is documented and can be accessed at least for research purposes.1 This section summarizes the datasets collected and how they were organized for the pre-training step and gives a short overview of the new _audiocite.net_ dataset.
Footnote 1: Some of them being released by ELRA, they are available for a small fee.
### Overview of the Datasets Used for Pre-training
Table 1 summarizes the statistics of the complete list of datasets considered for the study. The datasets have been organized in five main groups.
**Small dataset (\(\approx\) 1K hours)** is only composed of the MLS corpus for comparison with Wav2Vec2.0 [11] which uses only read English speech. It is also gender-balanced.
**Medium-clean dataset (\(\approx\) 2.7K hours)** contains MLS and EPAC only to enable further investigation on the impact of spontaneous speech on SSL representations. EPAC is a corpus of conversational speech in broadcast news.
**Medium dataset (\(\approx\) 3K hours)** includes 2,933 h of speech, from which 1,115 h is read speech, 1,626 h broadcast speech, 123 h spontaneous speech, 38 h acted telephone dialogues, and 29 h acted emotional speech. Regarding gender, we collected 1,824 h from male speakers, 1,034 h from female speakers, and 74 h from unknown gender.
**Large dataset (\(\approx\) 7.7K hours)** has 4 additional corpora: MaSS, NCCFr and Voxpopuli (unlabeled + transcribed). It includes 7,739 h of speech, from which 1,135 h is read speech, 1,626 h broadcast speech, 165 h spontaneous speech, 38 h acted telephone dialogues, 29 h acted emotional speech, and 4744 h professional speech. Except for NCCFr, no information about gender is given in these datasets.
**New Extra large dataset (\(\approx\) 14K hours)** has two additional corpora: audiocite.net and Niger-Mali Audio Collection. Audiocite.net includes freely shareable audiobooks of more than 6 600 hours. We created this dataset specifically for the project and section 3.2 gives details about how it has been acquired. The Niger-Mali Audio Collection is data
web-crawled from Studio Kalangou and Studio Tamani websites, with the authorization of Fondation Hirondelle. The gender labels were automatically produced by the LIUM_SpkDarization tool [61]. With these two added datasets, the Extra-large dataset is then composed of read speech (7,834 hours), broadcast speech (1,737 h), spontaneous speech (165 h), acted telephone dialogues (38 h), acted emotional speech (29 h), and professional speech (4,744 h).
**New Gender-specific datasets (\(\approx\) 1k hours)** are built using all datasets present in the Large dataset that contain gender information: MLS, Att-Hack, CaFE, CFPP2000, ESLO2, EPAC, GEMEP, PORTMEDIA, TCOF, NCCFr. For EPAC, we keep the totality of female speech (385 h), and downsample the male speech to a comparable amount (413 h). This results in 1,041 h of female speech, and 1,006 h of male speech in the final gender-specific datasets.
**Pre-processing for SSL training:** Recordings were segmented using time stamps from transcriptions, or cut every 30 seconds when there was no transcription (VoxPopuli _unlabeled_, audiocite.net). When available, we retrieved speaker labels and gender information. Following [11], we removed utterances shorter than 1 s, and longer than 30 s. When
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Corpus\({}_{\text{Linux}}\)** & **\# Utterances** & **Duration** & **\# Speakers** & **Mean Utt. Duration** & **Speech type** \\ \hline \multicolumn{6}{c}{**Small dataset -1k**} \\ \hline MLS French\({}_{\text{CEFA40}}\)[43] & **263,055** & **1,096-03** & **178** & **15\(\ast\)** & Read \\ & 124,590 /138,465 / - & 520:13 /576:29 / - & 80 /98 / - & 15 / 15 / 15 / - & \\ \hline \multicolumn{6}{c}{**Medium-clean dataset -2.7k**} \\ \hline EPAC\({}^{\ast}\)\({}_{\text{MC}}\)[45] & **623,250** & **1,062-0** & **1,063-0** & **1,063-0** & **9\(\ast\)** & Radio \\ & 465,859 /157,391 / - & 1,240,100 /138:852 / - & \(-\)/\(-\)/\(-\) & \(-\)/\(-\) & Broadcasts \\ \hline
**2.7k dataset total** & **886,305** & **2,722-45** & - & - & - \\ & 590,449 /295,856 / - & 1,760,23 /962:21 / - & - & - & - \\ \hline \multicolumn{6}{c}{**Medium dataset -3k**} \\ \hline African Accented French\({}_{\text{Linux}}\)[46] & **16,402** & **185-06** & **232** & **4\(\ast\)** & Read \\ & 373 / 102 / 15,927 & \(-\)/18:56 & 48/36 / 148 & \(-\)/\(-\)/\(-\) & Read \\ \hline Att-Hack\({}_{\text{CEFA5KD}}\)[47] & **36,309** & **27.02** & **20** & **27.7\(\ast\)** & Acided \\ & 16,564 / 19,755 / - & 12,071 /14:54 / - & 9 /11 /1 /26 / 27.2\(\ast\)/ - & Emotional \\ \hline CaFE\({}_{\text{CCF}}\)[48] & **936** & **1.99** & **12** & **4.4\(\ast\)** & Acided \\ & 468 / 468 / - & 0.32 / 0.36 / - & 6 / 6 / - & 4.2 / 4.7 / 3.5 / - & Emotional \\ \hline CFPP2000\({}_{\text{CEFA5FA}}\)* & **9853** & **166-06** & **6** & **6\(\ast\)** & Spontaneous \\ & 166 / 1,184 / 8,503 & 0:14 / 1:56 / 14:16 & 2 / 4 / 43 & 5 / 5 / 6 / 8 & Spontaneous \\ \hline ESLO2\({}_{\text{MC}}\)[50] & **62,918** & **361-21** & **190** & **1.9\(\ast\)** & Spontaneous \\ & 30,440 / 32,147 / 331 & 170:16 / 1657 / 009 & 68 / 120 / 2 & 1/ 1/ 9.5 / 1.7\(\ast\) & Spontaneous \\ \hline GEMEP\({}_{\text{MC}}\)[51] & **1,236** & **0.50** & **10** & **2.5\(\ast\)** & Acided \\ & 616 / 620 / - & 0.24 / 0.26 / - & 5 / 5 / - & 2.4 / 2.5 / - & Emotional \\ \hline MPF [52], [53] & **193,827** & **190.60** & **114** & **3.5\(\ast\)** & Spontaneous \\ & 5,326 / 4,649 / 9,552 & 5:26 / 4:36 / 9,03 & 36 / 29 / 49 & 3.7 / 3.6 / 3.4\(\ast\) & Acided telephone \\ \hline PORTMEDIA\({}_{\text{MC}}\) & **194,07** & **38-59** & **193** & - & Acided telephone \\ (French) [54] & 9,294 / 10,333 / - & 19:08 / 19:50 / - & 84 / 109 / - & 7.4 / 6.9 / 5.3\(\ast\) & dialogue \\ \hline TCOF\({}_{\text{CEFA5KA5}}\) & **58,722** & **353-59** & **749** & **3.5\(\ast\)** & **3.5\(\ast\)** & Spontaneous \\ (Audio)[55] & 10,377 / 14,763 / 33,582 & 9:33 / 12:39 / 31:46 & 119/ 162 / 468 & 3.3 / 3.3 / 3.4\(\ast\) & Spontaneous \\ \hline
**Medium dataset total** & **1,111,806** & **2,930,324** & **2,930,324** & & - & - \\ & 664,073 / 379,897 / 67,395 & 1,824:53 / 1,034:15 / 74:10 & - & - & - \\ \hline \multicolumn{6}{c}{**Large dataset -7k**} \\ \hline MaSS [56] & **8,219** & **19.40** & **19.40** & **10k** & **8.6\(\ast\)** & Read \\ & 8,219 / - & 19:40 / - & \(-\)/\(-\)/ & 8.6\(\ast\)/\(-\)/ & - \\ \hline NCCF\({}_{\text{MC}}\)[57] & **29,421** & **26.30** & **46** & **3\(\ast\)** & Spontaneous \\ & 14,570 / 13,927 / 929 & 12:44 / 12:59 / 00:50 & 24 / 21 / 1 & 3 / 3 / 3 / 3 / 3 3 / 3 & Spontaneous \\ \hline Usypopuli\({}_{\text{Linux}}\)[44] & **566,338** & **4,302,17** & **14k** & **29\(\ast\)** & Professional speech \\ _Unlabeled_ & - - / - & - / - / - & -/ - / - & - \\ \hline Usypouli\({}_{\text{CEFA}}\)[44] & **76,281** & **211.57** & **327** & **10k** & Professional speech \\ _unresolved_ & - / - / - / - 211:57 & - / - & - / - & - \\ \hline
**Large dataset total*** & **1,814,242** & **7,739-22** & - & - & - \\ & 682,212 / 388,217 / 99,084 & 1,853:02 / 1,041:57 / 4,845:07 & - & - \\ \hline \multicolumn{6}{c}{**Extra Large dataset - 14k**} \\ \hline Auidicite.net\({}_{\text{CCF-MF}}\)[58] & **817 295** & **6098-15** & **130** & **29\(\ast\)** & Read \\ & 425033 / 159 / 691 / 232 571 & 3477:24 / 1309:49 / 1911:21 & 35 / 32 / 63 & 29 \(\ast\) / 29 / 29 \(\ast\) & Read \\ \hline Niger-Mali Audio Collection [59][60] & **38 332** & **111:01** & **357** & **10\(\ast\)** & Radio broadcasts \\ & 18 546/ 19 786 / - & 52:15 / 58:46 / - & 192 / 165 / - & 10 \(\ast\) /
possible, overlapping speech sentences were also removed. When necessary, audio segments were converted to mono PCM 16 bits, 16 kHz.
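The following sketch illustrates these pre-processing rules (mono conversion, 16 kHz resampling, and the 1-30 s duration filter) with torchaudio; it is an illustrative stand-in rather than the scripts actually used to prepare the corpora, and the file path is a placeholder.

```python
import torchaudio
import torchaudio.functional as AF

def preprocess(path, min_s=1.0, max_s=30.0, target_sr=16000):
    waveform, sr = torchaudio.load(path)           # (channels, samples)
    waveform = waveform.mean(dim=0, keepdim=True)  # downmix to mono
    if sr != target_sr:
        waveform = AF.resample(waveform, sr, target_sr)
    duration = waveform.shape[1] / target_sr
    if duration < min_s or duration > max_s:
        return None                                # segment discarded for pre-training
    return waveform

segment = preprocess("example_segment.wav")        # placeholder path
```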
### Audiocite.net Dataset
Audiocite.net is a corpus of read French speech scraped from the www.audiocite.net website in November 2021 thanks to the kind authorization of the website administrator. The website is composed of the voluntary work of speakers who explicitly uploaded their content under a CC-BY (public domain) license2. The audiobooks are available online for free and are classified into 15 categories: tales, world news, short stories, poetry, erotic stories, documentaries, science fiction, novels, animals, audiocite-juniors, religions, kitchen, philosophies, history and theatre. All the original texts are either in the public domain or under an open license.
Footnote 2: Some of them have supplementary conditions: SA (Share Alike), ND (No Modification) or NC (No commercial Use).
The Audiocite.net corpus is composed of more than 6,600 hours of recordings from 130 speakers and is distributed on OpenSLR (www.openslr.org/139/) with the same license as the original work. All the recordings are distributed in their raw format as we downloaded them from audiocite.net (with background music, noise, unexpected speakers, mp3 format, mono or stereo). No pre-processing was applied to the files nor was ASR performed on them. We did, however, add gender information in a 'best effort' manner by guessing the gender from the name and checking the voice in case of uncertainty. This information must not be considered as ground truth and is only intended to be used for a rough statistical estimate. No attempt to remove speech that could be seen as offensive or sensitive was made. Although the dataset is provided with training, validation and testing partitions, the whole corpus was used for LeBenchmark 14K model training.
## 4 Building an Open Collection of Pre-trained French SSL Models
_LeBenchmark 2.0_ introduces three novel pre-trained French wav2vec 2.0 models to the community based on the Extra Large dataset (i.e. 14,000 hours of speech): 14K-light, 14K-large and 14K-xlarge. More precisely, _LeBenchmark 2.0_ is an open collection of 14 pre-trained SSL models made entirely available on the HuggingFace platform3. It is worth noticing that the number of released SSL models has doubled from _LeBenchmark_ to _LeBenchmark 2.0_ as four others have been added for preliminary gender analyses from M. Z. Boito et al. [62]. The latter four models are not depicted in Table 2, as they were introduced in [62]. In practice, the three new models cover different use cases as the 14K-large and 14K-xlarge are expected to deliver top-notch performance in unconstrained and centralized environments while our 14K-light will bring SSL features to more resource-constrained devices. As of now, these additions represent both the most powerful and the most parameter-efficient SSL-powered models for the French language.
Footnote 3: [https://huggingface.co/LeBenchmark](https://huggingface.co/LeBenchmark)
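As an example of how the released checkpoints can be used, the snippet below loads one of them as a frozen feature extractor with the HuggingFace transformers library. The repository identifier is given here for the 7K-large model and should be adapted to the desired checkpoint (see the LeBenchmark organization page for the exact names of the 14K models); the one-second random waveform only stands in for real 16 kHz audio.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "LeBenchmark/wav2vec2-FR-7K-large"      # adapt to the desired checkpoint
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id).eval()

waveform = torch.randn(16000)                      # one second of 16 kHz audio (placeholder)
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state   # (1, frames, 1024) for a large model
```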
### On the Choice of Wav2vec 2.0
At the time of _LeBenchmark_, wav2vec 2.0 was the only open-source and available SSL pre-training strategy. It naturally fitted our requirements as it was also achieving state-of-the-art performance. According to the SUPERB benchmark [26], the three best-performing pre-training strategies to date are WavLM, HuBERT, and wav2vec 2.0. However, no implementation of WavLM may be found to replicate the pre-training process and the reported results. HuBERT, on the other hand, suffers from a much more complex training process due to the iterative refining of the discrete targets obtained with k-means. Furthermore, and as depicted in [26], the downstream performance of _BASE_ and _LARGE_ models for HuBERT and wav2vec 2.0 is similar despite a slight advantage for HuBERT, potentially originating from extensive hyperparameter tuning. In summary, the wav2vec 2.0 architecture enables _LeBenchmark 2.0_ to compare fairly with previously introduced French models while retaining state-of-the-art performance compared to existing alternatives.
### Pre-training Hardware and Sofware Environments
Large-scale SSL pre-training was mostly conducted within the Jean Zay French supercomputer converged platform made available to researchers by GENCI4. As of January 2023, Jean Zay offers access to 2,696 Nvidia Tesla V100 (16GB and 32GB) split between nodes with four or eight GPUs each and dual Intel CPUs, as well as 416 Nvidia Tesla A100 80GB with dual AMD CPUs. Jean Zay is mostly powered by nuclear energy, hence facilitating a low carbon emission rate per FLOPS. The reported Power Usage Effectiveness (PUE) ratios are 1.21 and 0.65 depending on the scenario (i.e. considering the heat transfer to nearby infrastructures), putting Jean Zay on the list of the most efficient supercomputers worldwide5. Most models except 14K-xlarge have been trained on nodes equipped with four 32GB Nvidia Tesla V100, hence triggering multi-node training to reach the desired 32 or 64 GPUs. 14K-xlarge was trained on 80GB Nvidia Tesla A100 nodes equipped with eight GPUs each. Data read and write operations were made through a fast Network File System (NFS) without any streaming library. A total of 2.9 TB of storage was necessary to hold the entire 14,000 hours dataset. In practice, wav2vec 2.0 models could be trained with fewer and less powerful GPUs, but at the cost of significantly longer (i.e. mostly intractable) training times due to the gradient accumulation needed to reach the large batch size required by the contrastive learning nature of wav2vec 2.0.
Footnote 4: GENCI Jean Zay official presentation: [http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)
Footnote 5: [https://systematic-paris-region.org/wp-content/uploads/2022/06/slideshow-Hub-Day-HPPC-Hybride.pdf](https://systematic-paris-region.org/wp-content/uploads/2022/06/slideshow-Hub-Day-HPPC-Hybride.pdf)
The speech and audio processing toolkit landscape has significantly expanded in the last decade; however, only two tools support the full wav2vec 2.0 pre-training: Fairseq [41] and SpeechBrain [42]. In practice, all models trained with the Large dataset (7,000 hours) or smaller sets have been produced with Fairseq, while the others have been trained with SpeechBrain by adapting the wav2vec 2.0 CommonVoice recipe to our data. The Python environments correspond to those detailed in the toolkit installation scripts attached to each commit.
### Wav2vec 2.0 Hyperparameters
The wav2vec 2.0 architecture can be summarized in four distinct blocks: an acoustic feature extractor made of a convolutional neural network, a latent or contextual extractor composed of a Transformer network, a quantization module, and the final contrastive block. The entire detailed list of architectural hyperparameters (i.e. more than 70 parameters) for each model can be found in the corresponding HuggingFace repository6. In short, all models share the same CNN encoder architecture and mostly differ in the hidden dimension size and depth of the Transformer and quantizer. For instance, the sizes of the intermediate and hidden layers are \([3072,4096,5120]\) and \([768,1024,1280]\) for the _base_, _large_, and _xlarge_ models respectively. The number of blocks in the Transformer also increases with the model size: [12, 24, 48]. In practice, _LeBenchmark 2.0_ follows the configurations initially reported by A.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Model** & **Pre-training data** & **Parameters Count** & **Output Dimension** & **Updates** & **GPU Count** & **GPU Hours** \\ \hline
1K-_base_[29] & 1,096 h & 90M & 768 & 200K & 4 & 1,000 \\
1K-_large_[29] & 1,096 h & 330M & 1024 & 200K & 32 & 3,700 \\ \hline
2.7K-_base_[29] & 2,773 h & 90M & 768 & 500K & 32 & 4,100 \\ \hline
3K-_base_[29] & 2,933 h & 90M & 768 & 500K & 32 & 4,100 \\
3K-_large_[29] & 2,933 h & 330M & 1024 & 500K & 32 & 10,900 \\ \hline
7K-_base_[14] & 7,739 h & 90M & 768 & 500K & 64 & 7,900 \\
7K-_large_[14] & 7,739 h & 330M & 1,024 & 500K & 64 & 13,500 \\ \hline
**_LeBenchmark 2.0_** & & & & & & \\
**14K-_light_** & 14,000 h & 26M & 512 & 500K & 32 & 5,000 \\
**14K-_large_** & 14,000 h & 330M & 1,024 & 1M & 64 & 28,800 \\
**14K-_xlarge_** & 14,000 h & 965M & 1,280 & 1M & 104 & 54,600 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of the pre-trained wav2vec 2.0 models delivered with LeBenchmark and LeBenchmark 2.0. Newly released models are denoted in bold. “GPU Hours” refer to the total training time cumulated over “GPU Count” to reach the number of “Updates”.
Baevski et al. [11] and A. Babu et al. [19] as extensive hyperparameter and architecture searches are intractable even with Jean Zay resources.
### Pre-training Hyperparameters
The extensive list of pre-training hyperparameters is reported in the _LeBenchmark_ HuggingFace repository while the most impactful ones are given in the following. The duration of each model pre-training is measured in _"steps"_ or _"updates"_ referring to an effective update of the neural network weights or a call to the _".backward()"_ function in PyTorch. This quantity varies with the amount of available data as well as the number of neural parameters to optimize. For instance, the 14K-xlarge made one million updates compared to 200,000 for the 1K-large. Increasing the number of updates ultimately leads to better downstream performance. Nevertheless, the latter behavior must be contrasted with the high cost associated with longer training times as many dozens of GPU are being used at once. Again, we fixed the number of steps according to the original wav2vec 2.0 and XLS-R. All models are trained with the Adam optimizer and decoupled weight decay (i.e. AdamW) [63] following a two-step scheduler made of around 8% of warmup steps and a polynomial decay to zero.
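The schedule described above can be summarized by a simple closed form, sketched below; the warmup fraction, peak learning rate and decay power are illustrative placeholders, as the exact values are set per model in the released configurations.

```python
def learning_rate(step, total_steps, peak_lr, warmup_frac=0.08, power=1.0):
    """Linear warmup followed by a polynomial decay to zero."""
    warmup_steps = max(1, int(warmup_frac * total_steps))
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * (1.0 - progress) ** power

# Example: 1M updates as used for the 14K-large and 14K-xlarge models (peak_lr is a placeholder).
lrs = [learning_rate(s, total_steps=1_000_000, peak_lr=5e-4) for s in (0, 80_000, 500_000, 1_000_000)]
```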
Each training step is then associated with an effective batch size measured, for convenience, in seconds. All models from _LeBenchmark 2.0_ have been trained with an effective batch size of between two and three hours. For instance, the 14K-large model used 40 GPUs that could each fit 118 seconds of speech per batch, alongside a gradient accumulation factor of two, resulting in a total effective batch size of \((40\times 118\times 2)/3600=2.62\) hours of speech signal per step.
Ensuring a constant effective batch size necessitates a constant amount of signal per GPU. To this end, both the Fairseq and SpeechBrain toolkits implement a dynamic batching mechanism that automatically bundles samples together depending on their size to match the desired maximum boundary. This boundary depends on the VRAM capacity of the GPU and varies with the size of the model. For instance, the 14K-large model was trained with a boundary of 118 seconds on 32 GB GPUs while the 14K-xlarge model stayed at 60 seconds with 80 GB GPUs.
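The sketch below illustrates the idea behind this duration-based dynamic batching (an assumed helper, not the toolkit implementations): utterances are bundled until the per-GPU budget in seconds is reached, so that every step carries roughly the same amount of signal.

```python
# Assumed illustration of duration-based dynamic batching: group utterance indices so
# that each batch stays under a per-GPU budget expressed in seconds of speech.
from typing import List

def dynamic_batches(durations: List[float], budget_s: float) -> List[List[int]]:
    batches, current, current_len = [], [], 0.0
    for idx, dur in sorted(enumerate(durations), key=lambda x: x[1]):
        if current and current_len + dur > budget_s:
            batches.append(current)
            current, current_len = [], 0.0
        current.append(idx)
        current_len += dur
    if current:
        batches.append(current)
    return batches

# e.g. the 118-second budget used per 32 GB GPU for the 14K-large model
print(dynamic_batches([8.4, 19.7, 3.1, 12.0, 15.5, 110.0], budget_s=118.0))
```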
To limit the VRAM consumption due to the quadratic increase in complexity of the Transformer self-attention with the sequence length, all samples are cropped at 20 seconds. The remaining signal is simply used as a different example. Similarly, and to avoid masking the entirety of the sample for the contrastive loss, segments shorter than 2 seconds are removed for Fairseq models while they are concatenated for SpeechBrain models. For the largest model, i.e. 14K-xlarge, audio samples are cropped at 15 seconds, as no real degradation from slightly shorter sequences is to be expected, as demonstrated by Y. Gao et al. [64].
Finally, masks applied to the output of the feature extractor (i.e. CNN) are of 10 consecutive frames for all models. Masking probabilities, however, change with the model size. Indeed, 50% of the frames are masked for _base_ models compared to 75% for the _large_ and _xlarge_ models.
### Wav2vec 2.0 Pre-training: tips and tricks
Due to the very high entry ticket to pre-training large SSL models, almost no resources exist to help the community with this process. In particular, we found that both the Fairseq and SpeechBrain wav2vec 2.0 pre-training recipes do not transfer seamlessly to the _LeBenchmark 2.0_ datasets. Such difficulties in training certainly originate from the high complexity of the pipeline: hundreds of millions of neural parameters on thousands of hours of speech with dozens of distributed GPUs. In the following, we propose a few tips and tricks to avoid the two most common issues encountered while pre-training _LeBenchmark 2.0_ models: exploding losses and collapsing representations.
Exploding losses (i.e. NaN values) were the most common issue faced when training for longer periods of time. Indeed, all models trained for more than 500K steps experienced infinite losses at some point, whether with Fairseq or SpeechBrain. As expected, simply changing the learning rate did not help, as both toolkits already carefully designed the scheduler as well as gradient clipping strategies to avoid any perturbation in the gradient flow. In fact, mixed-precision training was most of the time responsible for this issue. Hence, upon explosion, simply resuming the training with full precision (i.e. fp32) was sufficient to finish the training. This may be explained by extremely small weights or values appearing after extended training with AdamW due to the weight decay, hence reaching the rounding-to-zero floor of 16-bit floats. It is worth noting that switching to bfloat16 (i.e. fp32 value range) instead of float16 entirely solved the issue with SpeechBrain while preserving the training speed.
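A minimal sketch of this fix is given below, assuming a standard PyTorch training step rather than the actual Fairseq or SpeechBrain internals: mixed precision is run with bfloat16, which keeps the fp32 exponent range and avoids the underflow to zero observed with float16.

```python
# Assumed sketch: mixed-precision step with bfloat16 autocast (no GradScaler needed,
# unlike float16), which avoided the NaN losses described above.
import torch

model = torch.nn.Linear(1024, 256).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)

x = torch.randn(8, 1024, device="cuda")
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = model(x).pow(2).mean()   # placeholder loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```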
Collapsing representations may happen when the contrastive task is too hard or too simple. It can easily be spotted at training time as the accuracy, defined as the cosine similarity between projected and quantized latent representations over the initially masked frames, quickly jumps to values close to 100%. In practice, this can easily be avoided by increasing the diversity loss and the mask length or probability. Indeed, a very high accuracy may simply mean that only very few entries of the quantization codebook are used, and are hence easy to predict. The latter phenomenon may especially arise with a dataset composed of short audio segments.
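Two quantities we found useful to monitor for this failure mode are sketched below (assumed helpers, not part of either toolkit): the contrastive accuracy over the masked frames and the perplexity of the codebook usage, which drops when only a few entries are used.

```python
# Assumed monitoring helpers for collapse: contrastive accuracy over masked frames and
# codebook usage perplexity (low perplexity means only a few codebook entries are used).
import torch
import torch.nn.functional as F

def contrastive_accuracy(projected: torch.Tensor, quantized: torch.Tensor) -> float:
    # projected, quantized: (num_masked_frames, dim) at the masked positions
    sims = F.cosine_similarity(projected.unsqueeze(1), quantized.unsqueeze(0), dim=-1)
    return (sims.argmax(dim=1) == torch.arange(len(projected))).float().mean().item()

def codebook_perplexity(code_probs: torch.Tensor) -> float:
    # code_probs: (frames, codebook_size) soft assignments from the quantizer
    mean_probs = code_probs.mean(dim=0)
    return torch.exp(-(mean_probs * torch.log(mean_probs + 1e-7)).sum()).item()
```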
## 5 Standardized and Replicable Benchmark of French Tasks for SSL Models
_LeBenchmark 2.0_ introduces two new tasks to _LeBenchmark_ for a total of six: automatic speech recognition (ASR), spoken language understanding (SLU), automatic speech translation (AST), automatic emotion recognition (AER), syntactic analysis (SA) and automatic speaker verification (ASV). We designed our benchmark to best cover the different criteria that the community may expect from pre-trained extractors. In particular, SSL models must integrate information related to transcription (ASR), semantics (SLU, SA), translation (AST), and paralinguistics (AER, ASV). To validate the robustness of SSL to varying data volumes, we also selected corpora matching all potential use cases: high (ASR), medium (SLU, AST, ASV), and low (AER, SA) resource.
**SSL Models configurations.** In all the following evaluations, SSL models may be described as _"fine-tuned"_ or _"frozen"_ and _"task-agnostic"_ or _"task-specific"_. Indeed, and contrary to the SUPERB benchmark, we investigate different scenarios corresponding to various real-life applications. In SUPERB, all models are frozen, meaning that the neural parameters of the SSL models are not updated with the downstream task of interest. Having a heterogeneous set of downstream decoders is of crucial importance to avoid variations in the final ranking, as demonstrated by S. Zaiem et al. [40]. In _LeBenchmark 2.0_, we also investigate the results obtained with fine-tuned models, where both the SSL model and the downstream decoder are trained to solve a given task, hence reaching much higher levels of performance. This comes at the cost of a more compute-intensive fine-tuning stage. Task-agnosticism or specificity simply distinguishes the standard pre-trained SSL models from their already fine-tuned equivalents. For instance, one may wish to first fine-tune a wav2vec 2.0 architecture on speech recognition before evaluating it on SLU. The latter ASR-specific model is referred to as being "task-specific".
**Considered SSL baselines.** Among the different evaluations reported below, _LeBenchmark 2.0_ aims at providing two different levels of study: (a) evaluate the relative performance improvement between the different provided French SSL models, and (b) evaluate the relevance of language-specific SSL models against multilingual or out-of-domain large-scale models. First, and as _LeBenchmark 2.0_ is the only resource available for French SSL, the former comparison will only be conducted with our own models. Second, _LeBenchmark 2.0_ wav2vec models will be compared to XLS-R-300M and XLS-R-1B [19] for the multilingual aspect. Whisper [65] will only be considered in a few tasks, as two major drawbacks prevent its use in a fair comparison: (a) the training data is virtually unknown and may already contain the train or test sets of our downstream tasks, and (b) it is based on weakly supervised training, not SSL.
### Automatic Speech Recognition
Automatic speech recognition is a common downstream task to evaluate SSL models. We investigated the behavior of the different _LeBenchmark 2.0_ models with two scenarios: a challenging low-resource scenario with only 22 hours of training data on TV and radio shows, and a high-resource scenario with 428 hours of read speech.
**Downstream datasets.** Two different French datasets were used. The first one is the ETAPE corpus. This is the official dataset released during the French ETAPE evaluation campaign in 2011 [66]. It is composed of diverse French shows: TV news, TV debates, TV amusement, and radio shows. These shows are split into three subcorpora - training: 22h, validation: 7h, and testing: 7h. ETAPE is distributed in the ELRA catalogue 7. It is free for academic research. The second dataset is the French part, version 6.2, of the well-known Common Voice project [67]. This project started in July 2017 and employs crowdsourcing for both speech data collection and human validation for a large number of languages. The speech data is made of read sentences extracted from Wikipedia. It contains 428h,
24h, and 25h of speech data for the training, validation, and testing sets respectively.
**Downstream models and hyperparameters.** To conduct our experiments, we employed the SpeechBrain toolkit [42], which is built on PyTorch and designed specifically for speech-processing tasks. Additionally, we utilized the Hugging Face version of the _LeBenchmark_ models even though fairseq checkpoints are also available. The SpeechBrain toolkit offers a diverse array of recipes, and to ensure the reproducibility of our experiments, we followed the SpeechBrain recipe specific to the CommonVoice ASR task. In both of the aforementioned scenarios, we started from a pre-trained _LeBenchmark_ model and added three randomly initialized dense hidden layers of size 1,024 on top. Each of these layers was associated with the LeakyReLU activation function. Subsequently, we performed fine-tuning for speech recognition on the training data by applying the SpecAugment data augmentation technique. For optimization, we employed the Connectionist Temporal Classification (CTC) loss function [68]. The overall model's output consists of 78 tokens, encompassing characters, sub-word units, and the CTC blank character. This number is higher than in English due to the presence of numerous accepted letters in French. For example, variations derived from the letter e include é, è, ê, and ë. Other letters like ç or œ also have to be taken into account. Following the SpeechBrain recipe for CommonVoice, we optimized the model using two optimizers. The Adam optimizer was dedicated to the parameters derived from the _LeBenchmark_ wav2vec 2.0 model, while the AdaDelta optimizer was used for all the parameters on top of it. We applied dropout with a probability of 0.15 when training the three top dense hidden layers. For each experiment, training ran for 80 epochs, keeping the model that reached the best results on the development data.
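The downstream head can be summarized by the following sketch (an assumed, simplified rendition of the recipe, with the wav2vec 2.0 encoder omitted): three LeakyReLU dense layers with dropout followed by a CTC projection over the 78 output tokens.

```python
# Assumed, simplified sketch of the ASR head placed on top of the wav2vec 2.0 encoder:
# three dense LeakyReLU layers with dropout, then a CTC projection over 78 tokens.
import torch
import torch.nn as nn

class CTCHead(nn.Module):
    def __init__(self, ssl_dim: int = 1024, hidden: int = 1024, n_tokens: int = 78):
        super().__init__()
        layers, in_dim = [], ssl_dim
        for _ in range(3):
            layers += [nn.Linear(in_dim, hidden), nn.LeakyReLU(), nn.Dropout(0.15)]
            in_dim = hidden
        self.dense = nn.Sequential(*layers)
        self.ctc_proj = nn.Linear(hidden, n_tokens)  # characters, sub-words and CTC blank

    def forward(self, ssl_features: torch.Tensor) -> torch.Tensor:
        # ssl_features: (batch, frames, ssl_dim) produced by the wav2vec 2.0 encoder
        return self.ctc_proj(self.dense(ssl_features)).log_softmax(dim=-1)

ctc_loss = nn.CTCLoss(blank=0)                       # assumed blank index
log_probs = CTCHead()(torch.randn(4, 200, 1024))     # -> (4, 200, 78)
```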
**Results analysis and discussions.** In our prior study [14], we demonstrated that _LeBenchmark_ SSL models pre-trained solely on French data outperformed equivalent models pre-trained on English or multilingual data, which also included French. Our new findings focus on assessing the impact of additional French pre-training data on the SSL models. As depicted in Table 3, and consistent with the results described in [14], the 1K-_large_ model exhibits a higher word error rate compared to the 3K-_large_ model.
Section 3.1 provides detailed information about the content of the pretraining datasets used for the different models. Incorporating 2,000 hours of primarily broadcast news speech with the initial 1,000 hours of read speech used to pretrain the 1K-large model significantly improves the performance of the 3K-large model for speech recognition. However, further augmentation with 7,000 hours of formal read or prepared speech (parliamentary events) does not yield substantial improvement for the 7K-large model. Nevertheless, the performance of the 7K-large model is still significantly better than the 3K-large on broadcast news (ETAPE) and comparable on read speech (CommonVoice). This phenomenon is worsened by the introduction of the 14K hours dataset, as 14K-large and 14K-light are not able to outperform even the 3K-large. This can be explained by the nature of the added data, i.e. read speech, which may simply not help to reach better performance above a certain threshold. In most cases, the 14K-light model exhibits degraded performance for automatic speech recognition, even though speech recognition rates remain better than those of XLSR-53 as reported in [14].
### Spoken Language Understanding
Spoken Language Understanding (SLU) aims at extracting semantic representations from speech signals containing sentences in natural language [69]. Classical approaches to SLU used a cascade model made of an Automatic
\begin{table}
\begin{tabular}{l|c|c||c|c} \hline \hline
**Corpus** & \multicolumn{2}{c||}{**CommonVoice**} & \multicolumn{2}{c}{**ETAPE**} \\ \hline
**Features** & **Dev** & **Test** & **Dev** & **Test** \\ \hline
1K-large & 9.49\(\pm\)0.20 & 11.21\(\pm\)0.23 & 28.57\(\pm\)0.79 & 30.58\(\pm\)0.88 \\
3K-large & **8.00\(\pm\)**0.19 & **9.27\(\pm\)**0.20 & **22.26\(\pm\)**0.76 & 24.21\(\pm\)0.85 \\
7K-large & 8.02\(\pm\)0.18 & 9.39\(\pm\)0.21 & **21.34\(\pm\)**0.74 & **23.46\(\pm\)**0.83 \\ \hline
14K-light & 19.86\(\pm\)0.28 & 22.81\(\pm\)0.34 & 58.30\(\pm\)0.66 & 59.82\(\pm\)0.7 \\
14K-large & 8.39\(\pm\)0.19 & 9.83\(\pm\)0.21 & 23.67\(\pm\)0.81 & 26.03\(\pm\)0.89 \\
14K-xlarge & 8.26\(\pm\)0.19 & 9.83\(\pm\)0.21 & 22.38\(\pm\)0.95 & 24.67\(\pm\)0.83 \\ \hline \hline \end{tabular}
\end{table}
Table 3: ASR results in terms of Word error rate (WER%, lower is better) on Common Voice and ETAPE corpora, with pre-trained wav2vec2.0 models further fine-tuned on labeled ASR data.
Speech Recognizer (ASR) feeding a Natural Language Understanding (NLU) module [70; 71; 72; 73; 74; 75; 76]. Neural networks led to large advances for _end-to-end_ SLU systems [77; 78; 79; 80; 81; 82; 83; 84], which are preferred to cascade systems, in particular for their ability to reduce error propagation effects and to exploit acoustic components to deduce semantic information [85].
**Downstream datasets.** For French SLU benchmarking we used the well-known MEDIA corpus [86], used also in [14] and allowing thus for direct comparison. The MEDIA corpus focuses on the domain of hotel information and reservations in France. It is made of 1,250 human-machine dialogues transcribed and annotated with 76 semantic concepts. The corpus is split into 12,908 utterances (41.5 hours of speech) for training, 1,259 for development (3.5 hours), and 3,005 for test (11.3 hours).
**Downstream models and hyperparameters.** The SLU models used in this paper are the same as in [14]; the few modifications that have been introduced are described along this section. Such models have a _sequence-to-sequence_ architecture based on LSTMs and attention mechanisms [87; 88]. The encoder has a similar pyramidal structure as the one proposed in [89]; the decoder uses two attention mechanisms, one for attending the encoder's hidden states, and one for attending the decoder's previous predictions, like the self-attention module of Transformers [90]. One difference with respect to the models proposed in [14] is that we added a layer normalization after each decoder layer, which made learning more stable when using features extracted with SSL models as input, and improved the model's generalization. All models were trained to minimize the CTC loss [68]. In all our experiments we use SSL models as feature extractors. Features were given as input to SLU models as an alternative to traditional features (e.g. MFCC). Following [14], we used both task-agnostic and task-specific SSL models. In the task-specific case, SSL models were fine-tuned for the ASR output as in [14].
The models described in [14] were learned with three training steps. Each training step uses the model learned in the previous step for initializing the current model's parameters. While this strategy is the most effective, it implies a relatively high training cost. In this work we instead use full end-to-end training, as in [91]. Hence, models are learned from scratch in a single training procedure. In addition, in this work, we tested a multi-task learning setting where the encoder and the decoder are learned with two different CTC losses: with respect to the ASR output for the encoder, and with respect to the SLU output for the decoder, following a standardized output format [14]. This multi-task learning setting is indicated with _mt_ in Table 4. In order to study the impact of the SLU model size on results, especially when using features from SSL models, we tested hidden layers of size 256 and 300. Since these gave similar results in most cases, we did not further optimize the model size. We also found it beneficial, when using the fine-tuned SSL models' features, to increase the temperature at the output softmax layer to 2 (indicated as _t2_ in the table; a minimal sketch is given below). This strategy has been used successfully for model distillation [92], and intuitively has a similar effect as using smoothed labels as targets [93]. Beyond these differences, all models use the same hyper-parameters as those used in [14].
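The temperature trick itself is a one-line change at the output layer, illustrated by the short sketch below (the decoder itself is omitted).

```python
# Temperature-scaled softmax at the output layer: dividing the logits by T = 2 flattens
# the distribution, which we found stabilizes training with fine-tuned SSL features.
import torch
import torch.nn.functional as F

def tempered_log_probs(logits: torch.Tensor, temperature: float = 2.0) -> torch.Tensor:
    return F.log_softmax(logits / temperature, dim=-1)
```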
**Results analysis and discussion.** All results on the SLU task are depicted in Table 4. We report the Concept Error Rate (CER, the lower the better) on development and test data (respectively columns **Dev** and **Test** in the table), as well as the raw error rate (column **RawER**) on the development data, which is the error rate computed on the raw output of the system. The raw output of the system includes words, concept boundaries and concepts; please refer to the appendix of [14] for details.
In order to compare with previous work [14], we report results obtained using basic features as input, with features extracted with the French 7K-large wav2vec 2.0 model, the 14K-large and 14K-xlarge models, the XLSR-53-large and XLS-R-xlarge models, and with the Whisper small, medium and large models. Results with the spectrograms, 7K-large, XLSR-53-large features are comparable with [14]. The other results are contributions of this work. For each experiment, we report the best results with respect to the hidden layer size on the development data, and its corresponding multi-task learning setting (_mt_). We also specify _t2_ in the table if the best results were obtained with a temperature of 2 at the output softmax.
Our best result on validation data with spectrogram features is 30.44, which is only slightly worse than the 29.07 obtained in [14], with the advantage that in this work, the model is trained end-to-end in one step with the multi-task setting. Additionally, the increased generalization power of the model allows us to reach an error rate of 30.39 on test data, which is slightly better than the 31.10 reported in [14].
When using input features from _LeBenchmark_ models (7K-large; 14K-light, -large, and -xlarge), improvements on SLU
results are impressive, with the best CER respectively on validation and test data of 15.66 and 14.43 obtained with the French wav2vec2 14K-xlarge model's features. It is interesting, yet expected, to see that the more data used to train the SSL model, and the bigger the SSL model in terms of parameters, the better the SLU results. This trend is not completely respected with task-specific fine-tuned models, in particular with the 14K-large model. Most intuitively, this is because of the small amount of data available for the downstream task, which does not allow for an optimal fine-tuning of so many parameters. The fact that SSL models are tuned for ASR output may also play a role, since the SLU output already contains tokens instantiating concepts, and this may lead the model toward more accurate token predictions which do not always correspond to tokens triggering concepts. This last point is supported by the raw error rates (**RawER** column), accounting for tokens, concept boundaries, and concepts, where the best result does not always correspond to the best CER on validation or test data. On an absolute scale nevertheless, results obtained with fine-tuned French wav2vec2 models are the best. The concept error rate of 12.71 on test data, obtained with the _7K-large_ model, is the new state of the art on this task with an end-to-end model. When using fine-tuned model features, the best results on the test data do not always correspond to the best results on validation data, underlining that more careful hyper-parameter tuning is probably needed. An interesting outcome of SSL fine-tuned models is that the best results are almost always obtained with an increased softmax temperature (\(t2\)). In particular, we observed erratic training behaviors with the default temperature, often leading to gradient explosions. Using an increased temperature not only allows us to obtain good results but also stabilizes model training.
For comparison, we also experimented with some multi-lingual models from the literature, namely XLSR-53-large and XLS-R-xlarge [19], and _whisper_ [65] with small, medium (corresponding to a larger model than our large size; please refer to the number of parameters in the table) and large sizes. While the XLSR-53 large model, both
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{Corpus: MEDIA, Metric: Concept Error Rate (CER \%) \(\downarrow\)} \\ \hline
**Model** & **Params (SSL/SLU)** & **Input** & **RawER** & **Dev** & **Test** \\ \hline \hline
\multicolumn{6}{|c|}{**Task-agnostic models**} \\ \hline
h300 & -/13.18 & spectrogram & 59.36 & 64.96 & 58.12 \\
h300 mt & -/13.18 & spectrogram & **30.44** & 30.24 & **30.39** \\ \hline \hline
h256 [14] & 330/12.18 & 7K-large & - & 19.68 & 18.77 \\
h300 & 330/15.45 & 7K-large & 17.26 & 18.57 & 16.99 \\
h300 mt & 330/15.45 & 7K-large & **15.36** & **16.62** & **15.47** \\ \hline
h256 & 26/12.18 & 14K-light & 22.71 & 22.62 & 20.92 \\
h256 mt & 26/12.18 & 14K-light & **19.29** & **19.41** & **18.67** \\ \hline
h300 & 330/15.45 & 14K-large & 18.26 & 18.63 & 17.35 \\
h300 mt & 330/15.45 & 14K-large & **15.91** & **16.62** & **14.43** \\ \hline
h256 & 965/12.70 & 14K-xlarge & 21.44 & 17.52 & 16.24 \\
h256 mt & 965/12.70 & 14K-xlarge & **14.73** & **15.66** & **14.43** \\ \hline \hline
h256 [14] & 330/12.18 & XLSR-53-large & - & **18.45** & 18.78 \\
h256 & 330/12.18 & XLSR-53-large & 18.38 & 18.99 & 18.68 \\
h256 mt & 330/12.18 & XLSR-53-large & **17.74** & 18.84 & **17.16** \\ \hline
h300 & 965/16.07 & XLS-R-xlarge & 18.02 & 20.75 & 30.08 \\
h300 mt & 965/16.07 & XLS-R-xlarge & 18.53 & **18.57** & **29.07** \\ \hline \hline
h256 & 244/11.66 & whisper-s & 27.86 & **28.44** & 27.19 \\
h256 & 769/12.18 & whisper-m & 34.03 & 33.75 & 31.10 \\
h256 & 1550/12.71 & whisper-l & **26.39** & 28.92 & **26.90** \\ \hline \hline
\multicolumn{6}{|c|}{**Task-specific models (ASR fine-tuning)**} \\ \hline
h256 [14] & 330/12.18 & 7K-large & - & 14.58 & 13.78 \\
h300 t2 & 330/15.45 & 7K-large & 10.82 & **13.53** & 12.80 \\
h300 mt t2 & 330/15.45 & 7K-large & 10.83 & 13.95 & **12.71** \\ \hline
h300 & 26/15.45 & 14K-light & 15.07 & 17.03 & 20.02 \\
h300 mt & 26/15.45 & 14K-light & **13.66** & **15.48** & **18.22** \\ \hline
h256 t2 & 330/12.18 & 14K-large & **10.43** & **12.96** & 13.78 \\
h256 mt t2 & 330/12.18 & 14K-large & 13.76 & 14.43 & **12.85** \\ \hline
h256 t2 & 965/12.70 & 14K-xlarge & **11.19** & **13.74** & **13.88** \\
h256 mt t2 & 965/12.70 & 14K-xlarge & 11.35 & 14.58 & 14.53 \\ \hline \hline
h300 & 330/15.45 & XLSR-53 & 18.45 & 17.42 & **15.01** \\
h300 mt & 330/15.45 & XLSR-53 & **13.09** & **15.22** & 16.60 \\ \hline
h300 t2 & 965/16.07 & XLS-R & 14.87 & 17.22 & 24.15 \\
h300 mt t2 & 965/16.07 & XLS-R & **13.39** & **15.42** & **23.30** \\ \hline \end{tabular}
\end{table}
Table 4: End2End SLU results in concept error rate (CER %, lower is better \(\downarrow\)) on the MEDIA corpus. “h300” and “h256” refer to a hidden size of 300 and 256 neurons respectively, while “mt” is a multi-task ASR-SLU training. “t2” means that a Softmax temperature of value 2 was applied.
without and with fine-tuning, provides interesting results considering that it is not a model specialized in French, the extra-large model and all the whisper models provide clearly inferior performances on the test data compared to French models. For the XLS-R extra-large model, we hypothesize that the larger size of the model together with its multilingualism does not allow a good generalization on a specific French task like SLU. For whisper models, we hypothesize that the poor performances are related to the particular strategy used for training such models. Given the poor performances of whisper models compared to French models, we decided to save energy and not fine-tune them.
### Automatic Speech Translation
Automatic speech-to-text translation (AST) consists of translating a speech utterance in a source language to a text in a target language. In this work, we are interested in translating directly from French speech into text in another language, without the use of transcriptions. We investigate two downstream applications for _LeBenchmark_ models: _hybrid_ models and _end-to-end_ fine-tuning. For the former, the pre-trained model is leveraged as a feature extractor i.e. frozen. For the latter, extra layers are appended to the pre-trained model, and the whole architecture is fine-tuned on the target dataset. Training an end-to-end AST model from a pre-trained speech encoder was first proposed in [94].
**Downstream datasets.** For both AST downstream strategies, we use the multilingual TEDx dataset [95]. It covers translation directions from French to three target languages: English (en), Spanish (es), and Portuguese (pt), with the following training sizes: \(50\,\mathrm{h}\) (en), \(38\,\mathrm{h}\) (es), and \(25\,\mathrm{h}\) (pt). For end-to-end fine-tuning, we also present results for the CoVoST V2 dataset [96] containing 302 hours of French speech from CommonVoice version 4.0 translated to English.
**Hybrid downstream models and hyperparameters.** In this set of experiments, we focus on leveraging the pre-trained models as feature extractors, using their output speech representation as input for an end-to-end AST model which is trained from randomly initialized parameters. Inspired by [18; 29], this AST model is an encoder-decoder architecture which takes SSL features as input, passing them through a block of Linear-ReLU followed by 2 convolutional layers with strides of \([2,2]\) and kernel sizes of \([5,5]\). These 1D-convolutional layers reduce the sequence length by a factor of 4; the result is then sent to a Transformer [90] model having 6 encoder layers, 3 decoder layers, and hidden dimension \(D=256\). This is inspired by the s2t_transformer_xs recipe from the fairseq s2t toolkit [97]. For each language pair, we train in total 13 end-to-end models which take as input features extracted from the different SSL pre-trained models shown in Table 5. We normalize the punctuation of the text before building a \(1K\) unigram vocabulary using SentencePiece [98] without pre-tokenization. For GPU efficiency, utterances having more than \(3,000\) frames are filtered out. Each of these AST models is trained for \(500\) epochs. For all our experiments, we exploit the Adam optimizer [99] whose initial learning rate is set to \(2e-3\). This learning rate is linearly increased for the first \(10K\) warm-up steps, then decreased proportionally to the inverse square root of the step counter. The last 10 checkpoints are averaged and used for decoding with a beam size of 5. Table 5 reports the detokenized case-sensitive BLEU computed using sacreBLEU [100].
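The front-end of this hybrid model can be sketched as follows (an assumed simplification of the fairseq s2t recipe): a Linear-ReLU block and two strided 1D convolutions that divide the SSL feature sequence length by 4 before the small Transformer.

```python
# Assumed sketch of the hybrid AST front-end: Linear-ReLU projection followed by two
# 1D convolutions with stride 2 (kernel size 5), reducing the sequence length by 4.
import torch
import torch.nn as nn

class ConvSubsampler(nn.Module):
    def __init__(self, ssl_dim: int = 1024, d_model: int = 256):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(ssl_dim, d_model), nn.ReLU())
        self.convs = nn.Sequential(
            nn.Conv1d(d_model, d_model, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, ssl_dim) -> (batch, frames // 4, d_model)
        x = self.proj(feats).transpose(1, 2)  # to (batch, d_model, frames) for Conv1d
        return self.convs(x).transpose(1, 2)

out = ConvSubsampler()(torch.randn(2, 400, 1024))  # -> (2, 100, 256)
```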
**End-to-end downstream models and hyperparameters.** End-to-end AST models are trained on SpeechBrain [42] using the HuggingFace Transformers [101] wav2vec 2.0 interface with spectrogram augmentation enabled. The encoder stack is made of a wav2vec 2.0 model, followed by a linear projection of output dimension 512. The decoder stack is an 8-head, 6-layer Transformer with feed-forward projections of 2,048 neurons and an embedding size of 512. The weights of the wav2vec 2.0 model are initialized from one of the models in Table 2, and the model is trained with the NLL loss. As for end-to-end ASR models (Section 5.1), two different instances of the Adam optimizer manage the weight updates: one dedicated to the wav2vec 2.0 module, the other one to the following layers. The learning rates are respectively \(1e-5\) and \(1e-4\). The models are trained on a single A100 80GB Nvidia GPU, for 50 (CoVoST) or 100 epochs (mTEDx). In all cases, sentences longer than 35 s are removed for GPU efficiency. For models trained on the mTEDx dataset, we found that it was beneficial for performance to remove layer-dropout and dropout within the wav2vec 2.0 stack during training. We hypothesize that this is due to the limited amount of data available for fine-tuning, as large architectures seemed to benefit the most from this modification. The total number of trainable parameters depends on the wav2vec 2.0 model used: it varies between 121.3M (base), 342.5M (large), and 989.7M (xlarge). The pre-tokenization strategy is the same as in the hybrid AST setup. Lastly, we do not use pre-trained
weights to initialize the AST decoder, and we do not partially freeze the wav2vec 2.0 encoder due to poor performance.
**Hybrid results analysis and discussion.** Results are presented in Table 5 and analyzed along the following aspects:
* **Monolingual versus multilingual**. Comparing SSL models of the same size (large models), training on monolingual data (LB models) seems to be beneficial in comparison with training on multilingual data (XLSR-53 and XLS-R models). From \(3K\) hours of French data, all _LeBenchmark_ large models outperform both XLSR-53 and XLS-R models.
* **Pre-training data**. Concerning the amount of monolingual data, we observe that with the same model size (base or large), SSL models tend to improve when the amount of pre-training data increases, except for the 14K-large model whose performance is on par with that of the 7K-large model. We suspect that adding too much read speech data (\(6,600h\)) might lead to stagnation in terms of BLEU scores when jumping from \(7K\) to \(14K\) hours of training data on the mTEDx domain.
* **Model size**. Table 5 illustrates that with the same amount of pre-training data, larger models tend to be better than smaller ones for producing speech features. Surprisingly, xlarge models underperform large models, as observed with both _LeBenchmark_ 14K and XLS-R. We suspect that this is because the features generated by xlarge models are too high-dimensional for the task. Lastly, we observe that the 14K-light model significantly underperforms its base and large counterparts, hinting that it insufficiently represents speech signals due to its limited capacity.
**End-to-End results analysis and discussions.**
Tables 6 and 7 present BLEU scores for the mTEDx and CoVoST datasets, respectively.
* **Monolingual versus multilingual.** Overall, we notice that the choice of backbone model (monolingual or multilingual) matters less in end-to-end mode than in hybrid mode (the XLS-R model is not too far behind the best-performing monolingual _LeBenchmark_ model). Nevertheless, between the two best-performing models for both CoVoST and mTEDx, _LeBenchmark_ 14K outperforms XLS-R in all settings. It should however be highlighted that XLS-R is a model covering 128 languages, with the potential to reach similarly good results for at least a small portion of its covered languages.
\begin{table}
\begin{tabular}{l l c c|c c|c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{**Model**}} & \multicolumn{2}{c}{**fr-en**} & \multicolumn{2}{c}{**fr-es**} & \multicolumn{2}{c}{**fr-pt**} \\ \multicolumn{1}{c}{\multirow{2}{*}{**Model**}} & \multicolumn{1}{c}{**Size**} & \multicolumn{1}{c}{**Dev**} & \multicolumn{1}{c}{**Test**} & \multicolumn{1}{c|}{**Dev**} & \multicolumn{1}{c|}{**Test**} & \multicolumn{1}{c}{**Dev**} & \multicolumn{1}{c}{**Test**} \\ \hline \multirow{2}{*}{1K} & base & 9.18\(\pm\)0.36 & 8.98\(\pm\)0.36 & 5.09\(\pm\)0.27 & 5.64\(\pm\)0.30 & 0.39\(\pm\)0.05 & 0.49\(\pm\)0.08 \\ & large & 15.31\(\pm\)0.46 & 14.46\(\pm\)0.46 & 13.74\(\pm\)0.43 & 14.77\(\pm\)0.46 & 8.29\(\pm\)0.34 & 9.37\(\pm\)0.38 \\ \hline
2.7K & base & 15.09\(\pm\)0.49 & 14.69\(\pm\)0.48 & 13.27\(\pm\)0.43 & 14.04\(\pm\)0.43 & 4.72\(\pm\)0.27 & 5.51\(\pm\)0.28 \\ \hline \multirow{2}{*}{3K} & base & 15.05\(\pm\)0.49 & 14.80\(\pm\)0.47 & 13.19\(\pm\)0.44 & 14.27\(\pm\)0.44 & 4.44\(\pm\)0.29 & 4.72\(\pm\)0.25 \\ & large & 17.94\(\pm\)0.51 & 18.00\(\pm\)0.51 & 16.40\(\pm\)0.49 & 18.12\(\pm\)0.48 & 8.64\(\pm\)0.34 & 9.55\(\pm\)0.36 \\ \hline \multirow{2}{*}{7K} & base & 15.13\(\pm\)0.45 & 14.50\(\pm\)0.45 & 12.78\(\pm\)0.40 & 13.61\(\pm\)0.44 & 2.65\(\pm\)0.20 & 2.66\(\pm\)0.23 \\ & large & **19.23\(\pm\)**0.54 & **19.04\(\pm\)**0.53 & **17.59\(\pm\)**0.49 & **18.24\(\pm\)**0.49 & **9.68\(\pm\)**0.37 & **10.98\(\pm\)**0.41 \\ \hline \multirow{2}{*}{14K} & light & 10.31\(\pm\)0.38 & 10.92\(\pm\)0.43 & 9.83\(\pm\)0.33 & 10.52\(\pm\)0.42 & 4.96\(\pm\)0.31 & 5.79\(\pm\)0.33 \\ & large & **18.93\(\pm\)**0.40 & **18.97\(\pm\)**0.47 & **17.22\(\pm\)**0.41 & **18.12\(\pm\)**0.42 & 9.03\(\pm\)0.35 & 10.11\(\pm\)0.39 \\ & xlarge & 18.14\(\pm\)0.42 & 18.35\(\pm\)0.48 & 15.90\(\pm\)0.39 & 17.19\(\pm\)0.43 & 5.46\(\pm\)0.29 & 6.59\(\pm\)0.35 \\ \hline XLSR-53 & large & 7.81\(\pm\)0.33 & 6.75\(\pm\)0.29 & 0.49\(\pm\)0.13 & 0.52\(\pm\)0.08 & 0.43\(\pm\)0.07 & 0.36\(\pm\)0.05 \\ \hline \multirow{2}{*}{XLS-R} & large & 17.03\(\pm\)0.40 & 16.52\(\pm\)0.45 & 15.12\(\pm\)0.35 & 16.34\(\pm\)0.41 & 7.56\(\pm\)0.33 & 8.40\(\pm\)0.36 \\ & xlarge & 13.80\(\pm\)0.37 & 13.88\(\pm\)0.38 & 11.45\(\pm\)0.37 & 12.56\(\pm\)0.40 & 1.59\(\pm\)0.29 & 1.77\(\pm\)0.31 \\ \hline \hline \end{tabular}
\end{table}
Table 5: AST BLEU results (higher is better) of the feature extraction experiments (Hybrid with frozen SSL encoders) on the mTEDx dataset. The best results are in **bold**. Gray numbers denote the standard deviation computed using bootstrap re-sampling [102].
* **Pre-training data.** Focusing on monolingual models, and looking at results for large architectures only, we do not see any hints of saturation: _LeBenchmark_ models pre-trained with more speech data tend to provide better initializations for the AST task (Tables 6 and 7). Looking at base architectures, the exception for this seems to be the 2.7K-base model, which performs on par with 3K and 7K base and large models. This model differs from 3K by not including spontaneous speech. We hypothesize this pre-training setting could provide a better initialization for mTEDx and CoVoST datasets, which are made of prepared and read speech respectively.
* **Model size.** Focusing on the mTEDx results (Table 6), and looking at models trained with equal amounts of speech data, we observe that larger pre-trained models tend to increase AST performance (1K, 14K, XLS-R), with 3K and 7K being the two exceptions where we observe that large pre-trained models underperform compared to their base counterparts. The difference between base and large models on test sets for English, Spanish, and Portuguese are respectively -1, -1.5, -2.7 for 3K and -0.8, -0.6, -1.4 for 7K. The same is not observed in the hybrid setting (Table 5). We believe that, with the limited amount of trainable examples available on the mTEDx dataset, those results need to be taken with caution: on one side, larger models might be _less adaptable_ than their smaller counterparts, as they have more parameters that need adaptation with the same amount of fine-tuning data; on the other side we observe an improvement from large to xlarge using _LeBenchmark 2.0_, which contradicts this previous observation. However, in the setting where trainable data is abundantly available (Table 7), that counter-intuitive trend of results vanishes. This hints that the choice between different model sizes should also consider the amount of available data for task adaptation. Finally, it seems to always be beneficial for end-to-end AST fine-tuning to have a xlarge wav2vec 2.0 model, compared to large, but this marginal difference in performance adds a considerable overhead in the number of trainable parameters (647.1M extra trainable parameters). Lastly, we observe that the 14K-light model is a poor initialization choice for end-to-end AST. We believe this highlights how the capacity of the model is related to the encoding of high-abstraction level speech features: smaller Transformer stacks results in poor speech features (Table 5) and encoders (Tables 6 and 7). Indeed, Pasad et al. [103; 104] argue that the wa2vec 2.0 pretext task forces a drop in abstraction-level at the last layers. Due to this, the middle of the Transformer stack is where most of high-level (phonemic and word-level) information is encoded.
\begin{table}
\begin{tabular}{l l l l|c c|c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{**Model**}} & \multicolumn{2}{c}{**fr-en**} & \multicolumn{2}{c}{**fr-es**} & \multicolumn{2}{c}{**fr-pt**} \\ \multicolumn{1}{c}{\multirow{2}{*}{**Model**}} & \multicolumn{1}{c}{**Size**} & \multicolumn{1}{c}{**Dev**} & \multicolumn{1}{c|}{**Test**} & \multicolumn{1}{c|}{**Dev**} & \multicolumn{1}{c}{**Test**} & \multicolumn{1}{c}{**Dev**} & \multicolumn{1}{c}{**Test**} \\ \hline \multirow{2}{*}{1K} & base & 15.2\({}_{\pm 0.48}\) & 14.0\({}_{\pm 0.54}\) & 13.0\({}_{\pm 0.42}\) & 13.2\({}_{\pm 0.40}\) & 8.2\({}_{\pm 0.33}\) & 8.6\({}_{\pm 0.34}\) \\ & large & 16.7\({}_{\pm 0.49}\) & 16.6\({}_{\pm 0.46}\) & 15.3\({}_{\pm 0.46}\) & 16.1\({}_{\pm 0.45}\) & 9.4\({}_{\pm 0.33}\) & 10.7\({}_{\pm 0.38}\) \\ \hline \multirow{2}{*}{2.7K} & base & 18.9\({}_{\pm 0.52}\) & 18.7\({}_{\pm 0.52}\) & 17.9\({}_{\pm 0.50}\) & 17.8\({}_{\pm 0.49}\) & 11.7\({}_{\pm 0.39}\) & 12.3\({}_{\pm 0.40}\) \\ \hline \multirow{2}{*}{3K} & base & 17.9\({}_{\pm 0.48}\) & 17.9\({}_{\pm 0.51}\) & 16.8\({}_{\pm 0.49}\) & 17.1\({}_{\pm 0.46}\) & 11.3\({}_{\pm 0.42}\) & 12.4\({}_{\pm 0.42}\) \\ & large & 17.6\({}_{\pm 0.51}\) & 16.9\({}_{\pm 0.47}\) & 15.1\({}_{\pm 0.45}\) & 15.6\({}_{\pm 0.46}\) & 8.6\({}_{\pm 0.34}\) & 9.7\({}_{\pm 0.37}\) \\ \hline \multirow{2}{*}{7K} & base & 18.8\({}_{\pm 0.51}\) & 18.2\({}_{\pm 0.50}\) & 18.4\({}_{\pm 0.52}\) & 18.2\({}_{\pm 0.68}\) & 12.6\({}_{\pm 0.41}\) & 13.4\({}_{\pm 0.44}\) \\ & large & 20.1\({}_{\pm 0.52}\) & 19.0\({}_{\pm 0.57}\) & 17.4\({}_{\pm 0.52}\) & 18.8\({}_{\pm 0.49}\) & 10.7\({}_{\pm 0.37}\) & 12.0\({}_{\pm 0.41}\) \\ \hline \multirow{2}{*}{14K} & light & 6.5\({}_{\pm 0.27}\) & 5.9\({}_{\pm 0.28}\) & 5.7\({}_{\pm 0.27}\) & 5.7\({}_{\pm 0.26}\) & 3.0\({}_{\pm 0.21}\) & 2.9\({}_{\pm 0.17}\) \\ & large & 23.6\({}_{\pm 0.59}\) & 23.1\({}_{\pm 0.55}\) & 23.3\({}_{\pm 0.58}\) & 24.2\({}_{\pm 0.62}\) & 18.7\({}_{\pm 0.54}\) & 21.8\({}_{\pm 0.58}\) \\ & xlarge & **25.1\({}_{\pm 0.59}\)** & **24.4\({}_{\pm 0.60}\)** & **23.7\({}_{\pm 0.56}\)** & **25.5\({}_{\pm 0.59}\)** & **20.7\({}_{\pm 0.58}\)** & **23.7\({}_{\pm 0.62}\)** \\ \hline \multirow{2}{*}{XLS-R} & xlarge & 15.6\({}_{\pm 0.49}\) & 12.5\({}_{\pm 0.47}\) & 15.6\({}_{\pm 0.45}\) & 15.8\({}_{\pm 0.44}\) & 8.4\({}_{\pm 0.31}\) & 9.1\({}_{\pm 0.36}\) \\ \hline \multirow{2}{*}{XLS-R} & large & 19.2\({}_{\pm 0.51}\) & 18.0\({}_{\pm 0.63}\) & 19.2\({}_{\pm 0.53}\) & 19.7\({}_{\pm 0.51}\) & 13.4\({}_{\pm 0.47}\) & 14.9\({}_{\pm 0.44}\) \\ & xlarge & 23.4\({}_{\pm 0.58}\) & 22.7\({}_{\pm 0.55}\) & **23.3\({}_{\pm 0.61}\)** & **25.0\({}_{\pm 0.60}\)** & 19.3\({}_{\pm 0.54}\) & 21.3\({}_{\pm 0.56}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: AST end-to-end BLEU results (higher is better) for the mTEDx dataset. The best results are in **bold**. Gray numbers denote the standard deviation computed using bootstrap re-sampling [102].
### Automatic Emotion Recognition
Recent psychological studies suggest that emotion is needed and used in every aspect of our lives, from filtering sensory information and our perception of an event, to reasoning and thus the decisions we make [105; 106]. The automatic recognition of human emotions from audio recordings is therefore a technology that can influence many areas such as education, healthcare, and entertainment. Although much progress has been made in emotion recognition in recent years, there are still challenges, including different emotional expressions by different speakers or different microphones, making AER not quite ready for everyday use [107]. SSL models, being trained on large amounts of data, have been shown to be exceptionally good at addressing generalisation issues [14]. _LeBenchmark 2.0_ further evaluates such methods for French speech, with the newly trained models.
**Downstream datasets.** Following _LeBenchmark_, we used the RECOLA [108] and AlloSat [109] corpora, which contain continuous conversations, and the Theradia corpus, which contains utterance-based conversations. Both the RECOLA and AlloSat datasets contain spontaneous French speech. However, the RECOLA recordings are emotionally induced conversations recorded in a laboratory environment, whereas the AlloSat recordings are telephonic conversations. The annotations for both datasets are time-continuous and dimensional. For RECOLA, the emotion dimensions are based on arousal (from passive to active) and valence (from negative to positive), sampled at a 25 Hz rate. For AlloSat, a frustration-to-satisfaction dimension is used with a sampling rate of 4 Hz. Since the AlloSat dataset contains long continuous audio files, we were not able to fine-tune the SSL models on it. The AlloSat dataset contains a total of 29,704 utterances (21 h), divided into 20,785 utterances (15 h) as training set, 4,272 utterances (3 h) for development and 4,643 utterances (3 h) as test partition. On the other hand, the RECOLA dataset is much smaller, with 9 files of 5 minutes each for the training, development, and test sets. Moreover, the continuous conversations used in these two datasets differ from the utterance-based training of the SSL models. Thus, we also used the Theradia corpus to investigate the effect of fine-tuning for emotion recognition. This dataset contains 61 senior participants, nine of whom had Mild Cognitive Impairments (MCIs). The participants performed digital cognitively stimulating exercises while interacting with a virtual assistant in a natural way. The Theradia corpus contains different dimensional annotations according to the appraisal theory, and emotion labels. The emotion labels are annotated based on the perceived intensity of the label on a scale from zero (non-existent) to 100. We report results on the prediction of the ten most common core set labels in the Theradia corpus: relaxed, interested, frustrated, confident, satisfied, happy, annoyed, surprised, desperate, and anxious. The Theradia corpus contains 2,735 utterances (6 h) in total, which are divided into 1,110 utterances as training partition, 851 utterances for validation, and 774 utterances for testing.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Model** & **Size** & **Dev** & **Test** \\ \hline
\multirow{2}{*}{1K} & base & 28.5\({}_{\pm 0.21}\) & 27.9\({}_{\pm 0.20}\) \\
 & large & 30.1\({}_{\pm 0.21}\) & 30.0\({}_{\pm 0.21}\) \\ \hline
2.7K & base & 30.8\({}_{\pm 0.21}\) & 30.2\({}_{\pm 0.21}\) \\ \hline
\multirow{2}{*}{3K} & base & 29.8\({}_{\pm 0.21}\) & 29.4\({}_{\pm 0.21}\) \\
 & large & 29.4\({}_{\pm 0.21}\) & 29.0\({}_{\pm 0.21}\) \\ \hline
\multirow{2}{*}{7K} & base & 30.1\({}_{\pm 0.21}\) & 29.7\({}_{\pm 0.20}\) \\
 & large & 32.7\({}_{\pm 0.22}\) & 32.5\({}_{\pm 0.21}\) \\ \hline
\multirow{3}{*}{14K} & light & 20.5\({}_{\pm 0.18}\) & 20.0\({}_{\pm 0.18}\) \\
 & large & 32.1\({}_{\pm 0.21}\) & 31.7\({}_{\pm 0.21}\) \\
 & xlarge & **33.9\({}_{\pm 0.22}\)** & **33.7\({}_{\pm 0.21}\)** \\ \hline \hline
XLSR-53 & large & 30.4\({}_{\pm 0.21}\) & 29.6\({}_{\pm 0.20}\) \\ \hline
\multirow{2}{*}{XLS-R} & large & 30.6\({}_{\pm 0.21}\) & 30.3\({}_{\pm 0.21}\) \\
 & xlarge & 32.9\({}_{\pm 0.21}\) & 32.5\({}_{\pm 0.21}\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: AST end-to-end BLEU results (higher is better) for the CoVoST dataset. The best results are in **bold**. Gray numbers denote the standard deviation computed using bootstrap re-sampling [102].
**Downstream models and hyperparameters.** The experiments are conducted using a one-to-one sequence model with either SSL representations or Mel filterbank features. The experiments for the RECOLA and AlloSat datasets consist of time-linear sequence-to-sequence prediction of continuous dimensions of emotion. A GRU with one layer and 32 hidden units was trained with the concordance correlation coefficient (CCC) as the loss function, similar to [14]. On the other hand, for the experiments on the Theradia corpus, we used one linear layer trained with the mean squared error as the loss function. We also tried using CCC as the loss function and a GRU as the emotion prediction model for Theradia, but did not find any significant improvement in the results. Furthermore, for all the experiments, training was done with the Adam optimizer with a learning rate of 0.001, for 200 epochs with early stopping.
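For reference, the CCC objective used for the continuous dimensions can be written as the short loss below (a standard formulation; minimizing \(1-\mathrm{CCC}\) maximizes the concordance between predictions and annotations).

```python
# Concordance correlation coefficient (CCC) loss for continuous emotion dimensions.
import torch

def ccc_loss(pred: torch.Tensor, gold: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    pred_mean, gold_mean = pred.mean(), gold.mean()
    pred_var, gold_var = pred.var(unbiased=False), gold.var(unbiased=False)
    covariance = ((pred - pred_mean) * (gold - gold_mean)).mean()
    ccc = 2 * covariance / (pred_var + gold_var + (pred_mean - gold_mean) ** 2 + eps)
    return 1.0 - ccc
```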
It should be noted that the sampling rate of the dimensional annotations differs from the Mel features, which are sampled at a rate of 100 Hz, and the wav2vec representations, which are sampled at a rate of 50 Hz. Thus, during training for the RECOLA and AlloSat datasets, the targets are resampled to match the sampling rate of the representations with linear interpolation, to keep the graph active for backpropagation, while during testing, the outputs of the model are mapped to the sampling rate of the targets to keep the targets untouched for testing. On the other hand, for the Theradia corpus, the outputs of the model are averaged over the sequence, because each emotion label is defined per sequence and not continuously over the sequence.
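The alignment step can be sketched as follows (an assumed implementation): a continuous track is linearly interpolated to the target number of frames, which keeps the computation differentiable when needed.

```python
# Assumed sketch of the resampling step: linearly interpolate a continuous track
# (e.g. a 25 Hz arousal annotation) to the frame rate of the SSL representations (50 Hz).
import torch
import torch.nn.functional as F

def resample_track(track: torch.Tensor, num_frames: int) -> torch.Tensor:
    # track: (batch, time) continuous annotation or prediction sequence
    return F.interpolate(track.unsqueeze(1), size=num_frames,
                         mode="linear", align_corners=True).squeeze(1)

aligned = resample_track(torch.rand(4, 250), num_frames=500)  # 25 Hz -> 50 Hz frames
```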
**Results analysis and discussion.** The results are shown in Table 8. For the prediction of the frustration-satisfaction dimension of the AlloSat corpus, the best results were obtained for the 14K-xlarge model, showing that the use of more read speech from audio books and radio broadcasts, with more parameters, can be more effective for the continuous prediction of frustration-satisfaction. However, for the prediction of arousal and valence dimensions of emotion on the RECOLA corpus, the 14K-large and 14K-xlarge models achieve similar results. This may be because RECOLA contains clean speech, and thus the smaller parameter size may be sufficient to extract useful features for continuous emotion recognition on this dataset. The prediction of emotion labels for the Theradia corpus also shows a similar trend, with the frozen 14K models performing best. However, when fine-tuning the wav2vec 2.0 models, most of them perform better than their frozen counterparts (except for 2.7K-base, 3K-base, and 7K-base), but similarly to each other. The better performance of the fine-tuned wav2vec 2.0 models is consistent with the literature. Indeed, fine-tuning specializes the representations, which results in better AER performance for a particular data distribution. However, we may lose the ability to generalize to other data distributions [110]. On the other hand, the similarity of the results for the fine-tuned models across different data types, amounts, and _LeBenchmark_ architectures suggests that pre-training the wav2vec 2.0 models with more data or more parameters does not affect the results of fine-tuning
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Representation** & **Satisfaction** & **Arousal** & **Valence** \\ \hline Mel Filter Bank &.413 &.313 &.258 \\ \hline WavLM-large &.537 & **.690** &.450 \\ XLS-R-large &.279 &.455 &.002 \\ XLS-R-xlarge &.415 &.311 &.229 \\ \hline
1K-base &.487 &.427 &.055 \\
1K-large &.021 &.018 &.001 \\ \hline
2.7K-base &.596 &.629 &.455 \\ \hline
3K-base &.602 &.358 &.007 \\
3K-large &.040 &.097 &.000 \\ \hline
7K-base &.470 &.335 &.116 \\
7K-large &.050 &.009 &.037 \\ \hline
14K-light &.518 &.614 &.348 \\
14K-large &.462 &.664 & **.466** \\
14K-xlarge & **.657** &.649 &.437 \\ \hline \hline \end{tabular}
\begin{tabular}{l c c} \hline \hline
**Representation** & **Frozen** & **Fine-tuned** \\ \hline Mel Filter Bank &.075 (.120) & - \\ \hline WavLM-large &.166 (.158) &.241 (.136) \\ XLS-R-large &.086 (.064) &.269 (.159) \\ XLS-R-xlarge &.072 (.059) &.205 (.132) \\ \hline
1K-base &.151 (.103) &.246 (.161) \\
1K-large &.001 (.002) & **.319 (.172)** \\ \hline
2.7K-base &.083 (.094) &.013 (.015) \\
3K-base &.061 (.069) &.019 (.016) \\
3K-large &.002 (.004) &.224 (.151) \\ \hline
7K-base &.106 (.082) &.000 (.000) \\
7K-large &.000 (.002) &.230 (.143) \\ \hline
14K-light & **.241 (.133)** &.283 (.129) \\
14K-large &.190 (.127) &.229 (.151) \\
14K-xlarge &.237 (.131) &.226 (.145) \\ \hline \hline \end{tabular}
\end{table}
Table 8: The automatic emotion recognition results are expressed in terms of the concordance correlation coefficient (CCC, higher is better). The top table describes the continuous prediction of the frustration-satisfaction, arousal and valence dimensions for the AlloSat and RECOLA corpora, with frozen representations. The bottom table describes the average (and standard deviation) emotion prediction across the core set emotion labels of the THERADIA corpus.
for predicting emotion labels on the THERADIA corpus.
### Syntactic Analysis
The syntactic analysis task (also known as parsing) is a staple task in natural language processing, and historically one of the first. Syntactic parsing consists of assigning a syntactic structure to a given sentence. Thus, it is a structured prediction task. Corpora annotated with syntactic trees are key to data-driven linguistic studies, and syntactic trees may also provide useful features for downstream tasks. We focus on evaluating the self-supervised models on the task of joint automatic speech recognition and syntactic analysis. The traditional technique to obtain the syntactic structure from speech would be a pipeline approach: using an ASR model to get the transcription and then a pre-trained model such as BERT to predict the syntactic structure. However, this method discards features contained in the signal that are important for syntactic prediction, such as prosody. Moreover, it has been shown that E2E speech parsing models perform better, despite having much fewer parameters than pipeline-based parsers [111].
**Downstream datasets.** We use the CEFC-ORFEO [112] dataset. This corpus is a collection of multiple sub-corpora [113; 114; 115; 116; 117; 118; 119; 120] annotated in syntactic dependencies in the _conll_ format. This corpus contains a wide variety of speech situations, such as store owner/customer interactions in a cheese shop, or informal conversations between friends. We removed the TCOF sub-corpus from the dataset, as it was included in the pretraining data for the _LeBenchmark 2.0_ models. The Orfeo treebank contains both human-annotated syntactic trees and automatically annotated syntactic trees [121], henceforth referred to as silver data. Partitions are built such that the dev and test sets only contain gold trees, i.e. all silver trees are in the training set.
**Downstream models and hyperparameters.** The downstream model is wav2tree [111]. This model is composed of an encoder module, an ASR module, and a syntax module. It performs joint syntactic analysis and automatic speech recognition in a multitasking manner. The encoder is composed of three fully connected layers of size 1024 in the
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & **Frozen** & **WER \%** & **CER \%** & **POS** & **UAS** & **LAS** \\ \hline \multirow{2}{*}{XLSR-53-_large_} & No & 44.57 & 26.77 & 66.40 & 61.20 & 54.69 \\ & Yes & 97.51 & 77.81 & — & — & — \\ \hline
1K-_large_ & No & 45.99 & 28.60 & 65.18 & 59.98 & 53.48 \\ & Yes & 99.82 & 93.80 & — & — & — \\ \hline
1K-_base_ & No & 51.26 & 31.32 & — & — & — \\ & Yes & 80.35 & 48.67 & — & — & — \\ \hline
3K-_large_ & No & 44.81 & 27.55 & 65.81 & 61.54 & 55.11 \\ & Yes & 99.94 & 88.33 & — & — & — \\ \hline
3K-_base_ & No & 100 & 88.40 & — & — & — \\ & Yes & 81.54 & 48.16 & — & — & — \\ \hline
7K-_large_ & **No** & **42.39** & **26.59** & **67.15** & **63.34** & **56.94** \\ & Yes & 99.81 & 78.59 & — & — & — \\ \hline
7K-_base_ & No & 100 & 88.40 & — & — & — \\ & Yes & 70.70 & 39.97 & — & — & — \\ \hline
14K-_light_ & No & 71.08 & 40.51 & — & — & — \\ & Yes & 76.56 & 46.28 & — & — & — \\ \hline
14K-_large_ & No & 43.04 & 27.12 & 66.71 & 62.72 & 56.45 \\ & Yes & 52.16 & 30.51 & — & — & — \\ \hline
14K-_xlarge_ & No & 44.82 & 28.00 & 65.09 & 61.17 & 54.61 \\ & Yes & 45.43 & 26.51 & 66.60 & 60.63 & 53.94 \\ \hline \hline \end{tabular}
\end{table}
Table 9: End-to-end results for the syntactic analysis task in terms of part-of-speech tagging accuracy (POS), unlabeled attachment score (UAS), and labeled attachment score (LAS) metrics (higher is better). Results are correlated with the speech recognition results expressed in CER and WER.
fine-tuning setting. In the frozen setting, the encoder is a two-layer bi-LSTM with a hidden size of 1024 and a dropout of 0.4 between the two layers. In wav2tree, the speech recognition module has two purposes: the first one is the normal speech recognition task, acquiring the transcriptions; the second one is the segmentation of the representation learned by wav2vec 2.0. The CTC scheme labels each representation with either a letter, a blank, or a whitespace. The wav2tree model uses the whitespace information to segment the representation. The syntax module is composed of two elements. The first one uses the segmentation from the ASR module to create word-level representations from the encoder representation via a 2-layer LSTM with a hidden size of 500. These word-level representations are then processed by a two-layer bi-LSTM with a hidden size of 800 and classified into three tasks in parallel, each with a simple linear layer. The first one predicts the part of speech (POS) of the word, the second one predicts the head of the current word (UAS), and the last one predicts the syntactic function (e.g. subject, dependent, ...), i.e. the relationship between the head and the dependent. Each model is trained with a batch size of 8, except for the fine-tuning of the xlarge model, which is trained with a batch size of 2 and a gradient accumulation of 4 in order to maintain the comparability of results. The ASR module is trained with a CTC loss and the classification tasks are trained with a negative log-likelihood loss. The optimizer is AdaDelta [122] with a learning rate of 1 and \(\rho\) of 0.95. Each model is first optimized on the word error rate, and once it decreases below a threshold of 50%, the training activates the syntax learning and is then optimized on the LAS metric.
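The curriculum used to activate the syntax losses can be sketched with the small helper below (assumed control logic, not the wav2tree code itself).

```python
# Assumed sketch of the wav2tree training curriculum: syntax losses are only added to
# the CTC objective once the word error rate drops below the 50% threshold.
import torch

def wav2tree_loss(ctc_loss: torch.Tensor, pos_loss: torch.Tensor, uas_loss: torch.Tensor,
                  las_loss: torch.Tensor, current_wer: float,
                  wer_threshold: float = 50.0) -> torch.Tensor:
    loss = ctc_loss
    if current_wer < wer_threshold:  # syntax learning is switched on
        loss = loss + pos_loss + uas_loss + las_loss
    return loss
```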
**Parsing evaluation metrics.** Three metrics are used for the parsing task: the f-measure for part-of-speech tagging (POS); the unlabeled attachment score (UAS), which is the accuracy of the syntactic link, i.e. whether the word is correctly attached to its head; and the labeled attachment score (LAS), which extends the UAS metric by also taking into account the nature of the link (root, subject, ...).
**Results analysis and discussion.** All results for the syntactic analysis task are depicted in Table 9. The parsing results are heavily correlated with the speech recognition metrics. This is an expected behavior, since a correct tree cannot be produced if some words are missing. The WER and CER clearly reflect the inherent difficulty of the dataset: with a similar architecture, a WER of 10% is obtained on CommonVoice 6.1 [123]. This difficulty implies that none of the base models can get good enough results to start learning the parsing task. All the models also need to be fine-tuned on the dataset, with the notable exception of the 14K-xlarge, suggesting that the pre-trained representation of this model is general enough to fit more out-of-domain data like the CEFC-Orfeo dataset.
We observe that the quantity of data used to pre-train the model is important and seems to follow the classic machine learning paradigm that more data and scale are better. However, the best model for this task is the 7K-large and not the 14K-large or xlarge. Our hypothesis is that this model is trained on a more balanced distribution of speech types (read, prepared and spontaneous), thus being more suited to learning good representations for spontaneous speech. Another interesting fact is that the 14K-large outperforms the 14K-xlarge. This may simply be because the bigger model needs more data, whereas the smaller one is more easily tunable to the downstream dataset. One of the most surprising results is the one on the 3K-base and 7K-base, where the models perform better without fine-tuning. We also compare to the multilingual XLSR model. The multilingual model has slightly worse performance compared to most of our models, but exhibits similar properties, such as the need to fine-tune it on the downstream corpus to reach good performance.
### Automatic Speaker Verification
Automatic Speaker Verification (ASV) refers to the task of verifying the identity claimed by a speaker from that person's voice [124]. In ASV, deep neural networks have brought about significant advancements in voice representations, outperforming the previous state-of-the-art \(i\)-vector framework [125]. One of these DNN approaches seeks to extract a high-level speaker representation, known as a _speaker embedding_, directly from acoustic excerpts. To achieve this, DNN models are trained through a speaker identification task, where speech segments are classified into distinct speaker identities. Each layer of the DNN is trained to extract relevant information for discriminating between different speakers, and one of the hidden layers is used as the speaker embedding. One of the main advantages is that speaker embeddings produced by the DNN can generalize well to speakers beyond those present in the training set. The benefits of speaker embeddings, in terms of speaker detection accuracy, have been demonstrated during the last evaluation campaigns: NIST SRE [126; 127; 128] and VoxCeleb 2020 [129; 130; 131].
Recently, much progress has been achieved in ASV through the utilization of SSL models. In [16], the authors propose a novel approach: instead of exclusively using the representations from the final layer of the SSL model, they employ a weighted average of the representations from all hidden layers of the SSL model. This approach allows for harnessing speaker-related information embedded throughout the entire SSL model.
**Downstream datasets.** For evaluation, we used the Fabiole dataset [132]. Fabiole is a French speaker verification dataset collected to highlight the importance of the "speaker factor" in forensic voice comparison. It contains a total of 7,372 segments from 130 male native French speakers. Due to the absence of an established evaluation protocol, we created one ourselves. We removed all the segments with less than 2 seconds of voice and those exceeding 12 seconds of voice. Then, we randomly selected 300,000 target pairs (i.e. enrollment and test segments from the same speaker) and 300,000 non-target pairs (i.e. enrollment and test segments from different speakers). We trained the systems using the ESTER-1 [133], ESTER-2 [134], ETAPE [66] and REPERE [135] training datasets (corresponding to 2,911 speakers and more than 250 hours of data). Voice Activity Detection (VAD) was not applied to the training datasets. Additionally, we applied data augmentation during training by incorporating noise from the MUSAN dataset and reverberation using the RIR dataset [136]. The Equal Error Rate (EER) and the Detection Cost Function (DCF) are used as the performance criteria for ASV. The EER is the threshold value such that the false acceptance rate and the miss rate are equal, whereas the DCF is defined as a weighted sum:
\[C_{det}=C_{Miss}\times P_{Miss|Target}\times P_{Target}+C_{FalseAlarm}\times P_{FalseAlarm|NonTarget}\times P_{NonTarget}, \tag{1}\]
with the prior probabilities \(P_{Target}\) and \(P_{NonTarget}=1-P_{Target}\) of target and impostor speakers, respectively. The relative costs of detection errors in this function are the costs of miss \(C_{Miss}\) and false alarm errors \(C_{FalseAlarm}\). These parameters were set as follows: \(P_{Target}=0.01\) (or \(P_{Target}=0.001\)), \(C_{Miss}=1\) and \(C_{FalseAlarm}=1\).
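For concreteness, here is a hedged sketch (ours, not the official scoring tool) of how the EER and the minimum of the cost function of Eq. (1) can be estimated from raw trial scores; the synthetic scores are only for illustration.

```python
# Sketch (ours): EER and minimum detection cost function from raw trial scores.
import numpy as np

def eer_and_min_dcf(scores, labels, p_target=0.01, c_miss=1.0, c_fa=1.0):
    """scores: similarity scores; labels: 1 for target trials, 0 for non-target."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    thresholds = np.unique(scores)
    p_miss = np.array([(scores[labels == 1] < t).mean() for t in thresholds])
    p_fa = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(p_miss - p_fa))        # EER: where miss and false-alarm rates cross
    eer = 0.5 * (p_miss[i] + p_fa[i])
    dcf = c_miss * p_miss * p_target + c_fa * p_fa * (1.0 - p_target)   # Eq. (1)
    return eer, dcf.min()

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1, 1, 1000), rng.normal(-1, 1, 1000)])
labels = np.concatenate([np.ones(1000, int), np.zeros(1000, int)])
print(eer_and_min_dcf(scores, labels))          # synthetic example, EER around 0.15
```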
**Downstream models and hyperparameters.** We use the ECAPA-TDNN classifier [137], which relies on cutting-edge techniques: Multilayer Feature Aggregation (MFA), Squeeze-Excitation (SE) and residual blocks. This classifier, when combined with SSL models, has demonstrated impressive performance in ASV [138]. Our ECAPA-TDNN has the following parameters: the number of SE-Res2Net blocks is set to 3 with dilation values 2, 3 and 4, the number of filters in the convolutional frame layers is set to 512, and the embedding layer size is set to 256. The training lasted for 8 epochs with the _AdamW_ optimizer. We trained all the models with Circle Margin Loss and set the margin to 0.35. During the training process, we randomly sampled 3s segments from each utterance to construct a training batch.
| **Representation** | **EER** | **minDCF\({}^{-10}\)** | **minDCF\({}^{-100}\)** |
| --- | --- | --- | --- |
| XLSR-53-large | 6.68 | 0.492 | 0.677 |
| 1K-base | 8.27 | 0.556 | 0.722 |
| 1K-large | 6.75 | 0.508 | 0.705 |
| 3K-base | 4.82 | 0.374 | 0.567 |
| 3K-large | 5.06 | 0.374 | 0.521 |
| 7K-base | 4.73 | 0.364 | 0.538 |
| 7K-large | 5.23 | 0.383 | 0.575 |
| 14K-large | **3.54** | **0.297** | **0.480** |

Table 10: Results for the downstream task of speaker verification on the Fabiole corpus. Performance is expressed in terms of Equal Error Rate (EER, lower is better) and minimum of the Detection Cost Function (minDCF, lower is better).
We remind that the ECAPA-TDNN takes as input a weighted average of the representations from all hidden layers of the SSL model.
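The layer-weighting mechanism can be sketched as follows; this is a simplified illustration rather than the exact _LeBenchmark_ implementation, and the tensor shapes and softmax normalization are assumptions.

```python
# Simplified sketch (ours) of the learnable weighted average over SSL layers
# that is fed to the ECAPA-TDNN speaker classifier.
import torch
import torch.nn as nn

class WeightedLayerSum(nn.Module):
    """One scalar weight per SSL hidden layer; returns their weighted average."""
    def __init__(self, num_layers: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states):          # list of (batch, time, dim) tensors
        stacked = torch.stack(hidden_states)   # (num_layers, batch, time, dim)
        w = torch.softmax(self.weights, dim=0) # normalized weights, as visualized in Fig. 1
        return (w.view(-1, 1, 1, 1) * stacked).sum(dim=0)

# Toy usage with 12 layers of 768-dim features (a base-sized model).
feats = [torch.randn(8, 200, 768) for _ in range(12)]
print(WeightedLayerSum(12)(feats).shape)       # torch.Size([8, 200, 768])
```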
**Results analysis and discussions.** All results for the ASV task are reported in Table 10. XLSR-53 was used as a baseline. The findings can be summarized as follows:
* **Monolingual versus Multilingual.** We observed that, except for the 1K models, systems trained on monolingual models (i.e. _LeBenchmark_) achieved better performance than the multilingual model (XLSR-53). The best monolingual model (_LeBenchmark_ -14K-large) obtained 3.54% EER, while the multilingual model (XLSR-53-large) obtained 6.68% EER.
* **Pre-training data.** Focusing on monolingual models, we observed a link between performance and the quantity of pre-training data: LeBenchmark models pre-trained on larger speech datasets tend to provide better performance. Indeed, the model trained on 1,000 hours of data (LeBenchmark-1K-large) obtained 6.75% EER, while the model trained on 14,000 hours of data (LeBenchmark-14K-large) obtained 3.54% EER.
* **Model size.** Still focusing on monolingual models, we observed that, except for the LeBenchmark-1K models, base models tend to obtain better performance than larger models for the same amount of pre-training data. Figure 1 shows the contribution of each layer of various SSL models. We recall that LeBenchmark base models contain 12 layers, whereas LeBenchmark large models contain 24 layers. In general, we observe that speaker-related information is most pronounced in the first (lower) layers of the SSL model. Even though this information is concentrated in the lower layers, we notice that for LeBenchmark base models, all layers contribute to the construction of the representations. In contrast, for LeBenchmark large models, the higher layers contribute less than the lower layers.
### Summary of results
Table 11 summarizes the best models for each evaluated task and setup according to our experiments. Overall, the 7K and the newly introduced 14K models perform the best across the whole benchmark. In all cases, large models obtain better performance. However, the 14K-xlarge underperforms compared to the smaller 14K-large on syntactic analysis and speech recognition. The smaller models, including the base and light ones, simply offer the worst performance while still being acceptable. In all cases, _LeBenchmark 2.0_ models reach a higher level of performance than the multilingual XLSR large and xlarge systems.
Figure 1: The visualization of the normalized weight values in the proposed architecture. Each weight can be interpreted as the relevance of a given layer for the ASV task. Earlier layers are particularly relevant for ASV.
## 6 Carbon Footprint
This section gives an overview of the carbon footprint of the SSL pre-training. The fine-tuning footprint is omitted, as fine-tuning was performed on many different and heterogeneous platforms, making it impossible to compute proper estimates.
The carbon footprint of each model may be estimated following the protocol defined by T. Parcollet et al. [139]. In practice, it is a function of the PUE of the compute infrastructure, the total energy consumed, and the carbon emission rate of the energy grid. The Jean-Zay supercomputer is energy efficient with a PUE of 0.65 (including heat transfer to nearby infrastructures). We only consider the energy consumed by the GPU and CPU. Power draw measurements are taken every five seconds and then averaged. France's carbon rate is 52 gCO2/kWh [140].
Table 11 reports the total energy consumed as well as the estimated carbon emissions of all _LeBenchmark 2.0_ models. First, it is quite clear that the carbon footprint of training large SSL models is far from negligible, even in countries with a relatively clean energy mix. As supported by previous research, the location of the training must be considered a critical design choice when starting such a project, as the total CO\({}_{2}\) emissions can easily be multiplied by four to eight if gas and oil are integrated into the mix. Finally, one may wonder if the extra kWhs thrown at the model are worth it, given the relatively small downstream performance improvement between the 3K-large, 7K-large, and 7K-xlarge, in contrast to the energy consumption being multiplied by a factor of 3.5.
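As a rough sanity check (ours, not part of the protocol of [139]), the CO\({}_{2}\) column of Table 11 can be recovered from its energy column and the 52 gCO\({}_{2}\)/kWh carbon intensity quoted above; whether the reported kWh figures already include the PUE factor is our assumption.

```python
# Minimal sketch (ours): recovering the CO2 column of Table 11 from its energy column.
CARBON_G_PER_KWH = 52.0

def co2_kg(energy_kwh, carbon_g_per_kwh=CARBON_G_PER_KWH):
    return energy_kwh * carbon_g_per_kwh / 1000.0

for model, kwh in [("7K-large", 4501.0), ("14K-large", 8371.2), ("14K-xlarge", 16511.2)]:
    print(f"{model}: {co2_kg(kwh):.0f} kg CO2")  # ~234, 435, 859 kg, matching Table 11
```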
## 7 Conclusion
_LeBenchmark 2.0_ establishes new foundations for the development of French SSL-equipped speech technologies. Following the three steps of the lifecycle of any SSL model, we gathered and documented the largest available collection of unlabeled French speech for SSL pre-training, we trained three new pre-trained SSL models for a total of 10 available checkpoints, and we evaluated them on two new tasks increasing the total number of tasks to six. _LeBenchmark 2.0_ models are shared with the community via the HuggingFace Hub.
## 8 Acknowledgements
This work was performed using HPC resources from GENCI-IDRIS (Grant 2020-A0091012047). This work benefited from the 'Grand Challenge Jean Zay' program and was also partially supported by MIAI@Grenoble-Alpes (ANR-19-P3IA-0003). This paper was also partially funded by the European Commission through the SELMA project under grant number 957017, and UTTER project under grant number 101070631.
| **Model** | **Train. Time** _(hours)_ | **GPUs** | **Energy** _(kWh)_ | **CO\({}_{2}\)** _(kg)_ |
| --- | --- | --- | --- | --- |
| 1K-_base_ | 250 | 4 Tesla V100 | 195.0 | 10.5 |
| 1K-_large_ | 925 | 4 Tesla V100 | 721.5 | 37.5 |
| 2.7K-_base_ | 128 | 32 Tesla V100 | 682.2 | 35.4 |
| 3K-_base_ | 128 | 32 Tesla V100 | 682.2 | 35.4 |
| 3K-_large_ | 341 | 32 Tesla V100 | 1,817.5 | 94.5 |
| 7K-_base_ | 123 | 64 Tesla V100 | 1,535.0 | 79.8 |
| 7K-_large_ | 211 | 64 Tesla V100 | 4,501.0 | 234 |
| 14K-_light_ | 156 | 32 Tesla V100 | 1,497.6 | 77.8 |
| 14K-_large_ | 436 | 64 Tesla V100 | 8,371.2 | 435 |
| 14K-_xlarge_ | 525 | 104 Tesla A100 | 16,511.2 | 859 |
Table 11: Summary of the best SSL models for each downstream task (left table) as well as estimates of the energy in kilowatt hour (kwH) and CO\({}_{2}\) equivalent in kilogram produced by the training of the _LeBenchmark 2.0_ models (right table). |
2308.00145 | Exponential decay of the critical points in a discrete model of
polyacetylene | In this paper we consider stationary states of the SSH model for infinite
polyacetylene chains that are homoclinic or heteroclinic connections between
two-periodic dimerized states. We prove that such connections converge
exponentially fast to the corresponding asymptotic periodic states. | David Gontier, Adechola E. K. Kouande, Éric Séré | 2023-07-31T20:27:25Z | http://arxiv.org/abs/2308.00145v1 | # Exponential decay of the critical points in a discrete model of polyacetylene
###### Abstract.
In this paper we consider stationary states of the SSH model for infinite polyacetylene chains that are homoclinic or heteroclinic connections between two-periodic dimerized states. We prove that such connections converge exponentially fast to the corresponding asymptotic periodic states.
David Gontier: CEREMADE, Universite Paris-Dauphine, PSL University,75016 Paris, France Email: [email protected]
###### Contents
* 1 Introduction
* 2 Critical points for the infinite SSH model, and main result
* 2.1 The SSH model
* 2.2 The SSH energy difference
* 2.3 Critical points for the infinite SSH model, and main result
* 2.4 Strategy of the proof
* 3 Smoothness and Taylor expansion of the map \(\mathcal{F}_{\mathbf{t}}\)
* 3.1 The spectrum of homoclinic and heteroclinic configurations
* 3.2 Taylor expansion of the map \(\mathcal{F}_{\mathbf{t}}\)
* 4 Proofs of the Lemmas
* 4.1 Proof of Lemma 2.5
* 4.2 Proof of Lemma 2.6
* 4.3 Proof of Lemma 2.7
* 4.4 Coercivity of the Hessian at the dimerized configuration
## 1. Introduction
The goal of this article is to prove an exponential decay property for critical points in the SSH model. This model was introduced by Su, Schrieffer and Heeger to describe polyacetylene, which is a long one-dimensional chain of carbon (and hydrogen) atoms. In this model, the chain can lower its total energy by dimerizing. This physical phenomenon was first predicted by Peierls [12] (see also [1]) and is now known as the Peierls distortion or dimerization.
Actually, Kennedy and Lieb [6], and Lieb and Nachtergaele [9] proved that the minimizers of the SSH energy associated with closed polyacetylene chains with an even number \(L\) of carbon atoms are always \(2\)-periodic. When \(L=2\) mod \(4\), or when \(L=0\) mod \(4\) is large enough, these minimizers are dimerized, in the sense that they are \(2\)-periodic but not \(1\)-periodic. In this situation, there are two distinct dimerized minimizers, denoted below by \(\mathbf{t}^{+}\) and \(\mathbf{t}^{-}\), which are deduced from one another by a translation of one site.

## 2. Critical points for the infinite SSH model, and main result

### The SSH model

In the SSH model, a chain of atoms is described by a family of positive hopping amplitudes: \(t_{n}>0\) is the amplitude between the \(n\)-th and the \((n+1)\)-th atoms. We write \(\mathbf{t}^{\pm}\) for the two dimerized configurations \(t_{n}^{\pm}:=W\pm(-1)^{n}\delta\), where \(W>\delta>0\). A configuration \(\mathbf{t}:\mathbb{Z}\to\mathbb{R}^{+}\) is called homoclinic if it converges to \(\mathbf{t}^{+}\) at both \(\pm\infty\), and heteroclinic if it converges to \(\mathbf{t}^{+}\) at \(+\infty\) and to \(\mathbf{t}^{-}\) at \(-\infty\). For a closed chain with \(L\) atoms and configuration \(\mathbf{t}=(t_{1},\ldots,t_{L})\), the SSH energy reads

\[\mathcal{E}^{(L)}(\mathbf{t}):=\frac{\mu}{2}\sum_{n=1}^{L}\left(t_{n}-1\right)^{2}-2\mathrm{Tr}\left(T_{-}\right), \tag{4}\]
where \(T=T(\{\mathbf{t}\})\) is the \(L\times L\) hermitian matrix
\[T=T(\{\mathbf{t}\}):=\begin{pmatrix}0&t_{1}&0&0&\cdots&t_{L}\\ t_{1}&0&t_{2}&\cdots&0&0\\ 0&t_{2}&0&t_{3}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&t_{L-2}&0&t_{L-1}\\ t_{L}&0&\cdots&0&t_{L-1}&0\end{pmatrix}, \tag{5}\]
and where we set \(T_{-}=-T\mathds{1}(T<0)\). The first term in (4) is the distortion energy of the atoms: this energy depends quadratically on the distances \(d_{n}\) between successive atoms, but these distances are themselves affine functions of the amplitudes \(t_{n}\). The parameter \(\mu>0\) is the rigidity of the chain, and our units are such that the jump amplitude between two atoms is \(1\) when their distortion energy is minimal. The second term in (4) models the electronic energy of the valence electrons under the Hamiltonian \(T\). It results from the identity
\[\min_{0\leq\gamma=\gamma^{*}\leq 1}2\mathrm{Tr}\left(T\gamma\right)=2\mathrm{Tr}\left(T\mathds{1}(T<0)\right)=-2\mathrm{Tr}(T_{-}).\]
The minimization on the left-hand side is over all one-body density matrices \(\gamma\) representing non-interacting electrons. The condition \(0\leq\gamma\leq 1\) is the Pauli principle, and the factor \(2\) accounts for the spin.
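This identity is easily checked numerically; the following sketch (ours) verifies it on a random symmetric matrix, together with the fact that any admissible \(\gamma\) that commutes with \(T\) gives a larger energy.

```python
# Small numerical illustration (ours): the minimum is attained at gamma = 1(T < 0)
# and equals -2 Tr(T_-).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
T = A + A.T                                   # a random symmetric "hopping" matrix
w, v = np.linalg.eigh(T)

gamma_opt = v[:, w < 0] @ v[:, w < 0].T       # spectral projector 1(T < 0)
lhs = 2 * np.trace(T @ gamma_opt)
tr_T_minus = (-w[w < 0]).sum()                # Tr(T_-) = sum of |negative eigenvalues|
print(np.isclose(lhs, -2 * tr_T_minus))       # True

# An admissible gamma with arbitrary occupations in [0, 1] gives a larger energy.
f = rng.uniform(0, 1, 6)
gamma = v @ np.diag(f) @ v.T
print(2 * np.trace(T @ gamma) >= lhs - 1e-12)  # True
```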
### The SSH energy difference
Let us fix a configuration \(\mathbf{t}\), and consider the energy difference functional \(\mathcal{F}_{\mathbf{t}}^{(L)}\), defined by
\[\mathcal{F}_{\mathbf{t}}^{(L)}(\mathbf{h}):=\mathcal{E}^{(L)}(\mathbf{t}+ \mathbf{h})-\mathcal{E}^{(L)}(\mathbf{t})=\frac{\mu}{2}\sum_{n=1}^{L}(h_{n}+2t _{n}-2)h_{n}-2\mathrm{Tr}((T+H)_{-}-T_{-}), \tag{6}\]
where \(T=T(\{\mathbf{t}\})\) and \(H=T(\{\mathbf{h}\})\) are the hermitian matrices constructed from \(\{\mathbf{t}\}\) and \(\{\mathbf{h}\}\) respectively. Clearly, \(\{\mathbf{t}\}\) is a critical point of \(\mathcal{E}^{(L)}\) iff \(\{\mathbf{0}\}\) is a critical point of \(\mathcal{F}_{\mathbf{t}}^{(L)}\). We have subtracted the quantity \(\mathcal{E}^{(L)}(\mathbf{t})\) in order to have a finite energy difference in the limit \(L\to\infty\). Actually, Eqn. (6) admits a clear analogue as \(L\to\infty\): for two bounded sequences \(\mathbf{t}:\mathbb{Z}\to\mathbb{R}^{+}\) and \(\mathbf{h}:\mathbb{Z}\to\mathbb{R}\), assuming that \(\mathbf{h}\in\ell^{1}(\mathbb{Z},\mathbb{R})\) and that \((T+H)_{-}-T_{-}\) is trace-class as an operator acting on \(\ell^{2}(\mathbb{Z},\mathbb{C})\), we set
\[\boxed{\mathcal{F}_{\mathbf{t}}(\mathbf{h}):=\frac{\mu}{2}\sum_{n\in \mathbb{Z}}(h_{n}+2t_{n}-2)h_{n}-2\mathrm{Tr}((T+H)_{-}-T_{-})}. \tag{7}\]
Now, the operator \(T:=T(\{\mathbf{t}\})\) (and similarly for \(T+H\)) is acting on the infinite dimensional Hilbert space \(\ell^{2}(\mathbb{Z},\mathbb{C})\), whose coefficients in the canonical basis are
\[\forall n\in\mathbb{Z},\quad T_{n,n+1}=T_{n+1,n}=t_{n},\qquad T_{i,j}=0\quad \text{if }|i-j|\neq 1.\]
In what follows, we denote by bold letters \(\mathbf{a},\mathbf{t},\mathbf{h},\mathbf{u},...\) sequences from \(\mathbb{Z}\) to \(\mathbb{R}\), and by capital letters \(A,T,H,U,...\) the corresponding operators acting on \(\ell^{2}(\mathbb{Z})\).
The fact that the map \(\mathcal{F}_{\mathbf{t}}\) is well defined when \(\mathbf{t}\) is a homoclinic or heteroclinic configuration is given in the next two lemmas (see Section 3.1 for the proof).
**Lemma 2.1**.: _Let \(\{\mathbf{t}\}\) be a homoclinic or heteroclinic configuration such that \(\mathbf{t}\geq\tau\) for some \(\tau>0\). Then there is a positively oriented contour \(\mathscr{C}\) in the complex plane, a constant \(C\geq 0\) and a constant \(\eta>0\) so that, for all \(\{\mathbf{h}\}\) with \(\|\mathbf{h}\|_{\ell^{\infty}}\leq\eta\) and for all \(z\in\mathscr{C}\), the operator \((z-(T+H))\) is invertible with \(\|(z-(T+H))^{-1}\|_{\mathrm{op}}\leq C\). In addition,_
\[-(T+H)_{-}=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{z}{z-(T+H)} \mathrm{d}z.\]
The contour \(\mathscr{C}\) is independent of \(\mathbf{h}\), but depends on \(\mathbf{t}\). This Lemma might be surprising, as the energy \(0\) can be in the spectrum of \(T+H\). Actually, we will prove the following:
* If \(\{\mathbf{t}\}\) is a homoclinic configuration with \(\mathbf{t}\geq\tau>0\), then \(0\) is never in the spectrum of \(T+H\), for \(\mathbf{h}\) small enough.
* If \(\{\mathbf{t}\}\) is a heteroclinic configuration with \(\mathbf{t}\geq\tau>0\), then \(0\) is always an isolated eigenvalue of \(T+H\) of multiplicity \(1\), for all \(\mathbf{h}\) small enough.
In both cases, one can choose a contour \(\mathscr{C}\) of the form (see Figure 1)
\[\mathscr{C}:=(\Sigma+\mathrm{i})\to(\Sigma-\mathrm{i})\to(-g/2-\mathrm{i})\to( -g/2+\mathrm{i})\to(\Sigma+\mathrm{i}), \tag{8}\]
where \(\Sigma\) is a negative enough number, and where \(g=\mathrm{dist}(0,\sigma(T)\setminus\{0\})\) is the distance between \(0\) and the (rest of the) spectrum.
In the heteroclinic situation, \(0\) is a stable (or topologically protected) eigenvalue: it is unperturbed by the addition of \(H\). Actually, any \(T\) matrix coming from a heteroclinic configuration can be seen as a junction between two SSH chains with different indices [16, 5, 7].
This Lemma allows to prove that \(\mathcal{F}_{\mathbf{t}}\) is well-defined and smooth around \(\{\mathbf{0}\}\). We refer to Section 3.2 for the proof of the following result.
**Lemma 2.2**.: _Let \(\{\mathbf{t}\}\) be a homoclinic or heteroclinic configuration with \(\mathbf{t}\geq\tau\) for some \(\tau>0\), and let \(\eta>0\) and \(\mathscr{C}\) be a contour as in Lemma 2.1. The map \(\mathbf{h}\mapsto\mathcal{F}_{\mathbf{t}}(\mathbf{h})\) is \(C^{\infty}\) on \(\{\mathbf{h},\|\mathbf{h}\|_{\ell_{1}}\leq\eta\}\). In addition, there is \(C\geq 0\) so that, for all \(\{\mathbf{h}\}\) with \(\|\mathbf{h}\|_{\ell^{1}}<\eta\), we have_
\[\left|\mathcal{F}_{\mathbf{t}}(\mathbf{h})-L_{\mathbf{t}}(\mathbf{h})-\frac{1 }{2}H_{\mathbf{t}}(\mathbf{h},\mathbf{h})\right|\leq C\|\mathbf{h}\|_{\ell^{2} }^{3},\]
_where \(L_{\mathbf{t}}\) (differential of \(\mathcal{F}_{\mathbf{t}}\)) is the continuous linear form on \(\ell^{1}(\mathbb{Z})\) defined by (we set \(\Gamma_{\mathbf{t}}:=\mathds{1}(T<0)\) the spectral projector of \(T\) on \(\mathbb{R}^{-}\))_
\[L_{\mathbf{t}}(\mathbf{h}):=\mu\sum_{n\in\mathbb{Z}}(t_{n}-1)h_{n}+2\mathrm{Tr }\left(\Gamma_{\mathbf{t}}H\right),\]
_and \(H_{\mathbf{t}}\) (hessian of \(\mathcal{F}_{\mathbf{t}}\)) is the bilinear form on \(\ell^{1}(\mathbb{Z})\) defined by_
\[H_{\mathbf{t}}(\mathbf{h},\mathbf{k}):=\mu\sum_{n\in\mathbb{Z}}h_{n}k_{n}+2 \mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}H\frac{1}{z-T}K \frac{1}{z-T}\mathrm{d}z\right). \tag{9}\]
_In addition, the bilinear map \(H_{\mathbf{t}}\) can be extended continuously as a bilinear map on \(\ell^{2}(\mathbb{Z})\)._
### Critical points for the infinite SSH model, and main result
We can now define the notion of critical points for the infinite SSH model.
**Definition 2.3**.: _Let \(\{\mathbf{t}\}\) be a homoclinic or heteroclinic configuration such that \(\mathbf{t}\geq\tau\) for some \(\tau>0\). We say that \(\{\mathbf{t}\}\) is a critical point if \(L_{\mathbf{t}}\) is the null map. Equivalently, using that_
\[\mathrm{Tr}(\Gamma_{\mathbf{t}}H)=\sum_{n\in\mathbb{Z}}h_{n}\left[(\Gamma_{ \mathbf{t}})_{n+1,n}+(\Gamma_{\mathbf{t}})_{n,n+1}\right]=2\sum_{n\in\mathbb{Z }}h_{n}\left(\Gamma_{\mathbf{t}}\right)_{n,n+1},\]
_the configuration \(\mathbf{t}\) is a critical point if_
\[\forall n\in\mathbb{Z},\ \ \ \boxed{t_{n}=1-\frac{4}{\mu}\left(\Gamma_{ \mathbf{t}}\right)_{n,n+1}}\,. \tag{10}\]
Figure 1. Contours used for the Cauchy integral, for a homoclinic configuration (Left), and a heteroclinic configuration (Right). The main difference is that \(0\) is in the spectrum in the heteroclinic case. We prove below that \(\sigma_{\mathrm{ess}}(T)=[-2W,-2\delta]\cup[2\delta,2W]\), and that the spectrum of \(T\) is symmetric with respect to \(0\).
We implicitly used that \(\Gamma\) is symmetric and real-valued. With this definition, the dimerized configuration \(\mathbf{t}^{+}\) is a homoclinic critical point. The kink state constructed in [3] is a heteroclinic critical point. Now we can provide our main result, which states that all homoclinic or heteroclinic critical points of \(\mathcal{F}_{\mathbf{t}}\) converge exponentially fast to \(\mathbf{t}^{+}\) at \(+\infty\).
**Theorem 2.4**.: _Let \(\{\mathbf{t}\}\) be a homoclinic or heteroclinic critical point, and let \(\{\mathbf{u}\}\) be the sequence \(u_{n}:=t_{n}-t_{n}^{+}\). If \(\mathbf{u}\) is square integrable at \(+\infty\) (\(\mathbf{u}\in\ell^{2}(\mathbb{Z}^{+})\)), then \(\mathbf{u}\) is exponentially localized at \(+\infty\): there is \(C\geq 0\) and \(\alpha>0\) so that_
\[|u_{n}|\leq C\mathrm{e}^{-\alpha n}.\]
Of course, the same applies in the \(-\infty\) direction, and we have exponential convergence to \(\mathbf{t}^{+}\) or \(\mathbf{t}^{-}\) at \(-\infty\) depending whether the critical configuration is homoclinic or heteroclinic.
We note that there exist critical points (that is configurations satisfying (10)), which do not converge to \(\mathbf{t}^{\pm}\) at infinity. For instance, in [2, 3], the authors show the existence of kink-like solutions for a closed chain with an odd number of atoms (see also Figure 2). This solution satisfies the critical point equation (10), but, seeing the closed chain as a periodic configuration, it does not converge to \(\mathbf{t}^{\pm}\) at infinity.
Note that this exponential localization was already known for the exactly soluble continuum model of Takayama, Lin-Liu and Maki [17].
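As a purely numerical illustration (not used anywhere in the proofs), the critical-point equation (10) can be solved on a closed chain by a damped fixed-point iteration; the values of \(\mu\), \(L\) and the damping below are arbitrary, and this naive scheme is not guaranteed to converge to the dimerized minimizer.

```python
# Damped fixed-point iteration for the critical-point equation (10) on a closed
# chain of L atoms (illustration only; parameter values are arbitrary choices).
import numpy as np

def hopping_matrix(t):
    L = len(t)
    T = np.zeros((L, L))
    for n in range(L):
        T[n, (n + 1) % L] = T[(n + 1) % L, n] = t[n]   # matrix (5), closed chain
    return T

def critical_point(L=100, mu=1.0, n_iter=2000, damping=0.5, seed=0):
    rng = np.random.default_rng(seed)
    t = 1.0 + 0.1 * rng.standard_normal(L)             # random initial configuration
    for _ in range(n_iter):
        w, v = np.linalg.eigh(hopping_matrix(t))
        occ = v[:, w < 0]                               # eigenvectors of negative energy
        gamma = np.array([occ[n] @ occ[(n + 1) % L] for n in range(L)])  # (Gamma_t)_{n,n+1}
        t = (1 - damping) * t + damping * (1.0 - 4.0 / mu * gamma)       # update via Eq. (10)
    return t

t = critical_point()
print(t[:6])   # a dimerized solution alternates between two values; this naive
               # scheme is not guaranteed to reach the minimizer
```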
### Strategy of the proof
Let us briefly explain the strategy to prove Theorem 2.4. We break the proof in several Lemmas, that we prove later in Section 4.
Let \(\{\mathbf{t}\}\) be a homoclinic or heteroclinic critical point, and let \(\{\mathbf{u}\}\) be the sequence \(u_{n}:=t_{n}-t_{n}^{+}\), so that \(T=T^{+}+U\). The configurations \(\{\mathbf{t}\}\) and \(\{\mathbf{t}^{+}\}\) are critical points, hence satisfy the Euler-Lagrange equations
\[t_{n}=1-\frac{4}{\mu}\Gamma_{n,n+1},\qquad t_{n}^{+}=1-\frac{4}{\mu}\Gamma_{n,n+1}^{+},\]
with \(\Gamma:=\Gamma_{\mathbf{t}}=\mathds{1}(T^{+}+U<0)\), and \(\Gamma^{+}:=\mathds{1}(T^{+}<0)\). According to Lemma 2.1, the expression of \(\Gamma\) and \(\Gamma_{\mathbf{t}}\) can be written using the Cauchy's residual formula using the _same_ contour \(\mathscr{C}\), that is
\[\Gamma=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{\mathrm{d}z}{z-(T^{+ }+U)},\quad\text{and}\quad\Gamma^{+}=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{ C}}\frac{\mathrm{d}z}{z-T^{+}},\]
and where the operators in the integrand are uniformly bounded in \(z\in\mathscr{C}\). Since \(u_{n}=t_{n}-t_{n}^{+}\), we obtain (we use the resolvent formula in the last line)
\[u_{n} =\frac{4}{\mu}\left(\Gamma^{+}-\Gamma\right)_{n,n+1}=\frac{4}{\mu }\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\left[\frac{1}{z-T^{+}}- \frac{1}{z-(T^{+}+U)}\right]\mathrm{d}z\right)_{n,n+1}\] \[=\frac{-4}{\mu}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}} \left(\frac{1}{z-T^{+}}U\frac{1}{z-T^{+}}\right)\mathrm{d}z\right)_{n,n+1}+ \frac{1}{\mu}(\mathbf{Q}_{U}(U,U))_{n},\]
with the remainder term
\[(\mathbf{Q}_{U}(\mathbf{u}_{1},\mathbf{u}_{2}))_{n}=-4\left(\frac{1}{2 \mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T^{+}}U_{1}\frac{1}{z-(T^{+}+U)}U _{2}\frac{1}{z-T^{+}}\mathrm{d}z\right)_{n,n+1}.\]
Figure 2. A localized kink appears in the chain with \(L=101\) carbon atoms.
Multiplying by \(\mu\) and reordering the terms, this can be also written as
\[\forall n\in\mathbb{Z},\quad(\mathscr{L}\mathbf{u})_{n}=(\mathbf{Q}_{U}(\mathbf{u },\mathbf{u}))_{n}, \tag{11}\]
with the linear map
\[(\mathscr{L}\mathbf{u})_{n}=\mu u_{n}+4\left(\frac{1}{2\mathrm{i}\pi}\oint_{ \mathscr{C}}\left(\frac{1}{z-T^{+}}U\frac{1}{z-T^{+}}\right)\mathrm{d}z\right) _{n,n+1}. \tag{12}\]
Formally, if \(\mathbf{v}\) is another real sequence, with corresponding operator \(V\), we have
\[\langle\mathbf{v},(\mathcal{L}\mathbf{u})\rangle=\sum_{n\in\mathbb{Z}}v_{n}( \mathscr{L}\mathbf{u})_{n}=\mu\sum_{n\in\mathbb{Z}}v_{n}u_{n}+2\mathrm{Tr} \left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T^{+}}U\frac{1}{z- T^{+}}V\right)\mathrm{d}z \tag{13}\]
and we recognize the expression of the Hessian \(H_{\mathbf{t}^{+}}(\mathbf{v},\mathbf{u})\) in (9). Unfortunately, the previous computation is formal, since \(\mathbf{u}\) is not necessarily in \(\ell^{2}(\mathbb{Z})\). We only know that it is square integrable at \(+\infty\). Actually, for a heteroclinic configuration, we have \(\mathbf{u}\notin\ell^{2}(\mathbb{Z})\), since \(\mathbf{u}\) does not decay to \(0\) at \(-\infty\). In order to bypass this difficulty, we regularize \(\mathbf{u}\) using appropriate cut-off functions.
For \(\alpha>0\) and \(s\in\mathbb{Z}\) that we choose later (\(\alpha\) will be small, and \(s\) will be large), we introduce the function \(\theta_{\alpha,s}:\mathbb{Z}\to\mathbb{R}^{+}\) defined by
\[\theta_{\alpha,s}(n)=\min\{\mathrm{e}^{\alpha n},\mathrm{e}^{\alpha s}\}= \begin{cases}\mathrm{e}^{\alpha n},&\text{if }n<s\\ \mathrm{e}^{\alpha s},&\text{if }n\geq s\end{cases}, \tag{14}\]
and denote by \(\Theta_{\alpha,s}\) the multiplication operator by \(\theta_{\alpha,s}\), defined by \((\Theta_{\alpha,s})_{n,m}=\theta_{\alpha,s}(n)\delta_{n,m}\).
In what follows, we will consider the sequence \(\widetilde{\mathbf{u}}_{\alpha,s}\), defined by
\[(\widetilde{u}_{\alpha,s})_{n}:=\theta_{\alpha,s}(n)\theta_{\alpha,s}(n+1)u_{ n},\quad\text{with corresponding operator}\quad\widetilde{U}_{\alpha,s}=\Theta_{\alpha,s}U\Theta_{\alpha,s}.\]
Since \(\mathbf{u}\) is bounded and square integrable at \(+\infty\) the vector \(\widetilde{\mathbf{u}}_{\alpha,s}\) is in \(\ell^{2}(\mathbb{Z})\) for all \(\alpha>0\) and all \(s\in\mathbb{Z}\). We also introduce the operator \(\widetilde{T}^{+}_{\alpha,s}\) acting on \(\ell^{2}(\mathbb{Z})\), and defined in the canonical basis by
\[\forall n\in\mathbb{Z},\qquad\left(\widetilde{T}^{+}_{\alpha,s}\right)_{n,n+1 }:=\frac{\theta_{\alpha,s}(n)}{\theta_{\alpha,s}(n+1)}t^{+}_{n},\quad\left( \widetilde{T}^{+}_{\alpha,s}\right)_{n+1,n}:=\frac{\theta_{\alpha,s}(n+1)}{ \theta_{\alpha,s}(n)}t^{+}_{n},\]
and \(\left(\widetilde{T}^{+}_{\alpha,s}\right)_{i,j}=0\) if \(|i-j|\neq 1\). Note that \(\widetilde{T}^{+}_{\alpha,s}\) is not symmetric. Using that
\[\frac{\theta_{\alpha,s}(n)}{\theta_{\alpha,s}(n+1)}=\begin{cases}\mathrm{e}^{ -\alpha}\quad\text{if}\quad n<s\\ 1\quad\text{if}\quad n\geq s,\end{cases}\]
we see that \(\widetilde{T}^{+}_{\alpha,s}\) has the matrix form
\[\widetilde{T}^{+}_{\alpha,s}=\begin{pmatrix}\ddots&\ddots&\ddots&\ddots&\ddots &\ddots&\ddots\\ \ddots&0&t^{+}_{s-2}\mathrm{e}^{-\alpha}&0&0&0&\ddots\\ \ddots&t^{+}_{s-2}\mathrm{e}^{\alpha}&0&t^{+}_{s-1}\mathrm{e}^{-\alpha}&0&0& \ldots\\ \ddots&0&t^{+}_{s-1}\mathrm{e}^{\alpha}&0&t^{+}_{s}&0&\ddots\\ \ddots&0&0&t^{+}_{s}&0&t^{+}_{s+1}&\ddots\\ \ddots&0&0&0&t^{+}_{s+1}&0&\ddots\\ \ddots&\ddots&\ddots&\ddots&\ddots&\ddots&\ddots\\ \end{pmatrix}. \tag{15}\]
This operator is constructed to satisfy the following commutation relations (see Section 4.1 for the proof).
**Lemma 2.5**.: _The operator \(\widetilde{T}^{+}_{\alpha,s}\) satisfies_
\[\Theta_{\alpha,s}T^{+}=\widetilde{T}^{+}_{\alpha,s}\Theta_{\alpha,s}\quad\text{ and}\quad T^{+}\Theta_{\alpha,s}=\Theta_{\alpha,s}\left(\widetilde{T}^{+}_{\alpha,s}\right)^{*}. \tag{16}\]
_There is \(\alpha^{*}>0\) and \(C\geq 0\) so that, for all \(0\leq\alpha<\alpha_{*}\), all \(s\in\mathbb{Z}\), and all \(z\in\mathscr{C}\), the operators \(z-\widetilde{T}^{+}_{\alpha,s}\) and \(z-(\widetilde{T}^{+}_{\alpha,s})^{*}\) are invertible, with \(\|(z-\widetilde{T}^{+}_{\alpha,s})^{-1}\|_{\mathrm{op}}\leq C\) and \(\|(z-(\widetilde{T}^{+}_{\alpha,s})^{*})^{-1}\|_{\mathrm{op}}\leq C\). In addition, we have_
\[\Theta_{\alpha,s}\frac{1}{z-T^{+}}=\frac{1}{z-\widetilde{T}^{+}_{\alpha,s}} \Theta_{\alpha,s},\quad\text{and}\quad\frac{1}{z-T^{+}}\Theta_{\alpha,s}= \Theta_{\alpha,s}\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}.\]
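The first identity in (16) is easy to check numerically on a finite truncation of the dimerized chain; the following sketch (illustration only, with arbitrary parameter values) does so.

```python
# Quick numerical sanity check (illustration only) of Theta T+ = Ttilde+ Theta.
import numpy as np

N, W, delta, alpha, s = 40, 1.0, 0.3, 0.1, 10
n = np.arange(-N, N)                                    # sites -N, ..., N-1
t_plus = W + (-1.0) ** n * delta                        # dimerized amplitudes t_n^+
theta = np.minimum(np.exp(alpha * n), np.exp(alpha * s))

T_plus = np.zeros((2 * N, 2 * N))
T_tilde = np.zeros((2 * N, 2 * N))
for i in range(2 * N - 1):
    T_plus[i, i + 1] = T_plus[i + 1, i] = t_plus[i]
    T_tilde[i, i + 1] = theta[i] / theta[i + 1] * t_plus[i]
    T_tilde[i + 1, i] = theta[i + 1] / theta[i] * t_plus[i]

Theta = np.diag(theta)
print(np.abs(Theta @ T_plus - T_tilde @ Theta).max())   # ~1e-16: the relation holds
```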
We multiply (11) on the left by \(\theta_{\alpha,s}(n)\), and on the right by \(\theta_{\alpha,s}(n+1)\). Using that, for any operator \(A\) on \(\ell^{2}(\mathbb{Z})\), we have \(\theta_{\alpha,s}(n)A_{n,n+1}\theta_{\alpha,s}(n+1)=(\Theta_{\alpha,s}A\Theta_{\alpha,s})_{n,n+1}\), and the fact that
\[\Theta_{\alpha,s}\frac{1}{z-T^{+}}U\frac{1}{z-T^{+}}\Theta_{\alpha,s}=\frac{ 1}{z-\widetilde{T}^{+}_{\alpha,s}}\underbrace{\Theta_{\alpha,s}U\Theta_{ \alpha,s}}_{=\widetilde{U}_{\alpha,s}}\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s })^{*}},\]
we obtain an equation of the form
\[\left(\widetilde{\mathscr{L}}_{\alpha,s}\widetilde{\mathbf{u}}_{\alpha,s} \right)_{n}=\left(\widetilde{\mathbf{Q}}_{\alpha,s,U}(\mathbf{u},\mathbf{u}) \right)_{n}, \tag{17}\]
where \(\widetilde{\mathscr{L}}_{\alpha,s}\) is the operator defined on \(\ell^{2}(\mathbb{Z})\) by
\[\forall\widetilde{\mathbf{v}}\in\ell^{2}(\mathbb{Z}),\quad\left(\widetilde{\mathscr{L}}_{\alpha,s}\widetilde{\mathbf{v}}\right)_{n}:=\mu\,\widetilde{v}_{n}+4\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\left(\frac{1}{z-\widetilde{T}^{+}_{\alpha,s}}\widetilde{V}\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}\right)\mathrm{d}z\right)_{n,n+1}, \tag{18}\]

where \(\widetilde{V}\) denotes the operator associated with the sequence \(\widetilde{\mathbf{v}}\),
and with the right-hand side given by
\[(\widetilde{\mathbf{Q}}_{\alpha,s,U}(\mathbf{u}_{1},\mathbf{u}_{2}))_{n}=-4 \left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-\widetilde{T}^{+}_ {\alpha,s}}(\Theta_{\alpha,s}U_{1})\frac{1}{z-(T^{+}+U)}(U_{2}\Theta_{\alpha,s })\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}\mathrm{d}z\right)_{n,n+1}. \tag{19}\]
The exponential decay is a consequence of the following Lemmas.
**Lemma 2.6**.: _The operator \(\mathscr{L}\) defined in (12), seen as an operator from \(\ell^{2}(\mathbb{Z})\) to itself, is bounded symmetric with bounded inverse. There is \(\alpha_{*}>0\) and \(C\geq 0\) so that, for all \(0\leq\alpha<\alpha_{*}\) and all \(s\in\mathbb{Z}\), the operator \(\widetilde{\mathscr{L}}_{\alpha,s}\) defined in (18), seen as an operator from \(\ell^{2}(\mathbb{Z})\) to itself, is bounded with bounded inverse._
Note that the operator \(\widetilde{\mathscr{L}}_{\alpha,s}\) is not symmetric for \(\alpha>0\). We refer to Section 4.2 for the proof. A key property that we use in the proof is the fact that the Hessian \(H_{\mathbf{t}^{+}}\) is coercive (see Proposition 4.1 below). Due to the equality \(\langle\mathbf{v},\mathscr{L}\mathbf{v}\rangle_{\ell^{2}}=H_{\mathbf{t}^{+}}( \mathbf{v},\mathbf{v})\) for \(\mathbf{v}\in\ell^{2}(\mathbb{Z})\) (see (13)), this implies that \(\mathscr{L}\) is invertible.
In order to control the right-hand side of (17), we use the following result (see Section 4.3 for the proof).
**Lemma 2.7**.: _There is \(\alpha_{*}>0\) and \(C\geq 0\) so that, for all \(0\leq\alpha<\alpha_{*}\) and all \(s\in\mathbb{Z}\), we have_
\[\forall\mathbf{u}_{1},\mathbf{u}_{2}\in\ell^{\infty}(\mathbb{Z})\cap\ell^{2}( \mathbb{Z}^{+}),\qquad\left\|\widetilde{Q}_{\alpha,s,U}(\mathbf{u}_{1}, \mathbf{u}_{2})\right\|_{\ell^{2}(\mathbb{Z})}\leq C\|\theta_{\alpha,s} \mathbf{u}_{1}\|_{\ell^{4}}\|\theta_{\alpha,s}\mathbf{u}_{2}\|_{\ell^{4}}.\]
We can now prove the exponential decay of \(\mathbf{u}\). From (17) and the two Lemmas, we get that there is \(C\geq 0\) and \(\alpha^{*}>0\) so that, for all \(0<\alpha\leq\alpha^{*}\) and all \(s\in\mathbb{Z}\), we have
\[\|\widetilde{\mathbf{u}}_{\alpha,s}\|_{\ell^{2}}^{2}\leq C\|\theta_{\alpha,s} \mathbf{u}\|_{\ell^{4}}^{4}. \tag{20}\]
Concerning the left-hand side, we note that \(\theta_{\alpha,s}\) is increasing so that \(\theta_{\alpha,s}(n)\theta_{\alpha,s}(n+1)\geq\theta_{\alpha,s}^{2}(n)\). Hence
\[\|\theta_{\alpha,s}^{2}\mathbf{u}\|_{\ell^{2}}^{2}\leq\|\widetilde{\mathbf{u} }_{\alpha,s}\|_{\ell^{2}}^{2}.\]
Let us now bound the right-hand side. We fix \(\varepsilon:=\frac{1}{\sqrt{2C}}\), where \(C\) is the constant appearing in (20). Since \(\mathbf{u}\) goes to \(0\) at \(+\infty\), there is \(M\) large enough so that \(|u_{n}|<\varepsilon\) for all \(n\geq M\). This gives
\[\|\theta_{\alpha,s}\mathbf{u}\|_{\ell^{4}}^{4} =\sum_{n\leq M}\theta_{\alpha,s}^{4}(n)|u_{n}|^{4}+\sum_{n>M}\theta_{\alpha,s}^{4}(n)|u_{n}|^{4}\leq\|\mathbf{u}\|_{\ell^{\infty}}^{4}\sum_{n\leq M}\theta_{\alpha,s}^{4}(n)+\varepsilon^{2}\sum_{n>M}\theta_{\alpha,s}^{4}(n)|u_{n}|^{2}\] \[\leq\|\mathbf{u}\|_{\ell^{\infty}}^{4}\sum_{n\leq M}\mathrm{e}^{4\alpha n}+\varepsilon^{2}\|\theta_{\alpha,s}^{2}\mathbf{u}\|_{\ell^{2}}^{2}=\|\mathbf{u}\|_{\ell^{\infty}}^{4}\frac{\mathrm{e}^{4\alpha M}}{1-\mathrm{e}^{-4\alpha}}+\varepsilon^{2}\|\theta_{\alpha,s}^{2}\mathbf{u}\|_{\ell^{2}}^{2}.\]
Plugging these inequalities in (20) gives
\[(1-C\varepsilon^{2})\|\theta_{\alpha,s}^{2}\mathbf{u}\|_{\ell^{2}}^{2}\leq\| \mathbf{u}\|_{\ell^{\infty}}^{4}\frac{\mathrm{e}^{4\alpha M}}{1-\mathrm{e}^{- 4\alpha}}.\]
With our choice of \(\varepsilon\), the quantity \(1-C\varepsilon^{2}=\frac{1}{2}\) is positive. The right-hand side is a bound independent of \(s\in\mathbb{Z}\). We can take the limit \(s\to\infty\), and conclude that
\[\big(\mathrm{e}^{2\alpha n}u_{n}\big)_{n\in\mathbb{Z}}\in\ell^{2}(\mathbb{Z}),\quad\text{with}\quad\left\|\big(\mathrm{e}^{2\alpha n}u_{n}\big)_{n\in\mathbb{Z}}\right\|_{\ell^{2}(\mathbb{Z})}^{2}\leq 2\|\mathbf{u}\|_{\ell^{\infty}}^{4}\frac{\mathrm{e}^{4\alpha M}}{1-\mathrm{e}^{-4\alpha}}.\]
This proves as wanted that the sequence \(\mathbf{u}\) is exponentially decaying at \(+\infty\).
## 3. Smoothness and Taylor expansion of the map \(\mathcal{F}_{\mathbf{t}}\)
In this section, we prove Lemma 2.1, which states that \(\mathbf{h}\mapsto(T+H)_{-}\) is smooth locally around \(\mathbf{0}\) whenever \(\mathbf{t}\) is a homoclinic or heteroclinic configuration.
We first record a useful Lemma that we will use many times throughout the article. In what follows, we denote by \(\mathcal{B}:=\mathcal{B}(\ell^{2}(\mathbb{Z}))\) the set of bounded operators acting in \(\ell^{2}(\mathbb{Z})\), and by \(\mathfrak{S}_{p}:=\mathfrak{S}_{p}(\ell^{2}(\mathbb{Z}))\) the \(p\)-Schatten class: \(A\in\mathfrak{S}_{p}\) iff A is a compact operator with \(\|A\|_{\mathfrak{S}_{p}}:=\mathrm{Tr}(|A|^{p})^{1/p}<+\infty\). The set \(\mathfrak{S}_{\infty}\) is simply the set of compact operators, with \(\|A\|_{\mathfrak{S}_{\infty}}=\|A\|_{\mathrm{op}}\).
**Lemma 3.1**.: _Let \(\mathbf{a}\) be a sequence from \(\mathbb{Z}\) to \(\mathbb{R}\), and let \(A\) be the corresponding operator._
* _If_ \(\mathbf{a}\in\ell^{\infty}\)_, then_ \(A\) _is a bounded operator (_\(A\in\mathcal{B}\)_), and_ \(\|A\|_{\mathrm{op}}\leq 2\|\mathbf{a}\|_{\ell^{\infty}}\) _;_
* _If_ \(\mathbf{a}\) _goes to_ \(0\) _at_ \(\pm\infty\)_, then_ \(A\) _is compact (_\(A\in\mathfrak{S}_{\infty}\)_) ;_
* _If_ \(\mathbf{a}\in\ell^{p}(\mathbb{Z})\) _for some_ \(1\leq p<\infty\)_, then_ \(A\) _is in the Schatten class_ \(\mathfrak{S}_{p}\)_, and_ \[\|A\|_{\mathfrak{S}^{p}}\leq 2\|\mathbf{a}\|_{\ell^{p}}.\]
Proof.: For the first part, we note that, for all \(\psi\in\ell^{2}(\mathbb{Z})\), we have
\[|\langle\psi,A\psi\rangle_{\ell^{2}}|=\left|\sum_{n\in\mathbb{Z}}a_{n}(\overline {\psi_{n}}\psi_{n+1}+\overline{\psi_{n+1}}\psi_{n})\right|\leq\|\mathbf{a}\|_{ \ell^{\infty}}\sum_{n\in\mathbb{Z}}\big{(}|\psi_{n}|^{2}+|\psi_{n+1}|^{2}\big{)} =2\|\mathbf{a}\|_{\ell^{\infty}}\|\psi\|_{\ell^{2}}^{2},\]
where we used that \(\overline{a}b+a\overline{b}\leq|a|^{2}+|b|^{2}\) in the middle inequality.
For the second part, we note that the operator \(A\) is the limit, for the operator norm, of the finite-rank operators \(A^{N}\) associated with the truncated configurations \(a^{N}:=(\mathbf{1}_{-N\leq n\leq N}\,a_{n})_{n\in\mathbb{Z}}\). Hence \(A\) is compact.
For the last part, we first prove the result for \(p=1\). We have, by duality,
\[\|A\|_{\mathfrak{S}_{1}}=\sup_{\begin{subarray}{c}K\in\mathcal{B}\\ \|K\|_{\mathrm{op}}=1\end{subarray}}|\mathrm{Tr}(AK)|=\sup_{\begin{subarray}{c}K\in\mathcal{B}\\ \|K\|_{\mathrm{op}}=1\end{subarray}}\left|\sum_{n\in\mathbb{Z}}a_{n}(K_{n+1,n}+K_{n,n+1})\right|\leq\|\mathbf{a}\|_{\ell^{1}}\sup_{\begin{subarray}{c}K\in\mathcal{B}\\ \|K\|_{\mathrm{op}}=1\end{subarray}}\sup_{n\in\mathbb{Z}}\left(|K_{n+1,n}|+|K_{n,n+1}|\right)\leq 2\|\mathbf{a}\|_{\ell^{1}}.\]
We used in the last line that \(|K_{n,n+1}|=|\langle e_{n},Ke_{n+1}\rangle|\leq\|K\|_{\mathrm{op}}\). Finally, to conclude the proof, we proceed by interpolation using Riesz-Thorin interpolation theorem for Schatten spaces (see [14, Remark 1 p.23] and [13, p.115] for the version with \(\mathcal{B}\) instead of \(\mathfrak{S}_{\infty}\)).
### The spectrum of homoclinic and heteroclinic configurations
In order to prove that \(\mathcal{F}_{\mathbf{t}}\) is smooth, we first study the spectrum of the operator \(T\) when \(\mathbf{t}\) is such a configuration. We treat the two cases separately.
#### 3.1.1. Homoclinic configurations
Let \(\{\mathbf{t}\}\) be a homoclinic configuration with \(\mathbf{t}\geq\tau\) for some \(\tau>0\). Then, we can write \(\mathbf{t}=\mathbf{t}^{+}+\mathbf{u}\), where we recall that \(\mathbf{t}^{+}\) is the dimerized configuration \(t_{n}^{+}=W+(-1)^{n}\delta\) with \(\delta>0\), and where the sequence \(\mathbf{u}\) goes to \(0\) at \(\pm\infty\).
We have \(T=T^{+}+U\). The operator \(T^{+}\) has purely essential spectrum, of the form (see for instance [4] and references therein)
\[\sigma\left(T^{+}\right)=\sigma_{\mathrm{ess}}\left(T^{+}\right)=[-2W,-2 \delta]\cup[2\delta,2W].\]
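For the reader's convenience, we sketch the standard computation behind this formula (the unit-cell convention, grouping the sites \(2m\) and \(2m+1\), is ours). After a Bloch transform, \(T^{+}\) is unitarily equivalent to the multiplication by the family of \(2\times 2\) matrices

\[H^{+}(k)=\begin{pmatrix}0&(W+\delta)+(W-\delta)\mathrm{e}^{-\mathrm{i}k}\\ (W+\delta)+(W-\delta)\mathrm{e}^{\mathrm{i}k}&0\end{pmatrix},\qquad k\in[-\pi,\pi),\]

whose eigenvalues are \(\pm\left|(W+\delta)+(W-\delta)\mathrm{e}^{\mathrm{i}k}\right|\). Since

\[\left|(W+\delta)+(W-\delta)\mathrm{e}^{\mathrm{i}k}\right|^{2}=2(W^{2}+\delta^{2})+2(W^{2}-\delta^{2})\cos k\in[4\delta^{2},4W^{2}],\]

the two bands fill exactly \([-2W,-2\delta]\cup[2\delta,2W]\).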
In particular, \(T^{+}\) has a spectral gap of size \(4\delta\) around \(0\). On the other hand, since \(\mathbf{u}\) goes to \(0\) at \(\pm\infty\), \(U\) is compact, see Lemma 3.1. We thus deduce from Weyl's theorem that
\[\sigma_{\mathrm{ess}}(T)=\sigma_{\mathrm{ess}}(T^{+})=[-2W,-2\delta]\cup[2 \delta,2W]. \tag{21}\]
In particular, \(0\notin\sigma_{\mathrm{ess}}(T)\). In addition, we claim that \(0\) is not an eigenvalue of \(T\). More specifically, we have the following.
**Lemma 3.2**.: _Let \(\{\mathbf{t}\}\) be_ **any** _configuration with \(\mathbf{t}\geq\tau\) for some \(\tau>0\) (in particular, all coefficients \(t_{n}\) are non null). Assume there is \(N_{0}\in\mathbb{N}\) and \(0<\kappa<1\) so that_
\[\textbf{(Homoclinic case)}\quad\forall|n|\geq N_{0},\quad\left|\frac{t_{2n+1}}{t _{2n}}\right|\leq\kappa.\]
_Then \(0\) is not an eigenvalue of \(T\). Conversely, if_
\[\textbf{(Heteroclinic case)}\quad\forall n\geq N_{0},\quad\left|\frac{t_{2n+1}}{t _{2n}}\right|\leq\kappa\quad\text{and}\quad\left|\frac{t_{-2n}}{t_{-2n-1}} \right|\leq\kappa,\]
_then \(0\) is an eigenvalue of \(T\) of multiplicity \(1\)._
For a homoclinic (resp. heteroclinic) configuration, the first (resp. second) condition is satisfied with \(\kappa=\frac{W-\delta}{W+\delta}<1\).
Proof.: The eigenvalue equation \(T\psi=0\) reads
\[\forall n\in\mathbb{Z},\quad t_{n}\psi_{n}+t_{n+1}\psi_{n+2}=0.\]
We obtain directly
\[\psi_{2n}=(-1)^{n}\prod_{m=1}^{n}\left(\frac{t_{2m-2}}{t_{2m-1}}\right)\psi_{ 0},\quad\psi_{2n+1}=(-1)^{n}\prod_{m=1}^{n}\left(\frac{t_{2m-1}}{t_{2m}} \right)\psi_{1}.\]
The vector space \(\{\psi,\ T\psi=0\}\) is therefore \(2\) dimensional, since \(\psi\in\{T\psi=0\}\) can be recovered from its values \(\psi_{0}\) and \(\psi_{1}\), and we have \(\mathrm{Ker}(T)=\{T\psi=0\}\cap\ell^{2}(\mathbb{Z})\).
Let us first consider the homoclinic case, and let \(\psi\in\{T\psi=0\}\). Since \(|t_{2n}/t_{2n+1}|\geq\kappa^{-1}>1\) for \(n\geq N_{0}\), we have \(|\psi_{2N_{0}+2k}|\geq|\psi_{2N_{0}}|\kappa^{-k}\) as \(k\to\infty\), so \(\psi\) cannot be square integrable at \(+\infty\), unless \(\psi_{2N_{0}}=0\), which is equivalent to \(\psi_{0}=0\). Similarly, we have \(|\psi_{-2N_{0}-2k+1}|\geq|\psi_{-2N_{0}+1}|\kappa^{-k}\) as \(k\to\infty\), so \(\psi\) cannot be square integrable at \(-\infty\), unless \(\psi_{-2N_{0}+1}=0\), which gives \(\psi_{1}=0\) as well. So \(\mathrm{Ker}(T)=\{0\}\).
In the heteroclinic case, the same reasoning shows that we must have \(\psi_{0}=0\). However, given \(\psi_{1}\in\mathbb{R}\), the function \(\psi\) with \(\psi_{2n+1}=(-1)^{n}\prod_{m=1}^{n}\left(\frac{t_{2m-1}}{t_{2m}}\right)\psi_{1}\) and \(\psi_{2n}=0\) is a square integrable non null eigenvector. In this case, \(\dim\mathrm{Ker}(T)=1\).
**Remark 3.3**.: _In the heteroclinic case, the corresponding normalized eigenvector \(\psi\) is sometimes called an_ edge state_, or_ interface state _or_ zero mode_. As shown in the proof, it is exponentially decaying at \(\pm\infty\): there is \(C\geq 0\) and \(\beta:=-\log(\kappa)>0\) so that \(|\psi_{n}|\leq C\mathrm{e}^{-\beta|n|}\). It is always exponentially decaying, even though the sequence \(\mathbf{t}\) may converge to \(\mathbf{t}^{\pm}\) very slowly at \(\pm\infty\). Actually, we do not require \(\mathbf{t}\) to be a critical point here._
_Note that it is only supported on the odd integers: \(\psi_{2n}=0\) for all \(n\in\mathbb{Z}\). In particular, the corresponding projector \(Z:=|\psi\rangle\langle\psi|\) satisfies_
\[\forall n\in\mathbb{Z},\qquad Z_{n,n+1}=Z_{n+1,n}=0.\]
Let us return to the homoclinic case. We proved that \(0\notin\sigma(T)\). Let \(g:=\operatorname{dist}(0,\sigma(T))\) be the distance between \(0\) and the spectrum of \(T\), and set \(\eta:=g/8\). Let \(\mathbf{h}\) be any perturbation with \(\|\mathbf{h}\|_{\infty}\leq\eta\). Then \(\|H\|_{\mathrm{op}}\leq 2\eta\) by Lemma 3.1. In particular, the spectrum of \(T+H\) is \(2\eta\)-close to the one of \(T\), hence \(\sigma(T+H)\cap[-g/2,g/2]=\emptyset\).
Let us consider the positively oriented contour \(\mathscr{C}\) in (8). We deduce first that \((z-(T+H))\) is invertible for all \(z\in\mathscr{C}\), and with \(\|(z-(T+H))^{-1}\|_{\mathrm{op}}\leq C\) for a constant \(C\) independent of \(z\in\mathscr{C}\). Also, from the Cauchy residual formula, we have
\[\mathds{1}(T+H<0)=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{\mathrm{d}z }{z-(T+H)},\]
and
\[(T+H)_{-}=-(T+H)\mathds{1}(T+H<0)=\frac{-1}{2\mathrm{i}\pi}\oint_{\mathscr{C}} \frac{z}{z-(T+H)}\mathrm{d}z.\]
#### 3.1.2. Heteroclinic configurations
Now, let \(\{\mathbf{t}\}\) be a heteroclinic configuration with \(\mathbf{t}\geq\tau\) for some \(\tau>0\). First, we claim that \(0\notin\sigma_{\mathrm{ess}}(T)\).
**Lemma 3.4**.: _Let \(\mathbf{t}\) be a heteroclinic configuration. Then_
\[\sigma_{\mathrm{ess}}(T)=[-2W,-2\delta]\cup[2\delta,2W].\]
_In particular, \(0\notin\sigma_{\mathrm{ess}}(T)\)._
Proof.: Introduce the sequence \(\widetilde{\mathbf{t}}\) with \(\widetilde{t_{n}}=t_{n}\) if \(n\neq 0\), and \(\widetilde{t_{0}}=0\). We denote by \(\widetilde{T}\) the corresponding operator, and set \(K:=T-\widetilde{T}\). In a matrix format, the decomposition \(T=\widetilde{T}+K\) reads
\[\left(\begin{array}{cccc|c}\ddots&\ddots&0\\ \ddots&0&t_{-1}&0\\ 0&t_{-1}&0&t_{0}&0\\ \hline 0&t_{0}&0&t_{1}&0\\ &&0&t_{1}&0&\ddots\\ &&&0&\ddots&\ddots\end{array}\right)=\left(\begin{array}{cccc|c}\ddots&\ddots&0 \\ \ddots&0&t_{-1}&0\\ 0&t_{-1}&0&0&0\\ \hline 0&0&0&t_{1}&0\\ &&0&t_{1}&0&\ddots\\ &&&0&\ddots&\ddots\end{array}\right)+\left(\begin{array}{cccc|c}\ddots&\ddots& 0\\ \ddots&0&0&0\\ 0&0&t_{0}&0&0\\ \hline 0&t_{0}&0&0&0\\ &&0&0&\ddots\\ &&0&\ddots&\ddots\end{array}\right)\]
The operator \(K\) is of rank \(2\), hence is compact, so \(\sigma_{\mathrm{ess}}(T)=\sigma_{\mathrm{ess}}(\widetilde{T})\) by Weyl's theorem. In addition, the operator \(\widetilde{T}\) is of the form \(\widetilde{T}=\widetilde{T}_{L}\oplus\widetilde{T}_{R}\) acting on \(\ell^{2}(\mathbb{Z})\sim\ell^{2}(\mathbb{Z}^{-})\oplus\ell^{2}(\mathbb{Z}^{+})\), hence
\[\sigma_{\mathrm{ess}}(\widetilde{T})=\sigma_{\mathrm{ess}}(\widetilde{T}_{L} )\bigcup\sigma_{\mathrm{ess}}(\widetilde{T}_{R}).\]
Let us first focus on the right operator \(\widetilde{T}_{R}\). The hopping amplitudes \(\widetilde{t_{n}}\) for \(n\geq 1\) are of the form \(\widetilde{t_{n}}=t_{n}^{+}+u_{n}\) with \(\lim_{n\to\infty}u_{n}=0\). So, with obvious notation, \(\widetilde{T}_{R}=\widetilde{T}_{R}^{+}+U_{R}\). The sequence \((u_{n})\) goes to zero, so \(U_{R}\) is a compact operator (the proof is similar to that of Lemma 3.1), and \(\sigma_{\mathrm{ess}}(\widetilde{T}_{R})=\sigma_{\mathrm{ess}}(\widetilde{T}_{R}^{+})\). Finally, reasoning as before and introducing the cut compact operator \(K^{+}:=T^{+}-\widetilde{T}^{+}\), we have
\[\sigma(T^{+})=\sigma_{\mathrm{ess}}(T^{+})=\sigma_{\mathrm{ess}}(\widetilde{T}_ {L}^{+})\cup\sigma_{\mathrm{ess}}(\widetilde{T}_{R}^{+}).\]
In addition, since \(t_{-n}^{+}=t_{n}^{+}\) for the dimerized configuration \(\mathbf{t}^{+}\), \(\widetilde{T}_{L}^{+}\) is unitary equivalent to \(\widetilde{T}_{R}^{+}\), and in particular
\[\sigma_{\mathrm{ess}}(\widetilde{T}_{R}^{+})=\sigma_{\mathrm{ess}}(\widetilde{ T}_{L}^{+})=[-2W,-2\delta]\cup[2\delta,2W].\]
Altogether, we proved that \(\sigma_{\mathrm{ess}}(\widetilde{T}_{R})=[-2W,-2\delta]\bigcup[2\delta,2W]\). The proof for the left part is similar, upon replacing \(T^{+}\) by \(T^{-}\).
In addition, using Lemma 3.2, we know that \(0\) is an eigenvalue of \(T\) of multiplicity \(1\). So \(0\) is an isolated eigenvalue, and we set
\[g:=\operatorname{dist}\left(0,\sigma(T)\setminus\{0\}\right)\quad>0,\quad \text{and}\quad\eta:=\min\left\{\frac{g}{8},\frac{\tau}{2},\frac{\delta}{2} \right\}>0.\]
By standard perturbation theory, for all \(\mathbf{h}\) with \(\|\mathbf{h}\|_{\ell^{\infty}}\leq\eta\) (hence \(\|H\|_{\mathrm{op}}\leq 2\eta\) by Lemma 3.1), the spectrum of \(T+H\) is composed of an isolated eigenvalue \(\lambda_{0}(T+H)\) of multiplicity \(1\), with \(|\lambda_{0}(T+H)|\leq 2\eta\leq g/4\) corresponding to the perturbation of the \(0\) eigenvalue of \(T\), and the rest of the spectrum, at distance at least \(g-2\eta>3g/4\) from \(0\).
Since \(\|\mathbf{h}\|_{\ell^{\infty}}<\tau/2\) and \(\|\mathbf{h}\|_{\ell^{\infty}}<\delta/2\), the vector \(\mathbf{t}+\mathbf{h}\) satisfies \(\mathbf{t}+\mathbf{h}\geq\tau/2>0\) and \((t_{n}+h_{n})\in(t_{n}-\delta/2,t_{n}+\delta/2)\). In particular, it satisfies the assumption of Lemma 3.2 (heteroclinic case) with \(\kappa=\frac{W-\delta/2}{W+\delta/2}<1\). So \(\lambda_{0}(T+H)=0\): the eigenvalue \(0\) is unperturbed by the addition of \(H\).
We consider the positively oriented contour \(\mathscr{C}\) defined in (8). We deduce from the previous discussion that, for all \(\mathbf{h}\) with \(\|\mathbf{h}\|_{\ell^{\infty}}\leq\eta\), we have
\[(T+H)_{-}=-(T+H)\mathds{1}(T+H<0)=\frac{-1}{2\mathrm{i}\pi}\oint_{\mathscr{C}} \frac{z}{z-(T+H)}\mathrm{d}z,\]
where all operators appearing are uniformly bounded by some constant \(C\geq 0\) independent of \(z\in\mathscr{C}\). We also remark that we have
\[\mathds{1}(T+H<0)=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{\mathrm{d}z }{z-(T+H)},\quad\text{and}\quad\mathds{1}(T+H\leq 0)=\mathds{1}(T+H<0)+Z,\]
where \(Z=|\psi\rangle\langle\psi|\) is the rank-1 projector onto the normalized zero-mode \(\psi\in\mathrm{Ker}(T+H)\), see Remark 3.3.
### Taylor expansion of the map \(\mathcal{F}_{\mathbf{t}}\)
In this section, we study the energy \(\mathcal{F}_{\mathbf{t}}\) in (7), and prove Lemma 2.2. Recall that \(\mathcal{F}_{\mathbf{t}}\) is defined by
\[\mathcal{F}_{\mathbf{t}}(\mathbf{h}):=\frac{\mu}{2}\sum_{n\in\mathbb{Z}}(h_{n }+2t_{n}-2)h_{n}-2\mathrm{Tr}((T+H)_{-}-T_{-}).\]
In what follows, \(\mathbf{t}\) is a homoclinic or heteroclinic configuration with \(\mathbf{t}\geq\tau\) for some \(\tau>0\). We introduce the constant \(\eta>0\) and the contour \(\mathscr{C}\) as in the previous section.
First, we claim that for all \(\mathbf{h}\) with \(\|\mathbf{h}\|_{\ell^{1}}\leq\eta\) (we now use the \(\ell^{1}\) norm), the map \(\mathcal{F}_{\mathbf{t}}(\mathbf{h})\) is well-defined and finite. Since \(\|\mathbf{h}\|_{\ell^{\infty}}\leq\|\mathbf{h}\|_{\ell^{1}}\leq\eta\), \(\mathbf{h}\) satisfies the conditions of the previous section. For the first part of the energy, we write that
\[\sum_{n\in\mathbb{Z}}h_{n}^{2}=\|\mathbf{h}\|_{\ell^{2}}^{2}\leq\|\mathbf{h}\| _{\ell^{1}}^{2},\quad\text{and}\quad\left|\sum_{n\in\mathbb{Z}}(2t_{n}-2)h_{n} \right|\leq(2\|\mathbf{t}\|_{\ell^{\infty}}+2)\,\|\mathbf{h}\|_{\ell^{1}},\]
so the first part is continuous from \(\ell^{1}\) to \(\mathbb{R}\). For the second part, we use the Cauchy residual formula, and get that
\[(T+H)_{-}-T_{-}=\frac{-1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\left(\frac{z}{z-( T+H)}-\frac{z}{z-T}\right)\mathrm{d}z=\frac{-1}{2\mathrm{i}\pi}\oint_{\mathscr{C}} \left(\frac{1}{z-(T+H)}H\frac{1}{z-T}\right)z\mathrm{d}z,\]
where we used the resolvent formula in the last equality. In particular, for all \(\mathbf{h}\in\ell^{1}\) with \(\|\mathbf{h}\|_{\ell^{1}}\leq\eta\) and all \(z\in\mathscr{C}\), we have, using Lemma 3.1,
\[\left\|\frac{1}{z-(T+H)}H\frac{1}{z-T}\right\|_{\mathfrak{S}_{1}}\leq\left\| \frac{1}{z-(T+H)}\right\|_{\mathrm{op}}\|H\|_{\mathfrak{S}_{1}}\left\|\frac{1} {z-T}\right\|_{\mathrm{op}}\leq 2C^{2}\|\mathbf{h}\|_{\ell^{1}}. \tag{22}\]
Integrating \(z\) over the compact contour \(\mathscr{C}\) then shows that \(\mathcal{F}_{\mathbf{t}}(\mathbf{h})\) is well-defined and continuous around \(\mathbf{0}\) for the \(\ell^{1}\) norm.
We can push the resolvent formula, and write that
\[\frac{1}{z-(T+H)}-\frac{1}{z-T}=\sum_{n=1}^{\infty}\frac{1}{z-T}\left(H\frac{1} {z-T}\right)^{n},\]
where the sum on the right is absolutely convergent in \(\mathcal{B}\) whenever
\[\sup_{z\in\mathscr{C}}\left\|\frac{1}{z-T}H\right\|_{\mathrm{op}}<1,\]
which happens whenever \(\|\mathbf{h}\|_{\ell^{\infty}}\) is small enough, according to Lemma 3.1. Actually, it is also absolutely convergent in \(\mathfrak{S}_{1}\) whenever \(\|\mathbf{h}\|_{\ell^{1}}\) is small enough. We deduce directly that \(\mathbf{h}\mapsto\mathcal{F}_{\mathbf{t}}(\mathbf{h})\) is analytic on a \(\ell^{1}\) neighborhood of \(\mathbf{0}\).
Let us compute the differential and hessian of this map. We write
\[\mathcal{F}_{\mathbf{t}}(\mathbf{h})=L_{\mathbf{t}}(\mathbf{h})+\frac{1}{2}H_ {\mathbf{t}}(\mathbf{h},\mathbf{h})+R_{\mathbf{t}}(\mathbf{h}),\]
with the linear form (differential) \(L_{\mathbf{t}}\) on \(\ell^{1}(\mathbb{Z})\), defined by
\[L_{\mathbf{t}}(\mathbf{h}):=\mu\sum_{n\in\mathbb{Z}}(t_{n}-1)h_{n}+2\mathrm{Tr }\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T}H\frac{1}{z-T} z\mathrm{d}z\right),\]
the bilinear form (hessian) \(H_{\mathbf{t}}\) on \(\ell^{1}(\mathbb{Z})\times\ell^{1}(\mathbb{Z})\), defined by
\[H_{\mathbf{t}}(\mathbf{h},\mathbf{k}):=\mu\sum_{n\in\mathbb{Z}}h_{n}k_{n}+4 \mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T}H \frac{1}{z-T}K\frac{1}{z-T}z\mathrm{d}z\right),\]
and the rest
\[R_{\mathbf{t}}(\mathbf{h}):=2\,\mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi}\oint_{ \mathscr{C}}\left[\frac{1}{z-T}H\right]^{3}\frac{1}{z-T}z\mathrm{d}z\right).\]
Reasoning as in (22), we see that \(|R_{\mathbf{t}}(\mathbf{h})|\leq C\|\mathbf{h}\|_{\ell^{3}}^{3}\leq C\| \mathbf{h}\|_{\ell^{1}}^{3}\). Similarly, we have
\[|H_{\mathbf{t}}(\mathbf{h},\mathbf{k})|\leq C\|H\|_{\mathfrak{S}_{2}}\|K\|_{ \mathfrak{S}_{2}}\leq C^{\prime}\|\mathbf{h}\|_{\ell^{2}}\|\mathbf{k}\|_{\ell^ {2}},\]
so the \(H_{\mathbf{t}}\) bilinear form can be extended continuously on \(\ell^{2}(\mathbb{Z})\).
To end the proof of Lemma 2.2, it remains to simplify the expressions of \(L_{\mathbf{t}}\) and \(H_{\mathbf{t}}\). We use the following result.
**Lemma 3.5**.: _We have_
\[\mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T}H \frac{1}{z-T}z\mathrm{d}z\right)=\mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi} \oint_{\mathscr{C}}\frac{1}{z-T}H\mathrm{d}z\right)=\mathrm{Tr}(\Gamma_{ \mathbf{t}}H) \tag{23}\]
_and_
\[4\mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T}H \frac{1}{z-T}K\frac{1}{z-T}z\mathrm{d}z\right)=2\mathrm{Tr}\left(\frac{1}{2 \mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T}H\frac{1}{z-T}K\mathrm{d}z\right). \tag{24}\]
Proof.: First, writing that \(z=(z-T)+T\), we get
\[\frac{1}{2\mathrm{i}\pi}\oint\frac{z\mathrm{d}z}{(z-T)^{2}}=\frac{1}{2\mathrm{ i}\pi}\oint\frac{\mathrm{d}z}{z-T}+\frac{T}{2\mathrm{i}\pi}\oint\frac{ \mathrm{d}z}{(z-T)^{2}},\]
and the second term vanishes by the Cauchy residue formula. We recognize the spectral projector
\[\Gamma_{\mathbf{t}}:=\mathds{1}(T<0)=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C }}\frac{\mathrm{d}z}{z-T},\]
in the first term. This and the cyclicity of the trace gives (23). We now differentiate this equality with respect to \(T\), in the direction \(K\). We get
\[\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\mathrm{Tr}\left(\frac {1}{z-T}K\frac{1}{z-T}H\frac{1}{z-T}+\frac{1}{z-T}H\frac{1}{z-T}K\frac{1}{z-T }\right)z\mathrm{d}z\] \[\quad=\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\mathrm{Tr}\left( \frac{1}{z-T}K\frac{1}{z-T}H\right)\mathrm{d}z.\]
Using again the cyclicity of the trace gives (24).
## 4. Proofs of the Lemmas
In this section, we provide the proofs of the Lemmas appearing in Section 2.4.
### Proof of Lemma 2.5
We first prove Lemma 2.5, which compares \(T\) and \(\widetilde{T}^{+}_{\alpha,s}\). The fact that \(\Theta_{\alpha,s}T^{+}=\widetilde{T}^{+}_{\alpha,s}\Theta_{\alpha,s}\) is a simple computation. Taking the adjoint gives the second equality of (16).
Let us now prove that \((z-\widetilde{T}^{+}_{\alpha,s})\) is invertible for \(\alpha\) small enough. The operator \(T-T^{+}_{\alpha,s}\) satisfies
\[\big{(}T-T^{+}_{\alpha,s}\big{)}_{n,n+1}=\begin{cases}t^{+}_{n}(1-\mathrm{e}^{ -\alpha})&\text{ if }n<s\\ 0&\text{ if }n\geq s\end{cases},\quad\big{(}T-T^{+}_{\alpha,s}\big{)}_{n+1,n}= \begin{cases}t^{+}_{n}(1-\mathrm{e}^{\alpha})&\text{ if }n<s\\ 0&\text{ if }n\geq s\end{cases},\]
and \(\big{(}T-T^{+}_{\alpha,s}\big{)}_{i,j}=0\) if \(|i-j|\neq 1\). Reasoning as in Lemma 3.1, we deduce that
\[\|T-T^{+}_{\alpha,s}\|_{\mathrm{op}}\leq 2\max_{n\in\mathbb{Z}}|t^{+}_{n}| \cdot\max\{|1-\mathrm{e}^{-\alpha}|,|1-\mathrm{e}^{\alpha}|\}=2(W+\delta)( \mathrm{e}^{\alpha}-1). \tag{25}\]
This bound is independent of \(s\in\mathbb{Z}\), and goes to \(0\) as \(\alpha\to 0\). Since \((z-T^{+})\) is invertible for all \(z\in\mathscr{C}\), we deduce that for \(\alpha\) small enough, \((z-\widetilde{T}^{+}_{\alpha,s})\) is invertible with bounded inverse, and satisfies \(\|(z-\widetilde{T}^{+}_{\alpha,s})^{-1}\|\leq C\) for a constant \(C\) independent of \(z\in\mathscr{C}\).
Finally, from the equality \(\Theta_{\alpha,s}T^{+}=\widetilde{T}^{+}_{\alpha,s}\Theta_{\alpha,s}\), we get \(\Theta_{\alpha,s}(z-T^{+})=(z-\widetilde{T}^{+}_{\alpha,s})\Theta_{\alpha,s}\), which gives, as wanted
\[\Theta_{\alpha,s}\frac{1}{z-T^{+}}=\frac{1}{z-\widetilde{T}^{+}_{\alpha,s}} \Theta_{\alpha,s},\quad\text{and}\quad\frac{1}{z-T^{+}}\Theta_{\alpha,s}= \Theta_{\alpha,s}\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}.\]
### Proof of Lemma 2.6
We now prove Lemma 2.6. The first and most important step is to prove the following Proposition, whose proof is postponed to Section 4.4
**Proposition 4.1**.: _For the dimerized configuration \(\mathbf{t}^{+}\), the hessian \(H_{\mathbf{t}^{+}}\) is bounded on \(\ell^{2}(\mathbb{Z})\times\ell^{2}(\mathbb{Z})\) and coercive._
Using this result, we can prove that \(\mathscr{L}\) is a symmetric bounded invertible operator. Recall that \(\mathscr{L}\) is defined in (12) by
\[(\mathscr{L}\mathbf{u})_{n}=\mu u_{n}+4\left(\frac{1}{2\mathrm{i}\pi}\oint_{ \mathscr{C}}\left(\frac{1}{z-T^{+}}U\frac{1}{z-T^{+}}\right)\mathrm{d}z\right) _{n,n+1}.\]
As we already noticed in (13), we have
\[\langle\mathbf{v},\mathscr{L}\mathbf{w}\rangle_{\ell^{2}}=\langle\mathscr{L} \mathbf{v},\mathbf{w}\rangle_{\ell^{2}}=H_{\mathbf{t}^{+}}(\mathbf{v},\mathbf{ w}).\]
This equality is first valid for \(\mathbf{v},\mathbf{w}\) compactly supported, but can be extended for \(\mathbf{v},\mathbf{w}\in\ell^{2}(\mathbb{Z})\) by continuity of \(H_{\mathbf{t}^{+}}\). This already proves that \(\mathscr{L}\) is a symmetric bounded operator on \(\ell^{2}(\mathbb{Z})\). In addition, the coercivity of \(H_{\mathbf{t}^{+}}\) shows that \(\mathscr{L}\) is invertible with bounded inverse (Lax-Milgram theorem).
We now focus on the map \(\widetilde{\mathscr{L}}_{\alpha,s}\) defined in (18). We claim that \(\|\widetilde{\mathscr{L}}_{\alpha,s}-\mathscr{L}\|_{\mathrm{op}}\) goes to \(0\) as \(\alpha\to 0\). This will eventually prove that \(\widetilde{\mathscr{L}}_{\alpha,s}\) is also invertible with bounded inverse.
We have, for \(\mathbf{u},\mathbf{v}\in\ell^{2}(\mathbb{Z})\),
\[\langle\mathbf{v},(\mathscr{L}-\widetilde{\mathscr{L}}_{\alpha,s})\mathbf{u} \rangle_{\ell^{2}}=2\mathrm{Tr}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C} }\left[\frac{1}{z-T^{+}}U\frac{1}{z-T^{+}}V-\frac{1}{z-\widetilde{T}^{+}_{ \alpha,s}}U\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}V\right]\mathrm{d}z \right).\]
We have
\[\frac{1}{z-T^{+}}U\frac{1}{z-T^{+}}-\frac{1}{z-\widetilde{T}^{+}_{ \alpha,s}}U\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}=\] \[\quad=\left(\frac{1}{z-T^{+}}-\frac{1}{z-\widetilde{T}^{+}_{ \alpha,s}}\right)U\frac{1}{z-T^{+}}+\frac{1}{z-\widetilde{T}^{+}_{\alpha,s}}U \left(\frac{1}{z-T^{+}}-\frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}\right)\] \[\quad=\frac{1}{z-T^{+}}(\widetilde{T}^{+}_{\alpha,s}-T^{+})\frac {1}{z-\widetilde{T}^{+}_{\alpha,s}}U\frac{1}{z-T^{+}}+\frac{1}{z-\widetilde{T}^ {+}_{\alpha,s}}U\frac{1}{z-T^{+}}((\widetilde{T}^{+}_{\alpha,s})^{*}-T^{+}) \frac{1}{z-(\widetilde{T}^{+}_{\alpha,s})^{*}}.\]
We then use estimates of the form (\(R\) stands for resolvent)
\[\mathrm{Tr}(R_{1}(T-\widetilde{T})R_{2}UR_{3}V)\leq\left\|R_{1}(T- \widetilde{T})R_{2}UR_{3}V\right\|_{\mathfrak{S}^{1}}\leq\|R_{1}\|_{\mathrm{op}} \|R_{2}\|_{\mathrm{op}}\|R_{3}\|_{\mathrm{op}}\|T-\widetilde{T}\|_{\mathrm{op}} \|U\|_{\mathfrak{S}^{2}}\|V\|_{\mathfrak{S}^{2}}.\]
We deduce that there is \(C\geq 0\) so that, for \(\alpha\) small enough,
\[\left|\langle\mathbf{v},(\mathscr{L}-\widetilde{\mathscr{L}}_{ \alpha,s})\mathbf{u}\rangle_{\ell^{2}}\right|\leq C\|\widetilde{T}_{\alpha,s}^ {+}-T^{+}\|_{\mathrm{op}}\|U\|_{\mathfrak{S}^{2}}\|V\|_{\mathfrak{S}^{2}}\leq 2 C\|\widetilde{T}_{\alpha,s}^{+}-T^{+}\|_{\mathrm{op}}\|\mathbf{u}\|_{ \ell^{2}}\|\mathbf{v}\|_{\ell^{2}},\]
where we used Lemma 3.1 in the last inequality. We proved in Lemma 2.5 that \(\|\widetilde{T}_{\alpha,s}^{+}-T^{+}\|_{\mathrm{op}}\to 0\) as \(\alpha\to 0\). Together with the fact that \(\mathscr{L}\) is invertible, we deduce that there are \(\alpha^{*}>0\) and \(C\geq 0\) so that, for all \(0\leq\alpha<\alpha^{*}\) and all \(s\in\mathbb{Z}\), the operator \(\widetilde{\mathscr{L}}_{\alpha,s}\) is invertible with \(\|(\widetilde{\mathscr{L}}_{\alpha,s})^{-1}\|_{\mathrm{op}}\leq C\). This concludes the proof of Lemma 2.6.
### Proof of Lemma 2.7
Finally, we focus on the map \(\widetilde{Q}_{\alpha,s,U}\) defined in (19). First, using that \(\sum_{n}(A)_{n,n+1}^{2}\leq\|A\|_{\mathfrak{S}^{2}}^{2}\) and estimates of the form
\[\|R_{1}(\Theta V)R_{2}(W\Theta)R_{3}\|_{\mathfrak{S}^{2}}\leq\|R_{1}\|_{ \mathrm{op}}\|R_{2}\|_{\mathrm{op}}\|R_{3}\|_{\mathrm{op}}\|\Theta V\|_{ \mathfrak{S}_{4}}\|W\Theta\|_{\mathfrak{S}_{4}},\]
we get
\[\left\|\widetilde{Q}_{\alpha,s,U}(\mathbf{v},\mathbf{w})\right\| _{\ell^{2}(\mathbb{Z})}^{2}\leq C\|\Theta_{\alpha,s}V\|_{\mathfrak{S}^{4}}\|W \Theta_{\alpha,s}\|_{\mathfrak{S}^{4}}.\]
It remains to bound \(\|\Theta_{\alpha,s}U\|_{\mathfrak{S}_{4}}\) by \(\|\theta_{\alpha,s}\mathbf{u}\|_{\ell^{4}}\). To do so, we follow the steps of Lemma 3.1, and prove that for all \(1\leq p<\infty\) and all \(\mathbf{u}\in\ell^{p}(\mathbb{Z})\), we have \(\Theta_{\alpha,s}U\) in \(\mathfrak{S}_{p}\) (in \(\mathcal{B}\) if \(p=\infty\)), and
\[\|\Theta_{\alpha,s}U\|_{\mathfrak{S}_{p}}\leq C_{p}\|\theta_{ \alpha,s}\mathbf{u}\|_{\ell^{p}} \tag{26}\]
for a constant \(C_{p}\) independent of \(\mathbf{u}\) (and \(\|\Theta_{\alpha,s}U\|_{\mathrm{op}}\leq C_{\infty}\|\theta_{\alpha,s} \mathbf{u}\|_{\ell^{\infty}}\) for \(p=\infty\)). We use below the fact that
\[\theta_{\alpha,s}(n)\leq\theta_{\alpha,s}(n+1)\leq\mathrm{e}^{ \alpha}\theta_{\alpha,s}(n). \tag{27}\]
First, for \(p=\infty\), we have, for \(\psi\in\ell^{2}(\mathbb{Z})\),
\[\|\Theta_{\alpha,s}U\psi\|_{\ell^{2}}^{2} =\sum_{n\in\mathbb{Z}}\theta_{\alpha,s}^{2}(n)|u_{n-1}\psi_{n-1}+ u_{n}\psi_{n+1}|^{2}\leq 2\sum_{n\in\mathbb{Z}}\theta_{\alpha,s}^{2}(n)|u_{n-1}|^{2}| \psi_{n-1}|^{2}+\theta_{\alpha,s}^{2}(n)|u_{n}|^{2}|\psi_{n+1}|^{2}\] \[\leq 2\mathrm{e}^{\alpha}\|\theta_{\alpha,s}\mathbf{u}\|_{\ell^{ \infty}}^{2}\|\psi\|_{\ell^{2}}^{2}+2\|\theta_{\alpha,s}\mathbf{u}\|_{\ell^{ \infty}}^{2}\|\psi\|_{\ell^{2}}^{2}.\]
We used that \(|a+b|^{2}\leq 2|a|^{2}+2|b|^{2}\) for the first inequality, and (27) for the second. This proves the bound
\[\|\Theta_{\alpha,s}U\|_{\mathrm{op}}^{2}\leq(2\mathrm{e}^{\alpha}+2)\left\| \theta_{\alpha,s}\mathbf{u}\right\|_{\ell^{\infty}}^{2}.\]
In the case \(p=1\), we have by duality
\[\|\Theta_{\alpha,s}U\|_{\mathfrak{S}_{1}} =\sup_{K\in\mathfrak{S}\atop\|K\|_{\mathrm{op}}=1}|\mathrm{Tr}( \Theta_{\alpha,s}UK)|=\sup_{K\in\mathfrak{B}\atop\|K\|_{\mathrm{op}}=1}|\sum_{ n\in\mathbb{Z}}u_{n}\theta_{\alpha,s}(n)K_{n+1,n}+u_{n}\theta_{\alpha,s}(n+1)K_{n,n+1}|\] \[\leq\sum_{n\in\mathbb{Z}}|u_{n}\theta_{\alpha,s}(n)|+\sum_{n\in \mathbb{Z}}|u_{n}\theta_{\alpha,s}(n+1)|\leq\|\theta_{\alpha,s}\mathbf{u}\|_{ \ell^{1}}+\mathrm{e}^{\alpha}\|\theta_{\alpha,s}\mathbf{u}\|_{\ell^{1}}.\]
Here, we used that both \(|K_{n,n+1}|\) and \(|K_{n+1,n}|\) are smaller than \(1\), and (27) for the last inequality.
We conclude that (26) holds for all \(1\leq p\leq\infty\) using Riesz-Thorin interpolation.
### Coercivity of the Hessian at the dimerized configuration
In this section, we prove Proposition 4.1. Recall that
\[H_{\mathbf{t}^{+}}(\mathbf{h},\mathbf{h})=\mu\|\mathbf{h}\|^{2}+2\mathrm{Tr} \left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{1}{z-T^{+}}H\frac{1}{z-T^ {+}}H\mathrm{d}z\right)\]
We already proved that \(H_{\mathbf{t}^{+}}\) is a bounded quadratic form on \(\ell^{2}(\mathbb{Z})\). We now prove that, for the dimerized configuration \(\mathbf{t}^{+}\), the Hessian \(H_{\mathbf{t}^{+}}\) is a coercive bilinear map on \(\ell^{2}(\mathbb{Z})\), namely that there is \(C>0\) so that, for all \(\mathbf{h}\in\ell^{2}(\mathbb{Z})\), we have
\[H_{\mathbf{t}^{+}}(\mathbf{h},\mathbf{h})\geq C\|\mathbf{h}\|_{\ell^{2}}^{2}.\]
By density, it is enough to prove the result for all compactly supported sequences \(\mathbf{h}\). Assume that \(\mathbf{h}\) is such a sequence, so that \(h_{n}=0\) for all \(|n|\geq S\). First, we claim that there is \(C>0\) so that, for all \(L\) large enough, we have
\[H^{(L)}_{\mathbf{t}_{L}^{+}}(\mathbf{h},\mathbf{h})\geq C\|\mathbf{h}\|_{\ell^{2 }}^{2}, \tag{28}\]
where \(H^{(L)}_{\mathbf{t}_{L}^{+}}(\mathbf{h},\mathbf{h})\) is the hessian of the SSH model for the closed \(L=2N\) chain, defined by
\[H^{(L)}_{\mathbf{t}_{L}^{+}}(\mathbf{h},\mathbf{h})=\mu\|\mathbf{h}\|_{\ell^{2 }}^{2}+2\mathrm{Tr}_{L}\left(\frac{1}{2\mathrm{i}\pi}\oint_{\mathscr{C}}\frac{ 1}{z-T_{L}^{+}}H\frac{1}{z-T_{L}^{+}}H\mathrm{d}z\right).\]
Here, \(\mathrm{Tr}_{L}\) is the trace for \(L\times L\) hermitian matrices, \(\mathbf{t}_{L}^{+}\) is the dimerized ground state of the closed \(L\)-chain (\(L\) even), of the dimerized form (see [6])
\[(\mathbf{t}_{L}^{+})_{n}=W_{L}+(-1)^{n}\delta_{L}, \tag{29}\]
and \(T_{L}^{+}\) is the associated \(L\times L\) hermitian matrix. It was proved in [3, 4] that \(W_{L}\to W\) and \(\delta_{L}\to\delta\) as \(L\to\infty\). Actually, the bound (28) was more or less proved in [3], with a constant \(C\) independent of \(L\) for \(L\) large enough. We provide here another proof.
To prove (28), as in [6], we use the convexity of the function \(f:[0,1]\to\mathbb{R}\) defined by
\[[0,1]\ni x\mapsto-\sqrt{x}-\frac{1}{8}x^{2}.\]
As a consequence, the map \(A\mapsto\mathrm{Tr}f(A)\) is convex on the set of hermitian matrices with spectrum in \([0,1]\). This implies that, with \(A=\frac{1}{\|T\|_{\mathrm{op}}^{2}}T^{2}\),
\[-\mathrm{Tr}(\sqrt{T^{2}})\geq-\mathrm{Tr}(\sqrt{\langle T^{2}\rangle})+\frac {1}{8}\frac{1}{\|T\|_{\mathrm{op}}^{3}}\mathrm{Tr}[T^{4}-\langle T^{2}\rangle ^{2}],\]
where \(\langle A\rangle\) is the average of \(A\) over all translations, namely
\[\langle A\rangle=\frac{1}{L}\sum_{k=0}^{L-1}\Theta_{1}^{k}A\Theta_{1}^{-k}, \qquad\Theta_{1}=\begin{pmatrix}0&1&0&\cdots&0\\ 0&0&1&0&\cdots\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&0&1\\ 1&0&\cdots&0&0\end{pmatrix}.\]
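In more detail, \(f''(x)=\tfrac{1}{4}x^{-3/2}-\tfrac{1}{4}\geq 0\) on \([0,1]\), so \(f\) is convex there, and \(\mathrm{Tr}f(\Theta_{1}^{k}A\Theta_{1}^{-k})=\mathrm{Tr}f(A)\) since \(\Theta_{1}\) is unitary. The convexity of \(A\mapsto\mathrm{Tr}f(A)\) then gives
\[\mathrm{Tr}f\big{(}\langle A\rangle\big{)}\leq\frac{1}{L}\sum_{k=0}^{L-1}\mathrm{Tr}f\big{(}\Theta_{1}^{k}A\Theta_{1}^{-k}\big{)}=\mathrm{Tr}f(A),\qquad A=\frac{1}{\|T\|_{\mathrm{op}}^{2}}T^{2},\]
and separating the two terms of \(f\), then multiplying by \(\|T\|_{\mathrm{op}}\), yields the inequality above.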
We deduce that
\[\mathcal{E}^{(L)}(\mathbf{t})\geq\frac{\mu}{2}\sum_{n=1}^{L}(t_{n}-1)^{2}- \mathrm{Tr}_{L}(\sqrt{\langle T^{2}\rangle})+\frac{1}{8}\frac{1}{\|T\|^{3}} \mathrm{Tr}_{L}[T^{4}-\langle T^{2}\rangle^{2}]. \tag{30}\]
Since the operator \(T_{L}^{\pm}\) always corresponds to a 2-periodic configuration, it holds
\[\Theta_{1}\left(T_{L}^{\pm}\right)^{2}\Theta_{1}^{*}=\left(T_{L}^{\pm}\right)^ {2}\quad\text{and, in particular}\quad\left(T_{L}^{\pm}\right)^{2}=\left\langle \left(T_{L}^{\pm}\right)^{2}\right\rangle.\]
We deduce that there is equality in (30) for \(\mathbf{t}_{L}^{+}\). We also deduce from (30) that, for all \(\mathbf{t}\), we have
\[\mathcal{E}^{(L)}(\mathbf{t})-\mathcal{E}^{(L)}(\mathbf{t}_{L}^{+})\geq\frac{ 1}{8}\frac{1}{\|T\|_{\mathrm{op}}^{3}}\mathrm{Tr}[T^{4}-\langle T^{2}\rangle ^{2}].\]
We apply this inequality for \(\mathbf{t}=\mathbf{t}_{L}^{+}+s\mathbf{h}\) and get that
\[\mathcal{E}^{(L)}(\mathbf{t}_{L}^{+}+s\mathbf{h})-\mathcal{E}^{(L)}(\mathbf{t }_{L}^{+})\geq\frac{1}{8}\frac{1}{\|T_{L}^{+}+sH\|_{\mathrm{op}}^{3}}\mathrm{Tr }\left[(T_{L}^{+}+sH)^{4}-\langle(T_{L}^{+}+sH)^{2}\rangle^{2}\right]\]
For the denominator, we use the fact that, for \(s\) small enough, we have \(\|T_{L}^{+}+sH\|_{\mathrm{op}}\leq 2\|T_{L}^{+}\|_{\mathrm{op}}\).
For the numerator, expanding the expression and using that \((T_{L}^{+})^{2}=\langle(T_{L}^{+})^{2}\rangle\), so that \(\mathrm{Tr}_{L}((T_{L}^{+})^{3}H)=\mathrm{Tr}_{L}\left(\langle(T_{L}^{+})^{2}\rangle\langle T_{L}^{+}H\rangle\right)\) and \(\mathrm{Tr}_{L}(\langle(T_{L}^{+})^{2}\rangle\langle H^{2}\rangle)=\mathrm{Tr}_{L}((T_{L}^{+})^{2}H^{2})\) (so that the orders \(O(1)\) and \(O(s)\) vanish), we obtain that
\[\mathcal{E}^{(L)}(\mathbf{t}_{L}^{+}+s\mathbf{h})-\mathcal{E}^{( L)}(\mathbf{t}_{L}^{+}) \geq\frac{s^{2}}{16}\frac{1}{\|T_{L}^{+}\|_{\operatorname{op}}^{ 3}}\text{Tr}_{L}\Big{[}\left(T_{L}^{+}H+HT_{L}^{+}\right)^{2}-\langle T_{L}^{ +}H+HT_{L}^{+}\rangle^{2}\Big{]}+o(s^{2})\] \[=\frac{s^{2}}{16}\frac{1}{\|T_{L}^{+}\|_{\operatorname{op}}^{3}} \left(\left\|T_{L}^{+}H+HT_{L}^{+}\right\|_{\operatorname{\mathfrak{S}}_{2}}^ {2}-\left\|\langle T_{L}^{+}H+HT_{L}^{+}\rangle\right\|_{\operatorname{ \mathfrak{S}}_{2}}^{2}\right)+o(s^{2})\]
The previous computation is valid for all \(\mathbf{h}\). We now use the fact that \(\mathbf{h}\) is compactly supported in \([-S,S]\), and that \(L\gg S\). This allows to prove that the last term is small. More specifically, we have the following.
**Lemma 4.2**.: _For all \(S\in\mathbb{N}\) all \(L\gg S\), all \(\mathbf{t}\in\mathbb{C}^{L}\) and all \(\mathbf{h}\in\mathbb{C}^{L}\) compactly supported in \([-S,S]\), we have_
\[\left\|\langle TH\rangle\right\|_{\operatorname{\mathfrak{S}}_{2}}^{2}\leq \frac{6S}{L}\|\mathbf{t}\|_{\ell^{\infty}}^{2}\|\mathbf{h}\|_{\ell^{2}}^{2}.\]
Proof.: Set \(A=TH\), so that
\[A_{n,n}=t_{n}h_{n}+h_{n-1}t_{n-1},\quad A_{n,n+2}=t_{n+1}h_{n},\quad A_{n,n-2}= t_{n-2}h_{n-1}.\]
The matrix \(\langle A\rangle\) is of the form
\[\langle A\rangle_{n,n}=2a_{0},\quad\langle A\rangle_{n,n+2}=a_{1},\quad\langle A \rangle_{n,n-2}=a_{-1},\qquad\text{with}\quad a_{m}:=\frac{1}{L}\sum_{k=0}^{L- 1}t_{k+m}h_{k}.\]
Using that \(h\) is compactly supported and Cauchy-Schwarz we get
\[|a_{m}|^{2}=\frac{1}{L^{2}}\left(\sum_{k=-S}^{S}t_{k+m}h_{k}\right)^{2}\leq \frac{1}{L^{2}}\|\mathbf{t}\|_{\ell^{\infty}}^{2}\left(\sum_{k=-S}^{S}h_{k} \right)^{2}\leq\frac{1}{L^{2}}\|\mathbf{t}\|_{\ell^{\infty}}^{2}\|\mathbf{h} \|_{\ell^{2}}^{2}S.\]
We obtain
\[\|\langle A\rangle\|_{\mathfrak{S}^{2}}^{2}=\sum_{n,m}\langle A\rangle_{n,m}^{2}=L(2a_{0})^{2}+La_{1}^{2}+La_{-1}^{2}\leq 6L\frac{1}{L^{2}}\|\mathbf{t}\|_{\ell^{\infty}}^{2}\|\mathbf{h}\|_{\ell^{2}}^{2}S.\]
This proves that
\[\mathcal{E}^{(L)}(\mathbf{t}_{L}^{+}+s\mathbf{h})-\mathcal{E}^{(L)}(\mathbf{t}_{L}^{+})\geq\frac{s^{2}}{16}\frac{1}{\|T_{L}^{+}\|_{\mathrm{op}}^{3}}\left(\left\|T_{L}^{+}H+HT_{L}^{+}\right\|_{\mathfrak{S}_{2}}^{2}-\frac{12S}{L}\|\mathbf{t}_{L}^{+}\|_{\ell^{\infty}}^{2}\|\mathbf{h}\|_{\ell^{2}}^{2}\right)+o(s^{2}).\]
Finally, we bound from below the remaining \(\left\|T_{L}^{+}H+HT_{L}^{+}\right\|_{\operatorname{\mathfrak{S}}_{2}}^{2}\). A computation shows that the matrix \(A:=TH+HT\) satisfies
\[A_{n,n}=2(t_{n}h_{n}+t_{n-1}h_{n-1}),\quad A_{n,n+2}=t_{n}h_{n+1}+h_{n}t_{n+1}, \quad A_{n,n-2}=t_{n-1}h_{n-2}+h_{n-1}t_{n-2},\]
and \(A_{i,j}=0\) otherwise. Squaring all terms and summing gives
\[\left\|T_{L}^{+}H+HT_{L}^{+}\right\|_{\operatorname{\mathfrak{S}}_{2}}^{2}= \sum_{n}4(t_{n}h_{n}+t_{n-1}h_{n-1})^{2}+(t_{n}h_{n+1}+h_{n}t_{n+1})^{2}+(t_{n- 1}h_{n-2}+h_{n-1}t_{n-2})^{2}.\]
Expanding, relabelling all sums, and using that \(\mathbf{t}_{L}^{+}\) is dimerized, of the form (29), we obtain
\[\left\|T_{L}^{+}H+HT_{L}^{+}\right\|_{\operatorname{\mathfrak{S}}_{2}}^{2}=2 \sum_{n\in\mathbb{Z}}\left\langle\left(\begin{matrix}h_{n}\\ h_{n+1}\end{matrix}\right),Q_{n}\left(\begin{matrix}h_{n}\\ h_{n+1}\end{matrix}\right)\right\rangle,\quad\text{with}\quad Q_{n}=\begin{pmatrix}2t_{n} ^{2}+t_{n+1}^{2}&3t_{n}t_{n+1}\\ 3t_{n}t_{n+1}&t_{n}^{2}+2t_{n+1}^{2}\end{pmatrix}.\]
We have
\[\operatorname{Tr}(Q_{n})=3(t_{n}^{2}+t_{n+1}^{2})=6(W_{L}^{2}+\delta_{L}^{2})\]
and
\[\det Q_{n}=(2t_{n}^{2}+t_{n+1}^{2})(t_{n}^{2}+2t_{n+1}^{2})-9t_{n}^{2}t_{n+1}^{2 }=2t_{n}^{4}+2t_{n+1}^{4}-4t_{n}^{2}t_{n+1}^{2}=2(t_{n}^{2}-t_{n+1}^{2})^{2}=32 \,W_{L}^{2}\delta_{L}^{2}.\]
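Indeed, for a positive definite \(2\times 2\) matrix the smallest eigenvalue is bounded below by the ratio of the determinant to the trace (since \(\lambda_{\min}\lambda_{\max}=\det\) and \(\lambda_{\max}\leq\mathrm{Tr}\)), so that
\[\lambda_{\min}(Q_{n})\;\geq\;\frac{\det Q_{n}}{\mathrm{Tr}(Q_{n})}\;=\;\frac{32\,W_{L}^{2}\delta_{L}^{2}}{6\,(W_{L}^{2}+\delta_{L}^{2})}\;=\;\frac{16\,W_{L}^{2}\delta_{L}^{2}}{3\,(W_{L}^{2}+\delta_{L}^{2})}.\]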
Since \(\delta_{L}\to\delta>0\) and \(W_{L}\to W\), there is a constant \(C>0\), independent of \(n\) and of \(L\) (for \(L\) large enough), such that \(Q_{n}\geq C>0\). So
\[\left\|T_{L}^{+}H+HT_{L}^{+}\right\|_{\mathfrak{S}_{2}}^{2}\geq 2C\|\mathbf{h} \|_{\ell^{2}}^{2}.\]
Altogether, we proved that for \(L\) large enough, we have
\[\mathcal{E}^{(L)}(\mathbf{t}_{L}^{+}+s\mathbf{h})-\mathcal{E}^{(L )}(\mathbf{t}_{L}^{+}) \geq\frac{s^{2}}{16}\frac{1}{\|T_{L}^{+}\|_{\mathrm{op}}^{3}} \left(2C-\frac{12S}{L}\|\mathbf{t}_{L}^{+}\|_{\ell^{\infty}}^{2}\right)\| \mathbf{h}\|_{\ell^{2}}^{2}+o(s^{2})\] \[\geq\widetilde{C}s^{2}\|\mathbf{h}\|_{\ell^{2}}^{2}+o(s^{2}),\]
where \(\widetilde{C}\) is independent of \(L\), for \(L\) large enough (\(L\geq L_{0}\), where \(L_{0}\) depends on the support \(S\) of \(\mathbf{h}\)). This proves the lower bound (28) for \(H_{\mathbf{t}_{L}^{+}}^{(L)}\).
To conclude the proof, we note that
\[\left|\mathrm{Tr}_{L}\left(\frac{1}{z-T^{+}}H\frac{1}{z-T^{+}}H\right)- \mathrm{Tr}_{L}\left(\frac{1}{z-T_{L}^{+}}H\frac{1}{z-T_{L}^{+}}H\right)\right| \leq C\|H\|_{\mathfrak{S}_{2}}^{2}\|T^{+}-T_{L}^{+}\|_{\mathrm{op,L}}.\]
Since \(W_{L}\to W\) and \(\delta_{L}\to\delta\), we have \(\|T^{+}-T_{L}^{+}\|_{\mathrm{op,L}}\to 0\) as \(L=2N\) (even) goes to infinity. So for \(L\) large enough, we have
\[\left|H_{\mathbf{t}^{+}}(\mathbf{h},\mathbf{h})-H_{\mathbf{t}_{L}^{+}}^{(L)}( \mathbf{h},\mathbf{h})\right|\leq\frac{C}{2}\|\mathbf{h}\|_{\ell^{2}}^{2},\]
where \(C\) is the bound in (28). This proves \(H_{\mathbf{t}^{+}}(\mathbf{h},\mathbf{h})\geq\frac{C}{2}\|\mathbf{h}\|_{\ell^ {2}}^{2}\), where the constant \(C\) is independent of \(\mathbf{h}\). We proved the bound for \(\mathbf{h}\) compactly supported, but by density, it can be extended for all \(\mathbf{h}\in\ell^{2}(\mathbb{Z})\), hence the coercivity of \(H_{\mathbf{t}^{+}}\).
|
2306.06073 | Feature Selection on Sentinel-2 Multi-spectral Imagery for Efficient
Tree Cover Estimation | This paper proposes a multi-spectral random forest classifier with suitable
feature selection and masking for tree cover estimation in urban areas. The key
feature of the proposed classifier is filtering out the built-up region using
spectral indices followed by random forest classification on the remaining mask
with carefully selected features. Using Sentinel-2 satellite imagery, we
evaluate the performance of the proposed technique on a specified area
(approximately 82 acres) of Lahore University of Management Sciences (LUMS) and
demonstrate that our method outperforms a conventional random forest classifier
as well as state-of-the-art methods such as European Space Agency (ESA)
WorldCover 10m 2020 product as well as a DeepLabv3 deep learning architecture. | Usman Nazir, Momin Uppal, Muhammad Tahir, Zubair Khalid | 2023-05-31T20:27:10Z | http://arxiv.org/abs/2306.06073v1 | # Feature Selection on Sentinel-2 Multi-Spectral Imagery for Efficient Tree Cover Estimation
###### Abstract
This paper proposes a multi-spectral random forest classifier with suitable feature selection and masking for tree cover estimation in urban areas. The key feature of the proposed classifier is filtering out the built-up region using spectral indices, followed by random forest classification on the remaining mask with carefully selected features. Using Sentinel-2 satellite imagery, we evaluate the performance of the proposed technique on a specified area (approximately 82 acres) of Lahore University of Management Sciences (LUMS) and demonstrate that our method outperforms a conventional random forest classifier as well as state-of-the-art methods such as the European Space Agency (ESA) WorldCover \(10\)m \(2020\) product and a DeepLabv3 deep learning architecture.
Usman Nazir, Momin Uppal, Muhammad Tahir, and Zubair Khalid†
Department of Electrical Engineering, Syed Babar Ali School of Science and Engineering
Lahore University of Management Sciences (LUMS), Lahore, Pakistan
{usman.nazir, momin.uppal, tahir, zubair.khalid}@lums.edu.pk
Keywords: Random Forest Classifier, Spectral Indices, Sentinel-2 Satellite, European Space Agency (ESA) WorldCover, DeepLabv3
Footnote †: We acknowledge the support of the Higher Education Commission of Pakistan under grant GCF-521.
## 1 Introduction
The presence of easily accessible multispectral satellite imagery has expanded the range of potential applications across diverse fields. An important example is automated detection of trees and green spaces, which are significant contributors to ecosystem services such as air purification and carbon sequestration. Recent studies include [1] and [2] for global monitoring of environment and forest cover using Sentinel-2 imagery. The Copernicus Sentinel-2B satellite, launched in 2017, provides 13 bands with spatial resolutions from \(10\) m to \(60\) m. The high spatial and temporal resolution of data from this satellite is specifically designed for vegetation monitoring. For tree cover estimation, a broad range of methodologies have been presented in the literature, e.g., [1, 3, 4, 5]. The authors in [3] proposed a data fusion method for satellite imagery of different spatial resolutions for forest-type mapping. Forest cover change is mapped in [4] using imagery with a spatial resolution of \(25\) m to \(30\) m. A spatio-temporal study on forest cover change using GIS and remote sensing techniques in an Ethiopian valley is presented in [5]. In [1], a simple tree cover (referred to as 'forest cover') approach using three different land cover (LC) products is employed in Finland. Clearly, most of these approaches focus on forest mapping; a gap exists in urban tree cover estimation in developing countries with low resolution imagery.
In this paper, we propose a multi-spectral classifier (that uses a mixture of spectral bands _and_ indices) for tree cover estimation in urban areas. The key aspects of the proposed classifier include a masking stage for filtering out built-up areas, followed by a random forest classifier operating on appropriately selected features. For performance evaluation, we manually annotate \(3768\) trees in Lahore, Pakistan1. We demonstrate that, on account of suitable feature selection and masking, our proposed approach applied to low resolution imagery achieves a higher accuracy level compared to that obtained by the European Space Agency (ESA) WorldCover product [6] as well as a more computationally demanding deep learning architecture, DeepLabv3 [7], applied on high-resolution imagery.
Footnote 1: This dataset is being made publicly available at [https://city.lums.edu.pk/products/](https://city.lums.edu.pk/products/)
The subsequent sections of this paper are structured as follows: Section \(2\) delves into a comprehensive analysis of the methodology, while Section \(3\) showcases the evaluation results comparing our proposed methodology with state-of-the-art models. Finally, Section 4 concludes the paper.
## 2 Methodology
The proposed methodology, illustrated in Fig. 1, consists of four stages. These include 1) Pre-processing, 2) Feature selection, 3) Masking, and finally 4) Random Forest Classification. The details of each stage are provided in the text below.
### Pre-processing
We divide the pre-processing of data into multiple steps. Initially, the images from a multi-spectral satellite containing less than \(10\%\) cloud cover for the region of interest (LUMS) are passed through a cloud masking operation that removes cloud cover from these images. Next, a median of these images is taken for each month. Finally, multiple images are stacked together to generate a single combined image of the region of interest.
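The paper does not state which tooling was used for this step. As one illustration only, a comparable workflow (cloud filtering, monthly medians, band stacking) could be sketched with the Google Earth Engine Python API as follows; the region geometry and the year are placeholders, and the explicit per-pixel cloud mask is omitted here.

```python
import ee

ee.Initialize()

# Placeholder rectangle near the LUMS campus; not the paper's exact study geometry.
roi = ee.Geometry.Rectangle([74.40, 31.46, 74.42, 31.48])

# Sentinel-2 surface-reflectance scenes over the ROI with less than 10% cloud cover.
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
        .filterBounds(roi)
        .filterDate('2021-01-01', '2021-12-31')
        .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 10)))

# One median composite per month, then stack the monthly composites into a single multi-band image.
monthly = [s2.filter(ee.Filter.calendarRange(m, m, 'month')).median() for m in range(1, 13)]
stacked = ee.ImageCollection.fromImages(monthly).toBands().clip(roi)
```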
### Feature selection
For classification, we included eight bands of Sentinel-2 imagery as the feature set. These include B2 (Blue), B3 (Green), B4 (Red), B7 (Red Edge 3), B8 (NIR), B8A (Red Edge 4), B11 (SWIR 1) and B12 (SWIR 2). In addition, we also chose six spectral indices in our feature set. These include the Normalized Difference Vegetation Index (NDVI) [8], Enhanced Vegetation Index (EVI) [9], Normalized Difference Built-up Index (NDBI) [10], Normalized Difference Moisture Index (NDMI) [11], Leaf Area Index (LAI) [12] and Soil Adjusted Vegetation Index (SAVI) [13]. In general, regions with tree cover typically exhibit high vegetation indices (EVI, NDVI), NDMI, LAI, and SAVI, while showing notably low values for NDBI. Some background about these indices is given below.
**NDVI**: This index [8] describes the difference between visible and near-infrared reflectance of vegetation cover and can be used to estimate the density of green on an area of land. It is computed from the NIR and Red band measurements as follows
\[\text{NDVI}=\frac{\text{NIR}-\text{Red}}{\text{NIR}+\text{Red}} \tag{1}\]
**EVI and LAI:** EVI [9] is similar to NDVI and can be used to quantify greenness of vegetation. However, EVI corrects for some atmospheric conditions and canopy background noise and is more sensitive in areas with dense vegetation. It is computed as
\[\text{EVI}=2.5\times\frac{\text{NIR}-\text{Red}}{\text{NIR}+6\times\text{ Red}-7.5\times\text{Blue}+1} \tag{2}\]
On the other hand, LAI [12] is used to estimate crop growth and yield through the following empirical formula
\[\text{LAI}=3.618\times\text{EVI}-0.118 \tag{3}\]
**SAVI**: This index [13] attempts to minimize soil brightness influences using a soil-brightness correction factor. This is often used in arid regions where vegetative cover is low, and it outputs values between \(-1\) and \(1\) through the following relationship
\[\text{SAVI}=\frac{0.5\times(\text{NIR}-\text{Red})}{\text{NIR}+\text{Red}+0.5} \tag{4}\]
**NDWI**: This is a satellite-derived index [14] from the NIR and the SWIR channels. The NDWI is used to monitor changes related to water content in water bodies as they strongly absorb light in visible to infrared electromagnetic spectrum.
\[\text{NDWI}=\frac{\text{NIR}-\text{SWIR1}}{\text{NIR}+\text{SWIR1}} \tag{5}\]
**NDBI:** This index [15] uses the NIR and SWIR bands to emphasize manufactured built-up areas. It aims to mitigate the effects of terrain illumination differences as well as atmospheric effects.
\[\text{NDBI}=\frac{\text{SWIR}-\text{NIR2}}{\text{SWIR}+\text{NIR2}} \tag{6}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
**ROI** & **Model** & **Pred. area (acres)** & **Masking** & **Spectral indices** & **Pixel-wise Test Accuracy (\%)** & Kappa Score \\ \hline LUMS & RF-spectral-bands & 29.5 & No & No & 0.93 & 0.81 \\ \hline LUMS & RF-spectral-indices & 28 & No & Yes & 0.95 & 0.88 \\ \hline LUMS & **Proposed** & 25 & Yes & Yes & **0.99** & **0.92** \\ \hline \hline LUMS & ESA WorldCover Product [6] & 16 & - & - & 0.74 & - \\ LUMS & DeepLabv3 [7] & 28 & No & No & 0.80 & - \\ \hline \end{tabular}
\end{table}
Table 1: Tree cover area predicted using Sentinel-2 imagery and the RF classifier with various feature selection techniques. The Lahore University of Management Sciences (LUMS) study region spans \(82.165\) acres with \(23\) acres of actual tree cover.
Figure 1: Proposed methodology for tree cover estimation with feature selection of spectral bands and indices.
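All of the selected indices are simple per-pixel arithmetic on band values. A minimal sketch of Eqs. (1)-(6) is shown below; the inputs are assumed to be same-shape NumPy reflectance arrays, and using the SWIR1 and NIR bands for NDBI (Eq. (6) writes SWIR and NIR2) is an assumption made here for illustration.

```python
def spectral_indices(blue, red, nir, swir1):
    """Per-pixel spectral indices of Eqs. (1)-(6) from same-shape reflectance arrays."""
    ndvi = (nir - red) / (nir + red)                             # Eq. (1)
    evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)   # Eq. (2)
    lai = 3.618 * evi - 0.118                                    # Eq. (3)
    savi = 0.5 * (nir - red) / (nir + red + 0.5)                 # Eq. (4)
    ndwi = (nir - swir1) / (nir + swir1)                         # Eq. (5)
    ndbi = (swir1 - nir) / (swir1 + nir)                         # Eq. (6); band choice assumed
    return {"NDVI": ndvi, "EVI": evi, "LAI": lai, "SAVI": savi, "NDWI": ndwi, "NDBI": ndbi}
```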
### Masking
The masking process involves the following two steps.
_Applying the Vegetation Index._ The EVI or NDVI values are calculated for each pixel in the satellite imagery. These values indicate the presence and density of vegetation. In this case, a threshold of 0.2 is set, implying that any pixel with an EVI or NDVI value equal to or below 0.2 is considered non-vegetated or sparsely vegetated. Such regions are likely to include built-up areas, as they have less vegetation cover.
_Utilizing the Built-up Index._ Simultaneously, the NDBI values are computed for each pixel. This index highlights the presence and extent of built-up areas. High positive NDBI values indicate the dominance of built-up surfaces, while low or negative values represent non-built-up or natural areas.
By combining the results of both the vegetation index and built-up index, the filtering process identifies and excludes pixels with low vegetation (pixels for which both EVI and NDVI are less than or equal to 0.2) _and_ high built-up signatures (pixels that have positive NDBI values).
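A sketch of this filtering rule, using the thresholds stated above and index arrays such as those from the previous sketch, might look like:

```python
# Pixels flagged as built-up: low vegetation (EVI and NDVI <= 0.2) together with a positive NDBI.
built_up = (evi <= 0.2) & (ndvi <= 0.2) & (ndbi > 0)

# Only the remaining (non-built-up) pixels are passed on to the random forest classifier.
keep_mask = ~built_up
```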
### Random Forest (RF) Classification
The masking operation described above aims to retain only the non-built-up or natural regions for input to the classification module. For this purpose, we utilize an RF classifier, which is an example of ensemble learning where each model is a decision tree. Ensemble learning creates a stronger model by aggregating the predictions of multiple weak models, such as decision trees in our case. To train the RF classifier, we need to have at least two classes.
We combine multiple sample points along with their corresponding class labels (representing trees or non-trees), divide the samples into an 80% training set and a 20% validation set, train a random forest classifier with the features described above, and then use the trained classifier to classify the input image. In the process of an RF model training, the user defines the number of features at each node in order to generate a tree. The classification of a new dataset is done by passing down each case of the dataset to each of the grown trees, then the forest chooses a class having the most votes of the trees for that case. More details on RF can be found in Breiman [16]. The main motivation behind choosing RF for this study is its ability to efficiently handle large and high dimensional datasets [17, 18].
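The implementation is not named in the paper; a minimal scikit-learn sketch of the training procedure described above is given below, where the placeholder feature matrix, labels, and the number of trees are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: per-pixel feature vectors (8 selected bands + 6 indices) and tree/non-tree labels.
rng = np.random.default_rng(0)
X = rng.random((5000, 14))
y = rng.integers(0, 2, size=5000)

# 80% training / 20% validation split, as described in the text.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
print("validation accuracy:", rf.score(X_val, y_val))
```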
## 3 Evaluation
The proposed methodology is applied to satellite imagery for the year 2021, and its performance compared to other benchmarks is shown in Table 1. As the results indicate, RF with all multi-spectral bands performs better than the ESA WorldCover product [6] and DeepLabv3 [7]. RF with spectral indices achieves higher accuracy compared to RF with only spectral bands. Finally, the proposed model accomplishes higher pixel-wise accuracy and Kappa score compared to all other models (see Table 2).
Results indicating the effect of feature selection with the proposed methodology are provided in Table 2. Clearly, as the feature set grows, the pixel-wise accuracy and Kappa score increase: accuracy improves steadily as more of the selected spectral indices are added. We choose the Kappa coefficient as a performance metric because it represents the extent to which classes on the ground are correct representations of the classes on the map.
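Both reported metrics can be computed per pixel from reference and predicted labels; a short scikit-learn sketch is shown below (the label arrays are placeholders, and the paper does not state which implementation was used).

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder labels: y_true are reference labels for validation pixels, y_pred are predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]

print("pixel-wise accuracy:", accuracy_score(y_true, y_pred))
print("kappa score:", cohen_kappa_score(y_true, y_pred))
```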
Finally, qualitative results are illustrated in Fig. 2. The ground truth tree cover of the LUMS study region is \(23\) acres while the predicted area using the proposed model is \(25\) acres. It is important to note that our proposed model, operating on low resolution imagery with suitable feature selection and masking operations, performs better than a DeepLabv3 [7] deep learning architecture trained on high resolution imagery, despite the computational complexity of the former being extremely low compared to the latter.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**(RF + Masking +) Feature set** & **Pixel-wise Test Accuracy (\%)** & Kappa Score \\ \hline Eight multispectral bands + NDVI & 0.96 & 0.76 \\ \hline Eight multispectral bands + NDVI + NDWI + NDBI + EVI & 0.97 & 0.80 \\ \hline Eight multispectral bands \& All spectral indices & **0.99** & **0.92** \\ \hline \end{tabular}
\end{table}
Table 2: Pixel-wise accuracy and Kappa score of the proposed model with different feature sets on the LUMS study region.
Figure 2: Qualitative results using feature selection on Sentinel-2 multi-spectral imagery for efficient tree cover estimation.
## 4 Conclusion
The paper proposes a methodology for estimating urban tree cover using RF classification with appropriately selected multispectral features and masking. The proposed methodology exhibits superior performance compared to classical RF classifiers that solely utilize spectral bands, as well as surpassing state-of-the-art models such as the European Space Agency (ESA) WorldCover \(10\)m \(2020\) product [6] and DeepLabv3 [7] deep learning architecture trained on high resolution imagery. Our future work aims to apply the proposed technique to estimate tree cover across entire cities in Pakistan.
|
2309.10094 | Data Formulator: AI-powered Concept-driven Visualization Authoring | With most modern visualization tools, authors need to transform their data
into tidy formats to create visualizations they want. Because this requires
experience with programming or separate data processing tools, data
transformation remains a barrier in visualization authoring. To address this
challenge, we present a new visualization paradigm, concept binding, that
separates high-level visualization intents and low-level data transformation
steps, leveraging an AI agent. We realize this paradigm in Data Formulator, an
interactive visualization authoring tool. With Data Formulator, authors first
define data concepts they plan to visualize using natural languages or
examples, and then bind them to visual channels. Data Formulator then
dispatches its AI-agent to automatically transform the input data to surface
these concepts and generate desired visualizations. When presenting the results
(transformed table and output visualizations) from the AI agent, Data
Formulator provides feedback to help authors inspect and understand them. A
user study with 10 participants shows that participants could learn and use
Data Formulator to create visualizations that involve challenging data
transformations, and presents interesting future research directions. | Chenglong Wang, John Thompson, Bongshin Lee | 2023-09-18T19:06:29Z | http://arxiv.org/abs/2309.10094v2 | # Data Formulator: AI-powered Concept-driven Visualization Authoring
###### Abstract
With most modern visualization tools, authors need to transform their data into tidy formats to create visualizations they want. Because this requires experience with programming or separate data processing tools, data transformation remains a barrier in visualization authoring. To address this challenge, we present a new visualization paradigm, _concept binding_, that separates high-level visualization intents and low-level data transformation steps, leveraging an AI agent. We realize this paradigm in Data Formulator, an interactive visualization authoring tool. With Data Formulator, authors first define _data concepts_ they plan to visualize using natural languages or examples, and then bind them to visual channels. Data Formulator then dispatches its AI-agent to automatically transform the input data to surface these concepts and generate desired visualizations. When presenting the results (transformed table and output visualizations) from the AI agent, Data Formulator provides feedback to help authors inspect and understand them. A user study with 10 participants shows that participants could learn and use Data Formulator to create visualizations that involve challenging data transformations, and presents interesting future research directions.
AI, visualization authoring, data transformation, programming by example, natural language, large language model
## 1 Introduction
Most modern visualization authoring tools (e.g., Charticulator [41], Data Illustrator [29], Lyra [44]) and libraries (e.g., ggplot2 [55], Vega-Lite [46]) expect tidy data [56], where every variable to be visualized is a column and each observation is a row. When the input data is in the tidy format, authors simply need to bind data columns to visual channels (e.g., Date \(\mapsto\)\(x\)-axis, Temperature \(\mapsto\)\(y\)-axis, City \(\mapsto\) color in Fig. 1). Otherwise, they need to prepare the data, even if the original data is clean and contains all information needed [3]. Authors usually rely on data transformation libraries (e.g., tidyverse [57], pandas [35]) or separate interactive tools (e.g., Wrangler [19]) to transform data into the appropriate format. However, authors need either programming experience or tool expertise to transform data, and they have to withstand the overhead of switching between visualization and data transformation steps. The challenge of data transformation remains a barrier in visualization authoring.
To address the data transformation challenge, we explore a fundamentally different approach for visualization authoring, leveraging an AI agent. We separate the high-level visualization intent "_what to visualize_" from the low-level data transformation steps of "_how to format data to visualize_," and automate the latter to reduce the data transformation burden. Specifically, we support two key types of data transformations (and their combinations) needed for visualization authoring:
* **Reshaping**: A variable to be visualized is spread across multiple columns or one column includes multiple variables. For example, if authors want to create a different scatter plot from the table in Fig. 1 by mapping Seattle and Atlanta temperatures to \(x,y\)-axes (Fig. 2-2), they need to first "pivot" the table from long to wide format, because both variables of interest are stored in the Temperature column and are not readily available.
* **Derivation**: A variable needs to be extracted or derived from one or more existing columns. For example, if authors want to create a bar chart to show daily temperature differences between two cities (Fig. 2-2) and a histogram to count the number of days which city is warmer (Fig. 2-3), they need to derive the temperature difference and the name of the warmer city from the two cities' temperature columns, and map them to the \(y\)-axis and \(x\)-axis, respectively, and the city name to color channels of the corresponding charts. The derivation is also needed when the variable to be visualized requires analytical computation (e.g., aggregation, moving average, percentile) across multiple rows from a column in the table. For example, to plot a line chart to visualize the 7-day moving averages of Seattle temperatures (Fig. 2-3), the authors need to calculate the moving average using a window function and map it to \(y\)-axis with Date on \(x\)-axis.
In this paper, we introduce Data Formulator, an interactive visualization authoring tool that embodies a new paradigm, _concept binding_. To create a visualization with Data Formulator, authors provide their visualization intent by binding data concepts to visual channels. Upon loading of a data table, existing data columns are provided as known data concepts. When the required data concepts are not available to author a given chart, the authors can create the concepts: either using natural language prompts (for derivation) or by providing examples (for reshaping). Data Formulator handles these two cases differently, with different styles of input and feedback, and we provide a detailed description of how they are handled in Sec. 2. Once the necessary data concepts are available, the authors can select a chart type (e.g., scatter plot, histogram) and map data concepts to desired visual channels. If needed, Data Formulator dispatches the backend AI agent to infer necessary data transformations to instantiate these new concepts based on the input data and creates candidate visualizations. Because the authors' high-level specifications can be ambiguous and Data Formulator may generate multiple candidates, Data Formulator provides feedback to explain and compare the results. With this feedback, the authors can inspect, disambiguie, and refine the suggested visualizations. After that, they can reuse or create additional data concepts to continue their visualization authoring process.
We also report a chart reproduction study conducted with 10 participants to gather feedback on the new concept binding approach that employs an AI agent, and to evaluate the usability of Data Formulator. After an hour-long tutorial and practice session, most participants could create desired charts by creating data concepts--both with derivation and reshaping transformations. We conclude with a discussion on the lessons learned from the design and evaluation of Data Formulator, as well as important future research directions.
Fig. 1: A dataset of Seattle and Atlanta daily temperatures in 2020 (left) and a scatter plot that visualizes them by mapping Date to \(x\)-axis, Temperature to \(y\)-axis, and City to color (right).
## 2 Illustrative Scenarios
In this section, we illustrate users' experiences creating the visualizations in Figs. 1 and 2 using programs and Data Formulator, starting from the initial input data in Fig. 1. We refer to this dataset as _df_ in this section.
### Experience with Programming
We first illustrate how an experienced data scientist, Eunice, uses programming to create the desired visualizations with pandas and Altair libraries in Python.
**Daily Temperature Trends.** Eunice starts with the scatter plot in Fig. 1. Because _df_ is in the tidy format with Date, City, and Temperature available, Eunice needs no data transformation and writes a simple Altair program to create the plot:
```python
alt.Chart(df).mark_circle().encode(x='Date', y='Temperature', color='City')
```
This program calls the Altair library (alt), selects the input dataset df and the scatter plot function mark_circle, and maps columns to \(x,y\) and color channels. It renders the desired scatter plot in Fig. 1.
**Seattle vs. Atlanta Temperatures.** To make a more direct comparison of two cities' temperatures, Eunice wants to create a different scatter plot (Fig. 2-1) by mapping Seattle and Atlanta temperatures to \(x,y\)-axes. However, Seattle and Atlanta temperatures are not available as columns in _df_. She therefore needs to transform _df_ to surface them. Because _df_ is in the "long" format, where temperatures of both cities are stored in one column Temperature, she needs to pivot the table to the "wide" format. Eunice switches to the data transformation step and uses the pivot function from the pandas library to reshape _df_ (Fig. 3). This program populates Seattle and Atlanta as new column names from the City column, and their corresponding Temperature values are moved to these new columns by Date. With _df2_, Eunice creates the desired visualization, which maps Seattle and Atlanta to \(x,y\)-axes of the scatter plot with the following program:
```python
alt.Chart(df2).mark_circle().encode(x='Seattle', y='Atlanta')
```
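The pivot step itself is shown only in Fig. 3; a minimal pandas sketch consistent with its description (column names taken from Fig. 1) might look like:

```python
# Reshape df from the long format of Fig. 1 to a wide table with one temperature column per city.
df2 = df.pivot(index='Date', columns='City', values='Temperature').reset_index()
```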
**Temperature Differences.** Eunice wants to create two visualizations to show how much warmer Atlanta is compared to Seattle: a bar chart to visualize daily temperature differences (Fig. 2-1) and a histogram to show the number of days each city is warmer (Fig. 2-1). Again, because necessary fields Difference and Warmer are not in _df2_, Eunice needs to transform the data. This time, she writes a program to perform column-wise computation, which extends _df2_ with two new columns Warmer and Difference (Fig. 4). Eunice then creates the daily temperature differences chart by mapping Date and Difference to \(x,y\)-axes and the histogram by mapping Warmer to \(x\)-axis and the aggregation function, count(), to \(y\)-axis to calculate the number of entries.
```python
# extend df2 with new columns 'Difference' and 'Warmer'
df2['Difference'] = df2['Seattle'] - df2['Atlanta']
df2['Warmer'] = df2['Difference'].apply(
    lambda x: 'Seattle' if x > 0 else ('Atlanta' if x < 0 else 'Same'))
# create the bar chart
alt.Chart(df2).mark_bar().encode(x='Date', y='Difference', color='Warmer')
# create the histogram
alt.Chart(df2).mark_bar().encode(x='Warmer', y='count()', color='Warmer')
```
**Remark.** In all cases, Eunice can specify visualizations using simple Altair programs by mapping data columns to visual channels. However, data transformation steps make the visualization process challenging. Eunice needs to choose the right type of transformation based on the input data and desired visualization (e.g., creating the scatter plot in Fig. 1 from _df2_ would require unpivot instead). Furthermore, Eunice needs knowledge about pandas to choose the right function and parameters per task (e.g., rolling will not fit if Eunice wants to calculate moving average for each city in _df_). Eunice's programming experience and data analysis expertise allowed her to successfully complete all tasks. But a less experienced data scientist, Megan, finds this process challenging. Megan decides to use Data Formulator to reduce the data transformation overhead.
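For reference, the two pandas steps alluded to in the remark, unpivoting df2 back to the long format and computing a per-city moving average with a grouped window, could be sketched roughly as follows (column names follow Fig. 1; this is an illustration, not code from the paper):

```python
# Unpivot df2 back to the long format of Fig. 1.
long_df = df2.melt(id_vars='Date', value_vars=['Seattle', 'Atlanta'],
                   var_name='City', value_name='Temperature')

# A per-city moving average needs a grouped window; a plain rolling() over the long table would mix cities.
moving_avg = (long_df.groupby('City')['Temperature']
                     .rolling(window=7, center=True).mean())
```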
### Experience with Data Formulator
Data Formulator (Fig. 5) has a similar interface as "shelf-configuration"-style visualization tools like Tableau or Power BI. But unlike these tools that support only mappings from input data columns to visual channels, Data Formulator enables authors to create and derive new data concepts and map them to visual channels to create visualizations _without requiring manual data transformation_.
**Daily Temperature Trends.** Once Megan loads the input data (Fig. 1), Data Formulator populates existing data columns (Date, City, and Temperature) as known _data concepts_ in the Concept Shelf. Because all three data concepts are already available, no data transformation is needed. Megan selects the visualization type "Scatter Plot" and maps these data concepts to \(x,y\) and color channels in Chart Builder through drag-and-drop interaction. Data Formulator then generates the desired scatter plot.
**Seattle vs. Atlanta Temperatures.** To create the second scatter plot (Fig. 4-1), Megan needs to map Seattle and Atlanta temperatures to \(x,y\)-axes of a scatter plot. Because Seattle and Atlanta temperatures are not available as concepts yet, Megan starts out by creating a new data concept Atlanta Temp (Fig. 6-1): she clicks the new button in the Concept Shelf, which opens a concept card that asks her to name the new concept and provide some example values; Megan provides four Atlanta temperatures (45, 47, 56, 41) from the input data as examples and saves it. Similarly, Megan creates another new concept Seattle Temp. Because Data Formulator's current knowledge of them is limited to their names and example values, both concepts are listed as an unknown concept for now. (They will be resolved later when more information is provided.)
With these new concepts and the Scatter Plot selected, Megan maps new data concepts Seattle Temp and Atlanta Temp to \(x,y\)-axes (Fig. 6-1), and then clicks the FORMULATE button to let Data Formulator formulate the data and instantiate the chart. Based on the visualization spec, Data Formulator realizes that the two unknown concepts are related to each other but not yet certain how they relate to the input data. Thus, Data Formulator prompts Megan with an example table to complete: each row in the example table will be a data point in the desired scatter plot. Megan needs to provide at least two data points from the input data to guide Data Formulator on how to generate this transform (Fig. 6-1). Here, Megan provides the temperatures of Atlanta and Seattle on 01/01/2020 and 01/02/2020 from the table Fig. 1. When Megan submits the example, Data Formulator infers a program that can transform the input data to generate a new table with fields Atlanta Temp and Seattle Temp that subsumes the example table provided by Megan. Data Formulator generates the new table and renders the desired scatter plot (Fig. 6-1). Megan inspects the derived table and visualization and accepts them as correct.
**Temperature Differences.** To create a bar chart and a histogram to visualize temperature differences between the two cities, Megan needs two new concepts, Difference and Warmer. This time, Megan notices that both concepts can be _derived_ from existing fields based on column-wise mappings, and thus she uses the "derive" function of Data Formulator (Fig. 7). Megan first clicks the "derive new concept" option on the existing concept Seattle Temp, which opens up a concept card that lets her describe the transformation she wants using natural language. Megan selects Seattle Temp and Atlanta Temp as the "derived from" concepts, provides a name Difference for the new concept, and describes the transform using natural language, "Calculate Seattle Atlanta temp diff." Megan then clicks the generate button and Data Formulator dispatches its backend AI agent to generate code. Data Formulator returns two code candidates and presents the first one in the concept card. Megan opens up the dialog to inspect both candidates and learns that because her description did not clearly specify whether she wants the difference or its absolute value, Data Formulator returns both options as candidates. After inspecting the example table and the transformation code provided by Data Formulator, Megan confirms the first candidate and saves the concept Difference. Similarly, Megan creates a concept, Warmer, from Seattle Temp and Atlanta Temp with the description "check which city is warmer, Atlanta, Seattle, or same." Data Formulator applies the data transformation on top of the derived table from the last task and displays the extended table in Data View (Fig. 5). Because both concepts are now ready to use, Megan maps them to Chart Builder to create the desired visualizations (Fig. 8).
Fig. 5: Data Formulator UI. After loading the input data, the authors interact with Data Formulator in four steps: (1) in the Concept Shelf, create (e.g., Seattle and Atlanta) or derive (e.g., Difference, Warmer) new data concepts they plan to visualize, (2) encode data concepts to visual channels of a chart using Chart Builder and formulate the chart, (3) inspect the derived data automatically generated by Data Formulator, and (4) examine and save generated visualizations. Throughout the process, Data Formulator provides feedback to help authors understand generated data and visualizations.
**7-day Moving Average of Seattle's Temperature.** Last, Megan needs to create a line chart with 7-day moving average temperatures. Because the moving average can be derived from the Seattle Temp column, Megan again chooses to use the derive function. Megan starts with a brief description "calculate 7-day moving avg" and calls Data Formulator to generate the desired transformation. Upon inspection, Megan notices that the generated transformation is close but does not quite match her intent: the 7-day moving average starts from \(d-6\) to \(d\) for each day \(d\) as opposed to \(d-3\) to \(d+3\) (Fig. 9). Based on this observation, Megan changes the description into "calculate 7-day moving avg, starts with 3 days before, and ends with 3 days after" and re-runs Data Formulator. This time, Data Formulator generates the correct transformation and presents the extended data table in Fig. 5. Megan then maps Date and Seattle 7-day Moving avg to \(x,y\)-axes of a line chart.
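For reference, the two behaviors Megan observes correspond to a trailing versus a centered rolling window; in pandas (a sketch, with the wide-format column name from the earlier scenario assumed) they differ only in the center flag:

```python
# Trailing 7-day window: averages days d-6 .. d (the AI agent's first attempt).
trailing = df2['Seattle'].rolling(window=7).mean()

# Centered 7-day window: averages days d-3 .. d+3 (Megan's intent).
centered = df2['Seattle'].rolling(window=7, center=True).mean()
```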
Fig. 8: Megan creates the bar chart using derived concepts, Difference and Warmer, as well as an original concept Date.
Fig. 6: Megan (1) creates new data concepts, Seattle Temp and Atlanta Temp, by providing examples and (2) maps them to \(x,y\)-axes of a scatter plot to specify the visualization intent. (3) Data Formulator asks Megan to provide a small example to illustrate how these two concepts are related, and Megan confirms the example. (4) Based on the example, Data Formulator generates the data transformation and creates the desired visualization.
Fig. 7: (1) Megan derives the new concept Difference from Atlanta Temp and Seattle Temp using natural language. Data Formulator generates two candidates and displays the first one in the concept card. (2) Megan opens the dialog to inspect both, confirms the first one, and saves the concept.
**Remark.** With the help of Data Formulator, Megan creates visualizations without manually transforming data. Instead, Megan specifies the data concepts she wants to visualize by:
* building new concepts using examples (when the new concept is spread among multiple columns or multiple concepts are stored in the same column, e.g., Seattle Temp and Atlanta Temp are both stored in the Temperature column); and
* deriving new concepts using natural language (when the new concept can be computed from existing ones using column-wise operators, e.g., Difference from Seattle Temp and Atlanta Temp).
Megan then drags-and-drops data concepts to visual channels of a chart. In this process, for derived concepts, Data Formulator displays the generated candidate code and an example table to help Megan inspect and select the transformation; for concepts created by example, Data Formulator prompts Megan to elaborate their relations by completing an example table. Data Formulator then transforms the data and generates the desired visualizations. Data Formulator reduces Megan's visualization overhead by shifting the task of specifying data transformation into the task of inspecting generated data. Because Data Formulator's interaction model centers around data concepts, Megan does not need to directly work with table-level operators, such as pivot, map/reduce and partitioning, which are challenging to master.
## 3 The Data Formulator Design
In this section, we describe our design principles, explain Data Formulator's interaction model, and how Data Formulator derives data concepts and formulates visualizations from the author's inputs.
### _Design Principles_
Data Formulator introduces _data concepts_, an abstraction of the columns needed for an author to specify their target visualization. To eliminate the author's burden to manually transform the data table before plotting, we designed Data Formulator based on the following guiding design principles.
**Treat data concepts as first-class objects.** The notion of data concepts is a generalization of table columns: a data concept is a reference to columns both from the current table and from a future transformed table. Data concepts offer two benefits. First, concept-level transformations are easier to describe and understand than table-level operators. Table-level transformations require either advanced operators like pivot and unpivot, or higher-order functions like map and window, while concept-level operators are first-order functions over primitive elements (e.g., arithmetic) or lists (e.g., percentile). This makes it easier for the author to communicate with the AI agent and verify the results. Second, we can build the interaction experience on top of existing designs people are already familiar with: data concepts resemble the data columns that existing shelf-configuration tools commonly use.
**Leverage benefits from multiple interaction approaches.** Data Formulator employs both natural language interaction (for deriving concepts) and a programming-by-example approach (for building custom concepts). Natural language descriptions have a superior ability to translate high-level intent into executable code, and large language models (LLMs) can reason about natural concepts (e.g., academic grades are A, B, C, D, and F; months are from January to December). However, it can be difficult for the author to provide proper descriptions if they do not understand notions like pivoting, and natural language descriptions can be imprecise and ambiguous. In contrast, while program synthesizers cannot reason about natural concepts, they are less ambiguous, and it is easier for the author to convey reshaping operations by demonstrating the output relation. By incorporating multiple approaches and feedback for different transformation types (derivation vs. reshaping), Data Formulator takes advantage of both, reducing the specification barrier and improving the likelihood for the AI agent to generate correct and interpretable code.
**Ensure correct data transformation and promote trust.** While LLM and program synthesizers can automatically generate code to eliminate the author's manual data transformation burden, they can incorrectly generalize the author's specification. Therefore, it is crucial for the author to view and verify the results. Our design employs mechanisms to ensure such inspection by the author: (1) display multiple candidates for the author to review, if available, (2) display both the code (process) and the sample output values (results) to help the author understand the transformation, and (3) allow the author to edit the generated transformation code to correct or refine it.
**Improve the system expressiveness.** Data Formulator's expressiveness is defined by the combination of its transformation functions and its visualization language. Data Formulator's visualization spec builds on top of Vega-Lite specifications. While Data Formulator's UI does not provide options to layer marks, the author can import their custom Vega-Lite specs of layered visualizations to achieve the same design. For data transformation, Data Formulator supports reshaping options from the tidyverse as described in Sec. 2, and it supports both column-wise derivation and analytical computation that can be generated by the LLM. Note that while our transformation language does not include aggregation, the author can achieve the same visualization by setting aggregation options on the desired axes (e.g., map Month to \(x\)-axis and avg(Seattle Temp) to \(y\)-axis to create a bar chart with average temperature). However, with the current design, the author cannot derive or reshape data that first requires aggregation without re-importing the aggregated data.
### _Interaction Model_
Figure 10 shows Data Formulator's high-level interaction model. Data Formulator first loads data columns from the input table as original (and known) concepts (e.g., Date, City, and Temperature concepts in Fig. 5). The author uses the Concept Shelf to create new data concepts, if needed, in two ways (Sec. 3.3): (1) derive a concept from existing ones by interacting with an AI agent using natural language or (2) build a custom concept by providing example values. If the new concept is derived from known concepts, Data Formulator immediately extends the current data table and registers it as a known concept.
With necessary data concepts known, the author uses the Chart Builder to map data concepts to visual channels of a chart. If unknown custom concepts are used to specify a visualization, Data Formulator asks the author to provide an example relation among the encoded concepts to transform the input table by using a programming-by-example approach. With the necessary data formulations applied, Data Formulator generates a Vega-Lite spec and renders the visualization.
### _Creating New Data Concepts_
The author can **derive** a concept from one or more data concepts by interacting with Data Formulator's AI agent (Fig. 10-1). In addition to
Fig. 9: Megan derives the 7-day moving averages from Seattle Temp. After inspecting the results, she edits the description to be more precise.
a concept name, the author provides both a list of source concepts from which the new concept is derived and a natural language description of the transformation (Fig. 7-1). Data Formulator then generates a contextualized prompt that grounds the description in the context of the source concepts. This prompt combines the author's description and the descriptions of input parameters for all source concepts (with example values sampled from their domains) as comments, and joins it with the function prefix to instruct the AI agent to complete a Typescript function (as opposed to generating non-code text or uncontrolled code snippets). Data Formulator prepares two types of prompts for each query to cover simple derivation (Example 1) and analytical computation (Example 2), because it does not know beforehand whether analytical computation is needed.
**Example 1:** The prompt for "Calculate seattle atlanta temp diff" with source concepts Seattle Temp and Atlanta Temp (Fig. 7).
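The actual prompt is shown only as a figure in the paper and is not reproduced here. As a rough illustration, a derivation prompt of this kind might look like the following TypeScript skeleton, where the comment format, parameter names, and the completed body are assumptions rather than Data Formulator's exact prompt:

```typescript
// Derive a new concept "Difference" from the concepts "Seattle Temp" and "Atlanta Temp".
// seattleTemp: a sample value of Seattle Temp, e.g., 51, 44, 56
// atlantaTemp: a sample value of Atlanta Temp, e.g., 45, 47, 63
// Transformation description: Calculate seattle atlanta temp diff
function derive(seattleTemp: number, atlantaTemp: number): number {
  // The model completes the body below the signature; one plausible completion:
  return seattleTemp - atlantaTemp;
}
```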
**Example 2:** The prompt for "calculate 7-day moving avg" with source concept Seattle Temp (Fig. 9). It provides index and seattleTempList so that the function can access other values of seattleTemp when analytical computation is needed (e.g., calculate the moving average for the current index, or derive the percentile of seattleTemp among all values).
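Again, the concrete prompt appears only as a figure; the sketch below is an assumed analogue for the analytical-computation case, including one plausible completion for the refined description "starts with 3 days before, and ends with 3 days after":

```typescript
// Derive "Seattle 7-day Moving avg" from the concept "Seattle Temp".
// seattleTemp: the value at the current row; index: the row position;
// seattleTempList: the full list of Seattle Temp values
// Transformation description: calculate 7-day moving avg, starts with 3 days before, and ends with 3 days after
function derive(seattleTemp: number, index: number, seattleTempList: number[]): number {
  const start = Math.max(0, index - 3);
  const end = Math.min(seattleTempList.length - 1, index + 3);
  const window = seattleTempList.slice(start, end + 1);
  return window.reduce((sum, v) => sum + v, 0) / window.length;
}
```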
Data Formulator sends both prompts to the LLM (we use Codex Davinci 2 [7]) to generate the transformation code (Fig. 10-2), asking for five candidate completions. When candidate programs are returned from the LLM, Data Formulator filters out programs that are not executable or that produce error outputs by executing them on sample values from the source domains. Data Formulator then presents the programs along with their example execution results for the author to inspect (Fig. 7). Once confirmed, a new derived concept is created and shown in the Concept Shelf. If all source fields are known concepts, Data Formulator derives a new column by applying the transformation function to every tuple from the source columns and appends the column to the current table for the author to review (e.g., Fig. 5).
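A minimal sketch of this filtering step, assuming candidates arrive as TypeScript/JavaScript source strings and that evaluating them with `new Function` is acceptable in the host environment (the actual implementation details are not given in the text):

```typescript
// Keep only candidates that evaluate to a function and run without errors or
// invalid outputs on sample argument tuples drawn from the source concept domains.
function filterCandidates(candidates: string[], sampleArgs: unknown[][]): string[] {
  return candidates.filter(code => {
    try {
      const fn = new Function(`return (${code})`)() as (...args: unknown[]) => unknown;
      return sampleArgs.every(args => {
        const out = fn(...args);
        return out !== undefined && out !== null &&
               !(typeof out === "number" && Number.isNaN(out));
      });
    } catch {
      return false; // not executable, or threw on a sample input
    }
  });
}
```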
The author can also **build** a custom concept by providing its name and a set of example values that belong to its domain (Fig. 10-3). Custom concepts are designed to support data reshaping: the author creates custom concepts when (1) the concept is spread across multiple columns; the author wants to combine multiple columns in a wide table to create one new concept in a long table, (2) multiple concepts are stored in one column; they want to surface fields from a long table, and (3) multiple values for a concept are collapsed in a column as a list (e.g., the value for an "actors" column is a list of actors for each movie); the author wants to split the list into multiple rows (i.e., one actor per row). These custom concepts are _not_ known yet upon creation because Data Formulator needs additional information from the author to resolve their relation with the input data. As we will describe in the next section, the resolution is achieved by inferring the reshaping program based on the example relations provided by the user.
With data concepts (including newly created ones) ready, the author can interact with the Chart Builder to create visualizations.
### _Specifying and Formulating the Visualization_
Chart Builder employs a shelf-configuration interface: authors drag-and-drop data concepts to visual channels of the selected visualization to specify visual encoding. Based on the encoding, Data Formulator generates a Vega-Lite specification (e.g., Fig. 11) to render the visualization. Data Formulator adopts a chart-based specification: each chart type corresponds to a Vega-Lite template with placeholder encodings to be filled from the author specification. Data Formulator currently supports scatter plots (circle-based, bubble chart, ranged dot plots), bar charts (single-column, stacked, layered, grouped, histogram), line charts (with and without dots), heatmap, and custom charts (with all compatible visual channels).
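Fig. 11 in the paper shows the generated specs, which are not reproduced here. As an assumed, minimal illustration of what a template-instantiated spec could look like for the Seattle-vs-Atlanta scatter plot (field names taken from the scenario, everything else hypothetical):

```typescript
// A chart template with its encoding placeholders filled from the author's
// drag-and-drop specification; the data is attached later when rendering.
const scatterSpec = {
  mark: "circle",
  encoding: {
    x: { field: "Seattle Temp", type: "quantitative" },
    y: { field: "Atlanta Temp", type: "quantitative" },
  },
};
```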
When all fields used in the visual encoding are available, Data Formulator combines the Vega-Lite spec with the input data to render the visualization (e.g., Fig. 1). Otherwise, when some concepts are unknown (unresolved custom concepts or concepts derived from unknown ones), Data Formulator first interacts with the author and then calls the program synthesis engine to create the transformed table.
Once the author specifies the visual encoding, Data Formulator first checks if any unknown concepts are used. If so, it asks the author to illustrate the relation of unknown concepts with other concepts used in the visual encoding by filling out an example relation in a sample table (e.g., Fig. 10-3). Data Formulator needs such example relation to disambiguate the visualization intent because unknown concepts contain example values only from their own domains, missing information on how they will be related row-wise in the transformed table. For example, Data Formulator generates the example relation with Seattle Temp and Atlanta Temp fields as shown in Fig. 6-3 for the author to complete. To reduce the author's efforts, Data Formulator pre-fills two example values of Atlanta Temp based on its sample domain and asks the author to complete their corresponding Seattle Temp values (e.g., what's Seattle Temp when Atlanta Temp is 45). Each row in the example relation will be a row in the transformed data, which will then be mapped to a point in the scatter plot.
Once the author submits the example relation, Data Formulator calls the program synthesizer to solve the data reshaping problem (Fig. 10-3). Given an example relation \(E\) and input data \(T\), the program synthesizer solves the programming-by-example problem to find a reshaping program \(p\) such that \(E\subseteq p(T)\) (i.e., the transformed data should generalize the example \(E\)). The reshaping program \(p\) is defined by the grammar in Figure 12, where \(p\) is recursively defined over four core reshaping operators from the R tidyverse library. We include only reshaping operators because other operators like mutate and summarise are already supported by Data Formulator's ability to derive concepts from natural language. With this grammar, the program synthesizer performs an enumerative search in the program space for candidate programs. To speed up this combinatorial search process, we leverage abstract interpretation to prune the search space. The program synthesis engine then returns candidate programs that satisfy the example relation to the Chart Builder. Note that multiple candidates could be generated since the example relation is small and potentially ambiguous. In practice, unlike
Fig. 11: Vega-Lite specs for the scatter plots in Fig. 2-1 and Fig. 6.
Fig. 10: Data Formulator’s interaction model.
other programming-by-example tools, the small example relation is precise enough to constrain the program space so that only the correct candidate is returned, because the program synthesizer only needs to solve the reshaping problem.
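The synthesis engine is described only at this level in the text; the following is a much-simplified sketch of such an enumerative programming-by-example loop (toy operator set, no abstract-interpretation pruning), not Data Formulator's actual algorithm:

```typescript
// Enumerate reshaping programs up to a small depth and keep those whose output
// table contains the user's example relation E (row- and column-wise).
type Row = Record<string, string | number>;
type Table = Row[];
type Op = { name: string; run: (t: Table) => Table };

// Every example row must match some output row on all of its columns.
function containsExample(output: Table, example: Table): boolean {
  return example.every(exRow =>
    output.some(outRow =>
      Object.entries(exRow).every(([col, val]) => outRow[col] === val)));
}

function synthesize(input: Table, example: Table, ops: Op[], maxDepth = 2): Op[][] {
  const candidates: Op[][] = [];
  const search = (table: Table, program: Op[]): void => {
    if (containsExample(table, example)) candidates.push([...program]);
    if (program.length >= maxDepth) return;
    for (const op of ops) {
      // A real engine would prune hopeless branches here via abstract interpretation.
      search(op.run(table), [...program, op]);
    }
  };
  search(input, []);
  return candidates;
}
```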
With generated reshaping programs, Chart Builder prepares the input data: it first generates a reshaped table from each reshaping program and then for every derived concept used in the encoding, it extends the reshaped table with a new column by applying the transformation function on every tuple in the table. This way, Data Formulator generates a new table with all necessary fields to instantiate the visualization.
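The column-extension step amounts to a row-wise map; a minimal sketch follows (types and helper name are assumptions, not the paper's code):

```typescript
// Append a derived column by applying the confirmed transformation function
// to the source-column values of every row.
type DataRow = Record<string, unknown>;

function extendWithDerivedColumn(
  table: DataRow[], newColumn: string, sourceColumns: string[],
  fn: (...args: unknown[]) => unknown): DataRow[] {
  return table.map(row => ({
    ...row,
    [newColumn]: fn(...sourceColumns.map(col => row[col])),
  }));
}
```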
Data Formulator presents the prepared table and candidate visualizations for the author to inspect (Fig. 5-7). When the author confirms and saves a desired visualization, the transformed data is used to resolve unknown concepts: these concepts are now available as known concepts to be used to create visualizations.
### Implementation
Data Formulator is built as a React web application in Typescript; its backend is a Python server that runs on a Dv2-series CPU with 3.5 GiB RAM on Azure. Data Formulator's backend queries the OpenAI Codex API for concept derivation and runs the synthesis algorithm locally. Data Formulator's scalability to larger data relates to (1) the frontend's visualization rendering capability and (2) the backend's efficiency in executing data transformation scripts. To scale up Data Formulator for large datasets, we envision a sampling-based approach [31], where Data Formulator presents results on a representative subset of data to enhance interactivity and returns full results asynchronously.
## 4 Evaluation: Chart Reproduction Study
We conducted a chart reproduction study [42] to gather feedback on the new concept binding approach that employs an AI agent, and to evaluate the usability of Data Formulator.
### Study Design
**Participants.** We recruited 10 participants (3 female, 7 male) from a large technology company. All participants had experience creating (simple) charts and identified themselves as a person with normal or corrected-to-normal vision, without color vision deficiency. Six participants are data scientists, two are applied scientists, and the remaining two are data & applied scientists, and they are all located in the United States. Four participants are in their 30's, three are in 20's, and one participant is in each of the 40's, 50's, and 18-19 age group. They had varying levels of self-identified expertise in terms of chart authoring, computer programming, and experience with LLMs.
**Tasks and Datasets.** We prepared six chart reproduction tasks with two datasets (3 tasks for each dataset): daily COVID-19 cases from Jan 21, 2020 to Feb 28, 2023 (3 columns; 1,134 rows) for the first task set (Tasks 1-3) and daily temperatures in 2020 for Seattle and Atlanta (4 columns; 732 rows; Fig. 1) for the second set (Tasks 4-6). In both task sets, each subsequent task is built upon the previous one. One task (Task 4) required building two new concepts for reshaping and the other five tasks required the use of derived concepts. We also prepared three tutorial tasks, using students' exam scores dataset (5 columns; 1,000 rows): in addition to the scores for three subjects (math, reading, and writing), the data table included a student's and major. The first tutorial task was about creating a chart with known/available concepts, while the second and third tutorial tasks were for creating charts using derived concepts and unknown concepts, respectively. Finally, we produced two practice tasks (one for reshaping and another for derivation). For these, the exam scores dataset was transformed into a long format, including math and reading scores under the subject and score column, resulting in 4 columns and 2,000 rows.
**Setup and Procedure.** We conducted sessions remotely via Microsoft Teams. Each session consisted of four segments: (1) a brief explanation of the study goals and procedure, (2) training with tutorial and practice, (3) chart reproduction tasks, and (4) debrief.
The training segment started with a quick introduction of Data Formulator's basic interactions using a simple task that does not require data transformation. Then, with their screen shared and recorded with audio, participants went through a tutorial and created three visualizations following step-by-step instructions provided in slides. They next created two visualizations on their own as practice. After an optional break, the participants performed six reproduction tasks using the two datasets mentioned above. Each task included a description (e.g., "Create a Scatter Plot to compare Atlanta Temperature against Seattle Temperature."), the labels for axes and color legend (if necessary), and an image of the target visualization. (Study materials are included in the supplemental material.) We encouraged the participants to think aloud, describing their strategies, whether any feature of Data Formulator works or makes sense, if the system behaves as they expect, etc. We recorded if the participants required a hint (and which hint) and how long it took for them to complete the task. The recorded completion time is not intended to indicate performance, as we wanted to gain insights about our approach using the think aloud method. Instead, we wanted to see if and how the participants completed, faltered, or recovered for each task, within a reasonable amount of time. The session ended with a debriefing after the participants filled out a questionnaire with 5 questions about their experience with Data Formulator. The entire session took about two hours to complete, while the training segment took about an hour. We compensated each participant with a $100 Amazon Gift card.
### Results
After an hour-long tutorial and practice session, most participants could use Data Formulator to create different types of charts that involve advanced data transformations. Furthermore, they were generally positive about their experience with Data Formulator in chart authoring.
**Tasks Completion and Usability Issues.** Participants completed all tasks on average within 20 minutes, with a deviation of about four and a half minutes. Table I shows the average and standard deviation of task completion time in minutes, along with the total number of hints provided for each chart reproduction task (for all 10 participants). The participants spent most of their time (on average less than five minutes) on Task 6 because it was not trivial to inspect the code to generate 7-day moving average. For Tasks 5 and 6, we had to give one hint (to two different participants) to guide them to use a different type of concept (they needed to derive a concept but initially tried to build a concept). There were a few cases that we had to provide a hint to a single participant: how to select multiple sources for derivation (Task 4), what are the correct source concepts for derivation (Tasks 2 & 5), and the example values should be from the original table (Task 4). We had to provide the highest number of hints for Task 1. This was because when participants derived the year from the date value, its data type was set to number and the participants did not know or remember how to change its data type to string. (As detailed below, some participants tried to fix it by providing a different natural language prompt).
For derived concepts, once the participants identified the correct interaction approach and input fields, they are able to describe and refine the transformation in natural language to solve the tasks. We recorded all participants' prompts (see supplementary material). On average, participants made 1.62 prompt attempts per derived concept, and the length of those prompts averaged 7.28 words. The system generated an average of 1.94 candidates per prompt attempt.
Participants rated Data Formulator on five criteria using a 5-point Likert scale (5 being the most positive) as follows: easy to learn (\(M=3.90\), \(SD=0.88\)), easier than other tools to transform data (\(M=3.80\), \(SD=1.23\)), AI-agent's usefulness (\(M=4.4\), \(SD=0.70\)), helpful to
Figure 12: Reshaping operators supported by Data Formulator. \(T\) refers to input data, and \(c\) refers to column names.
verify generated data (\(M=4.1\), \(SD=0.74\)), and the trustworthiness of generated data (\(M=4.7\), \(SD=0.48\)).
Participants provided feedback to improve the user interface. Four participants expected a way to multi-select on concept cards and click "derive" for deriving a concept from multiple existing ones. The current method of clicking "derive" on one concept and then multi-selecting is not intuitive. Two other participants expected the AI to select or identify which concepts to derive from based on their prompts. A few participants expected to change data type using the prompt (e.g., "year as a string" when the year is extracted from date). Five participants wanted the derived examples table to show more values, or unique derived values. Reshaping data was at times a point of confusion: two participants found it difficult to understand how the AI formulated candidate datasets, while two others did not intuit or remember the task sequence to formulate data for unknown concepts. When required to reshape data, three participants entered plausible, but not exact values in the example table during the training: they misunderstood the rigid connection to the original dataset. To strengthen that connection, participants recommended including additional columns (especially a column that is unique for a pivot transform) or filtering or highlighting rows of the data table view that correspond to the values used in the example table. We also observed users' attempts to re-use a derived concept as a commutative function on other concepts: two participants tried to drag a derived concept and drop it on other concepts.
**Overall Reaction and Experience.** To understand participants' reaction to the new concept-driven approach employing an AI agent, we analyzed the debrief interviews, during which participants stated something or confirmed an observation made by the experimenter. Using the transcription from the recorded sessions, one researcher applied an open coding method to surface all unique feedback, issues and ideas from the participants. He expanded the codes to generalize for semantically similar participant statements. While quantities of qualitative data do not provide a metric for importance, we counted how many participants mentioned each code, indicating how frequently our participants had shared experiences or ideas with Data Formulator.
Overall, participants were positive about their experience with Data Formulator. All 10 participants said that natural language prompts work well for generating data transforms and eight mentioned that AI is a helpful tool for the study tasks. Participants more frequently praised the derived concept than the unknown concept method for transforming data. Specifically, when it comes to verifying candidate derived concepts: all except one participant commented that displaying code was helpful and seven found the example derived values table to be useful. While only half of the participants commented that pivoting with unknown concepts is easier than with other tools, only three affirmed the example data table being helpful.
Five participants mentioned that they were impressed by the power of the AI agent to generate data transforms. Five participants found having candidates (for both derived and formulated data) to be helpful because the candidates provided an opportunity to choose a correct answer, or at the least to select a promising direction to refine. Participants also explained that generating candidates increases trust in a collaborative experience. On the other hand, three participants mentioned they are reluctant to give much trust to the AI generative features of the tool.
## 5 Related Work
Data Formulator builds on top of prior research in visualization authoring tools, data transformation tools, and code generation techniques.
**Visualization Grammars and Tools.** The grammar of graphics [58] first introduces the representation of visualizations based on chart types and encodings of data columns to their visual channels. Many high-level grammars are designed to realize this idea. For example, ggplot2 [55] is a charting library in R based on visual encodings. Vega-Lite [46] and its Python wrapper Altair [51] extend the traditional grammar of graphics design with rules for layered and multi-view displays, as well as interactions, and Animated Vega-Lite [65] further extends it to support animations. These grammars hide low-level implementation details and are concise. Therefore, they are generally preferred for the rapid creation of visualizations in exploratory settings over toolkits and libraries like Protovis [4], Atlas [28], and D3 [5] that are designed for more expressive and novel visualization authoring. High-level grammars inspire interactive visualization tools like Tableau [49], Power BI, Lyra [44], Charticulator [41], and Data Illustrator [29]. These tools adopt a shelf-configuration design: authors map data columns to visual encoding "shelves", often using drag-and-drop interaction, and generate specifications in high-level grammars to render visualizations. These grammars and tools require that the input data is in a tidy format, where all variables to be visualized are columns of the input data. Because this means authors often need to transform the data first to create any visualizations, Satyanarayan et al. recognized the automatic inference or suggestion of appropriate transformations, when necessary, as an important research problem [45].
To reduce authors' efforts, visualization by demonstration [43, 47, 64] and by example [54] tools are introduced. Lyra 2 [64] generates interaction rules after authors perform an interaction on the visualization. VbD [43] lets users demonstrate transformations between different types of visualizations to produce new specifications. Although these approaches reduce the chart specification efforts, they require tidy input data. Falx [54], on the other hand, addresses the data transformation challenge with a visualization-by-example design. Falx lets authors specify visualizations via low-level example mappings from data points to primitive chart elements. However, Falx does not support derivation types of transformation because of its underlying programming-by-example algorithm limitations; its requirement to focus on low-level elements also introduces a challenging paradigm shift for users who are more familiar with tools that focus on high-level specifications [46, 49].
Natural language interfaces [8, 21, 27, 30, 37] enhance users' ability to author and reason about visualizations. NCNet [30] uses a Seq-to-Seq model to translate chart description texts into Vega-Lite specs. VisQA [21] is a pipeline that leverages semantic parsing techniques [36] to provide atomic data-related answers based on its visualizations. NL4DV [33] and Advisor [27] generate visualizations based on user questions. To manage ambiguity in natural language inputs [48], DataTone [13] ranks solutions based on user preference history, and Pumice [25] introduces a multi-modal approach that leverages examples to refine the initial ambiguous specification. Data Formulator's concept derivation interface is based on natural language. Data Formulator benefits from large language models' expressiveness [7], and manages ambiguity by restricting the target function type to columns-to-column mapping functions (as opposed to arbitrary data transformation scripts). In the future, more powerful language models can be plugged into Data Formulator to improve code generation quality.
Data Formulator adopts the shelf-configuration approach like Tableau and Power BI, but it supports encoding from _data concepts_ to visual channels to address the data transformation burden. Because Data Formulator can automatically transform the input data based on the concepts used in the visualization specification, authors do not need to manually transform data. Furthermore, because Data Formulator's Chart Builder resembles tools like Power BI and Tableau, it lets the authors focus on high-level designs. Data Formulator's multi-modal interaction approach supports both derivation and reshaping tables. While Data Formulator currently focuses on standard visualization supported by Vega-Lite, its AI-powered concept-driven approach can also work with expressive and creative visualization design tools like StructGraph [50] and Data Illustrator [29] to automate data transformations.
**Data Transformation Tools.** Libraries and tools like tidyverse [57], pandas [35], Potter's Wheel [40], Wrangler [19], Tableau Prep, and
Table 1: The average and standard deviation of task time (in minutes) and the total number of hints provided for chart reproduction tasks.

| Task | Average Time | Standard Deviation | Total Number of Hints |
| --- | --- | --- | --- |
| Task 1 | 2:21 | 0:45 | 7 |
| Task 2 | 3:19 | 2:09 | 2 |
| Task 3 | 3:45 | 1:33 | 2 |
| Task 4 | 2:43 | 1:33 | 2 |
| Task 5 | 2:22 | 1:55 | 3 |
| Task 6 | 4:29 | 1:39 | 2 |
Power Query are developed to support data transformation. They introduce operators to reshape, compute, and manipulate tabular data needed in data analysis. Automated data transformation tools, including programming-by-example tools [18, 38, 52] and mixed-initiative tools [15, 19, 20, 62], are developed to reduce authors' specification effort. Data Formulator tailors key transformation operators from the tidyverse library (reshaping and derivation) for visualization authoring. Because the desired data shape changes with visualization goals, even with these tools, authors still need the knowledge and effort to first identify the desired data shape, and then switch tools to transform the data. Data Formulator bridges visual encoding and data transformation with data concepts to reduce this overhead.
**Code Generation.** Code generation models [7, 9, 12] and program synthesis techniques [6, 14, 52, 63] enable users to complete tasks without programming by using easier specifications, including natural language, examples, and demonstrations. Code generation models like Codex [7], PaLM [9], and InCoder [12] are transformer-based causal language models (commonly referred to as LLMs) that complete texts from natural language prompts. These LLMs can generate expressive programs to solve competitive programming [16, 26], data science [22], and software engineering tasks [1] from high-level descriptions. Programming-by-example [54] and programming-by-demonstration [2, 39] tools can synthesize programs based on users' output examples or demonstrations that illustrate the computation process. Natural language approaches are highly expressive, but some tasks can be challenging to phrase. On the other hand, while programming-by-example techniques are precise, they are less expressive and do not scale to large programs as they require well-defined program spaces. Therefore, Data Formulator adopts a mixed-modality approach to solve the data transformation task. It leverages the Codex model [7] for concept derivations and the example-based synthesis algorithm [53] for reshaping, which takes advantage of both approaches to reduce authors' specification overhead.
Because code generation techniques generalize programs from incomplete user specifications, generated programs are inherently ambiguous, and thus require disambiguation to identify a correct solution among candidates. Prior work proposes techniques to visualize the search process [63], visualize code candidates [54, 61], and present distinguishing examples for authors to inspect [17]. Data Formulator provides feedback to the authors by presenting the generated code together with its execution results for them to inspect, select, and edit.
## 6 Discussion and Future Work
**Unified Interaction with Multiple Modalities.** Data Formulator employs two different modalities for authors to specify different types of data transformation: natural language for concept derivation and examples for table reshaping (Sec. 3). This design combines the strengths of both modalities so that the authors can better communicate their intent with the AI agent, and the AI agent can provide precise solutions from a more expressive program space. However, choosing the right input modality when creating a new concept can be challenging for inexperienced authors. To address this challenge, we envision a stratified approach where the authors just initiate the interaction in natural language, and the AI agent decides whether to ask the authors for example relations for clarification or to directly generate derivation code. This design will shift the effort of deciding which approach to start with from the authors to the AI agent, and "by-example" specification will become a follow-up interaction step to help the authors clarify their intent. We envision this mixed-initiative unified interaction will further reduce the authors' efforts in visualization authoring.
**Conversational Visualization Authoring with AI Agents.** Conversational AI agents [34] have the strength of leveraging the interaction contexts to better interpret user intent. They also provide opportunities for users to refine the intent when the task is complex or ambiguous. However, conversation with only natural language is often insufficient for visualization authoring because (1) it does not provide users with precise control over the authoring details (e.g., exploring different encoding options, changing design styles) and (2) the results can be challenging to inspect and verify without concrete artifacts (e.g., programs, transformed data). It would be useful to research how conversational AI can be integrated with Data Formulator's concept-driven approach to improve the overall visualization experiences. First, with a conversational AI agent, the authors can incrementally specify and refine their intent for tasks that are difficult to solve in one shot. Second, a conversational agent complements Data Formulator by helping the authors explore and configure chart options. Because Data Formulator focuses on data transformation, it does not expose many chart options (e.g., axis labeling, legend style, visual mark styles) in its interface. A conversational AI agent can help the authors access and control these options without overwhelming them with complex menus. For example, when the authors describe chart styles they would like to change, Data Formulator can apply the options directly or dynamically generate editing panels for them to control. We envision the effective combination of conversational AI experiences, and the Data Formulator approach will let the authors confidently specify richer designs with less effort.
**Concept-driven Visual Data Exploration.** Visual data exploration tools [59, 23, 24, 32, 60] help data scientists understand data and gain meaningful insights in their analysis process. These tools support a rich visualization space, yet still require datasets to be in the appropriate shape and schema. While Data Formulator is designed for visualization authoring, its concept-driven approach can be used in visual data exploration to expand the design space. Beyond the current concept-driven features of Data Formulator, the AI agent could be enhanced to recommend data concepts of interest based on the data context or author interaction history. Building on this idea, the tool could recommend charts based on all potentially relevant data concepts. This expansive leap could overcome one of the limitations of chart recommendation systems by enabling the authors to view charts beyond their input data columns without additional user intervention.
**Study Limitations.** While our participants had varying levels of expertise in chart authoring, computer programming, and experience with LLMs, many of them had considerable knowledge about data transformation methodology and programming. It would be useful to investigate if and how people with limited expertise could learn and use Data Formulator. The main goal of Data Formulator was to reduce manual data transformation in visualization authoring efforts. As such, in our study, we focused on derivation and reshaping types of data transformations with simple datasets. While they are key types of transformation and our tasks covered multiple styles of derivations, the transformations we studied are by no means comprehensive. It would be valuable to evaluate the broader combinations and complexities of data transformations. Our study adopted a chart reproduction study [42], which is commonly used for evaluating chart authoring systems (e.g., [41, 29, 44]). Therefore, our study shares its inherent limitations: because we prepared datasets and tasks, and provided target visualizations as a reference, we do not know if and how people would use Data Formulator to create visualizations with their own data.
## 7 Conclusion
This paper introduces Data Formulator, a concept-driven visualization authoring tool that leverages an AI agent to address the data transformation challenge in visualization authoring. With Data Formulator, authors work with the AI agent to create new data concepts and then map data concepts to visual channels to specify visualizations. Data Formulator then automatically transforms the input data and instantiates the visualizations. Throughout the authoring process, Data Formulator provides feedback to the authors to inspect and refine the generated code to promote confidence. As discovered in the chart reproduction study, participants can learn and use Data Formulator to create visualizations that require advanced data transformations. In the future, the concept-driven visualization approach can potentially benefit the design of new visual data exploration tools and expressive visualization authoring tools to overcome the data transformation barrier.
## Appendix A Supplemental Material
We include three zip files in the supplemental material: (1) a 6-minute video that walks through user experiences of creating visualizations
about Seattle and Atlanta temperatures, described in Sec. 2 with Data Formulator, (2) a set of short videos that demonstrate additional Data Formulator scenarios, and (3) our user study materials including: study script, tutorials, study tasks, and prompts created by participants.
|
2303.17978 | How Can Mixed Reality Benefit From Physiologically-Adaptive Systems?
Challenges and Opportunities for Human Factors Applications | Mixed Reality (MR) allows users to interact with digital objects in a
physical environment, but several limitations have hampered widespread
adoption. Physiologically adaptive systems detecting user's states can drive
interaction and address these limitations. Here, we highlight potential
usability and interaction limitations in MR and how physiologically adaptive
systems can benefit MR experiences and applications. We specifically address
potential applications for human factors and operational settings such as
healthcare, education, and entertainment. We further discuss benefits and
applications in light of ethical and privacy concerns. The use of
physiologically adaptive systems in MR has the potential to revolutionize
human-computer interactions and provide users with a more personalized and
engaging experience. | Francesco Chiossi, Sven Mayer | 2023-03-31T11:25:10Z | http://arxiv.org/abs/2303.17978v1 | How Can Mixed Reality Benefit From Physiologically-Adaptive Systems? Challenges and Opportunities for Human Factors Applications
###### Abstract.
Mixed Reality (MR) allows users to interact with digital objects in a physical environment, but several limitations have hampered widespread adoption. Physiologically adaptive systems detecting user's states can drive interaction and address these limitations. Here, we highlight potential usability and interaction limitations in MR and how physiologically adaptive systems can benefit MR experiences and applications. We specifically address potential applications for human factors and operational settings such as healthcare, education, and entertainment. We further discuss benefits and applications in light of ethical and privacy concerns. The use of physiologically adaptive systems in MR has the potential to revolutionize human-computer interactions and provide users with a more personalized and engaging experience.
Mixed Reality, Adaptive Systems, Physiological Computing, Human Factors, Usability
## 1. Introduction
Mixed reality (MR) systems encompass a broad spectrum that spans from physical reality to virtual reality (VR), including instances that involve overlaying virtual content over physical one, i.e., Augmented Reality (AR), as well as those that use physical content to enhance the realism of virtual environments, i.e. Augmented Virtuality (AV) (Steinteiner, 2017). These instances are typically predefined for a seamless physical and virtual content blend.
MR enables users to interact with digital objects in a physical environment, resulting in immersive and engaging experiences. However, several limitations have hampered the widespread adoption of MR technology (Steiner, 2017; Delfosse et al., 2017). In recent years, researchers have begun to investigate the use of physiologically adaptive systems to address these limitations by developing systems that can respond in real-time to the user's physiological state (Beng et al., 2018).
Physiologically adaptive systems belong to a group of adaptive systems that employ physiological signals to generate personalized and captivating experiences. They take a user's physiological signals as input, such as peripheral measures, e.g., the electrocardiogram (Krause et al., 2017) or electrodermal activity (Beng et al., 2018), and central measures, such as electroencephalography (EEG) (Beng et al., 2018; Beng et al., 2018) and functional near-infrared spectroscopy (fNIRS) (Beng et al., 2018), and produce real-time feedback and responses based on the user's physiological state. Physiologically-adaptive systems are based on classic control theory (Krause et al., 2017). This theory involves three main steps: physiological data acquisition and processing, transformation into a system response, and shaping the expected psychophysiological response from the user. These so-called "Biocybernetic control loops" (Beng et al., 2018; Beng et al., 2018) employ negative feedback control to detect deviations from the optimal state and prompt changes in the system to encourage a desirable user state. This process is crucial in creating a responsive and personalized experience for the user.
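To make the three steps concrete, here is a minimal, hypothetical sketch of one iteration of such a loop; the signal names, thresholds, and the adjustVisualComplexity callback are illustrative assumptions, not a prescription from the literature:

```typescript
// One pass of a negative-feedback biocybernetic loop: acquire signals, infer a
// user state, and adapt the system to counteract deviations from the target state.
interface Signals { eda: number[] }                      // acquired physiological data
type UserState = "underloaded" | "optimal" | "overloaded";

function inferState(s: Signals): UserState {
  const meanEda = s.eda.reduce((a, b) => a + b, 0) / s.eda.length;
  if (meanEda > 8) return "overloaded";                  // assumed threshold (microsiemens)
  if (meanEda < 2) return "underloaded";
  return "optimal";
}

function adaptStep(signals: Signals, complexity: number,
                   adjustVisualComplexity: (level: number) => void): number {
  const state = inferState(signals);
  // Negative feedback: reduce stimulation when overloaded, increase it when underloaded.
  if (state === "overloaded") complexity = Math.max(0, complexity - 1);
  if (state === "underloaded") complexity = complexity + 1;
  adjustVisualComplexity(complexity);
  return complexity;
}
```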
Considering that physical and virtual reality are the two extremes of the MR continuum, this provides a favourable setting for developing adaptive systems. Adaptive systems can tailor the MR experience to the user's needs and goals by leveraging this continuum, assisting them in achieving optimal performance (Dong et al., 2019), immersion (Song et al., 2019), and engagement (Han et al., 2020).
This paper aims to investigate the potential applications of physiologically adaptive systems in MR and discuss their advantages and disadvantages. We will specifically look at the benefits of physiologically adaptive systems in addressing the limitations of MR technology and discuss their potential applications.
First, we review the definition of MR and its various forms. We will also look at the current limitations of MR technology and the issues that must be addressed to improve its usability and effectiveness. We will then define physiologically adaptive systems and discuss their characteristics and potential benefits.
Second, we discuss potential applications of physiologically adaptive systems in human factors and applied MR settings. For example, healthcare professionals can use such systems to create more engaging and effective patient therapies by providing real-time feedback and support based on the patient's physiological state. By adapting to the student's cognitive and physiological state, these systems can be used in education to create more immersive and engaging learning experiences. By adapting to the player's physiological state and creating more personalized and engaging experiences, these systems can be used in entertainment to create more engaging and immersive games and simulations.
Finally, we highlight challenges for physiologically adaptive systems in MR, including technical and theoretical constraints and ethical and privacy concerns. We discuss potential solutions and strategies for dealing with such fundamental issues.
## 2. Mixed Reality
The predominant definition of MR is the one provided in the seminal work by Milgram and Kishino (Milgram and Kishino, 1999), referring to the merging of real and virtual worlds in a seamless and interactive environment. It is an interaction spectrum that blends physical and digital realities to create a new, immersive experience for the user.
Recently, this perspective has been reviewed by Skarbez et al. (Skarbez et al., 2019). Their revised taxonomy consists of three dimensions: immersion, coherence, and extent of world knowledge. Immersion is determined by a system's objective hardware device specifications and is related to the feeling of spatial presence experienced by the user (Skarbez et al., 2019). Coherence refers to the conformity of different sensory information perceived during an XR experience, leading to an increased plausibility illusion of the experience (Skarbez et al., 2019). The extent of world knowledge describes the degree of reality incorporated into an MR experience, influencing the user's real-world awareness (Skarbez et al., 2019). The authors focus on immersion and coherence and consider important environmental cues that influence the extent of world knowledge.
Latoschik and Wienrich (Latoschik and Wienrich, 2019) provide a third perspective that emphasizes that congruence activations between cognitive, perceptual, and sensory layers contribute to MR plausibility. The authors argue that device specifications, like the field of view or resolution, impact device-specific sensory congruence, while content transparency affects congruence. These congruences ultimately affect the plausible generation of spatial cues and spatial presence.
### Current Limitations for MR Systems Adoption
Despite technical and design advancements in Mixed Reality (MR) technology, significant limitations still prevent it from reaching its full potential and adoption by the general public and professionals. Now, we highlight four main factors that contribute to such limitations.
First, a limited field of view (FoV) represents an initial issue in many MR systems. FoV is the area that the user can see through the display, and it is often constrained by the physical size of the device's screen or lenses (Krause et al., 2019). A limited FoV can reduce immersion and realism and lead to visual discomfort (Sarbes et al., 2019), especially when the user must frequently turn their head to view the content (Sarbes et al., 2019).
Secondly, we identify limited interactivity as a primary constraint for MR adoption. MR systems often rely on gesture recognition or voice commands (Sarbes et al., 2019), which can be imprecise and unreliable, leading to frustration and reduced user engagement. This limitation can be a significant barrier to adopting MR in some domains, such as entertainment applications (Sarbes et al., 2019), e.g., gaming, or when it adds to an existing cognitive load, such as in education settings (Sarbes et al., 2019).
Third, while modern MR devices can display highly detailed virtual content alone, embedding that content into physical reality can hinder its plausibility (Sarbes et al., 2019), ultimately leading to reduced realism. Conversely, high levels of realism can strengthen the effectiveness of training simulations (Krause et al., 2019). Still, increasing the detail and amount of virtual content implicitly impacts the MR visual complexity (Sarbes et al., 2019), which has been shown to influence behavioural performance and physiological arousal (Sarbes et al., 2019; Skarbez et al., 2019; Sarbes et al., 2019).
Finally, limited adaptability is another significant limitation of MR systems. Many MR applications are pre-defined and cannot adapt to the user's changing needs or physical state. This limitation can reduce the effectiveness of MR applications and lead to reduced user engagement and long-term usage.
## 3. Physiologically-adaptive systems in MR
Physiologically-adaptive systems are systems designed to interact with and respond to the physiological states and changes of the human body. These systems typically employ sensors and algorithms to monitor and analyze physiological signals such as ECG, EDA and EEG to drive interactions towards a specific state based on the cybernetics approach (Sarbes et al., 2019). The cybernetics approach found various applications ranging from developing new control channels (Sarbes et al., 2019) to task adaptation in response to changes in workload (Sarbes et al., 2019) and motivation (Sarbes et al., 2019).
However, most of this work has focused on desktop settings. Only recently have MR settings begun to proliferate, enabling the creation of
environments and interactions far more engaging and expressive than traditional desktop programs (Han et al., 2017; Wang et al., 2018).
MR is now one of the most favourable environments for physiological computing systems. MR enables online adjustment and adaptation of visualizations, digital content, blending, and interactions that resemble real-world ones, which is not currently feasible in purely physical settings, whether replacing them (VR) or augmenting them (AR, AV). Introducing physiological interaction into MR can increase its ability to monitor and adapt to implicit human behaviour. Physiologically adaptive MR systems can identify user states and direct interaction characteristics toward a (shared) objective depending on physiological input.
### Benefits of Physiologically-Adaptive systems for MR
With regard to the limitations of MR adoption, we identify how physiologically-adaptive systems can enhance MR interaction and address possible usability constraints.
While the limited field of view (FoV) in MR devices is primarily a hardware limitation that may be challenging to address through physiological adaptivity alone, monitoring attention and gaze can still play a role in enhancing the user experience within the existing FoV limitations. Physiological inputs such as eye gaze and torso movements and their temporal alignment can be employed for attention, interest, and intent detection and as context and input (Sundundar et al., 2016). Moreover, EEG features such as alpha and theta oscillations discriminated between internally and externally directed attention in AR (Sundar et al., 2016) and VR settings (Sundar et al., 2017). This information can be used to dynamically adjust the field of view of the MR system, for example, by zooming in on areas of interest, providing multisensory cues to direct attention towards hidden areas, and blurring distracting information.
Limited interactivity refers to situations where the user has limited ability to control or manipulate the virtual objects in the MR environment. This can occur due to factors such as the complexity of the interface or the user's cognitive workload.
Limited interactivity can benefit from neuro- and electrophysiological measures such as EEG and fNIRS for workload (Han et al., 2017; Wang et al., 2018) and attention detection (Sundar et al., 2016), enlarging the design space for interaction. For instance, if the user is experiencing cognitive overload or boredom (Han et al., 2017), the system can simplify the interface or adjust the task difficulty level to maintain engagement and interest. Additionally, if the user is experiencing unpleasant states such as frustration or anxiety (Sundar et al., 2016), the system can present positive stimuli to divert the user from their emotional state (Sundar et al., 2016) and maintain their attention on the task (Han et al., 2017).
Third, the limited realism could be controlled and adapted based on autonomic arousal, i.e., EDA or ECG, for leveraging its effect on the user's physiological activation. This physiological input can be used to adjust the level of MR visual fidelity, for example, by adding or removing sensory cues to enhance the user's emotional experience (Beng et al., 2018; Wang et al., 2018), or support target detection, when engaged in visual search (Beng et al., 2018; Wang et al., 2018).
Lastly, physiologically-adaptive systems are central to increasing reactivity and adaptability. Employing physiological data as a passive input and concurrently adapting either task or environmental features can allow for more dynamic interaction, controlling for undesirable states such as anxiety or boredom (Beng et al., 2018; Wang et al., 2018), improving motivational engagement (Han et al., 2017), and therefore allowing users to maintain focus on the current task and perform optimally.
Figure 2. In a biocybernetic control loop, the adaptive system continuously processes the physiological signals, extracts informative features, and detects the user state based on an algorithm. It then adjusts its behaviour for device control in diverse applications and returns feedback to the user. This closed loop allows for iterative adaptation and optimization of visualization, content, and interaction.
### Potential Applications of Physiologically Adaptive Systems in Applied MR Settings
This combination of implicit physiological monitoring and MR environment adaptation can be defined as a closed-loop model. Since their original conception and design in the seminal work of Pope et al. (Pope et al., 2016), biocybernetic closed loops have had many implications in human factors and applied settings, such as aviation (Moschovitis and Denisova, 2017), healthcare (Moschovitis et al., 2017), and other highly demanding environments (Pope et al., 2016).
We envision three operational settings where physiologically-adaptive MR environments can be profitable: healthcare, education, and training.
Physiologically adaptive systems in the healthcare industry can deliver customized therapies suited to the patient's psychophysiological condition. In mental health, for example, physiological measures can be used to assess signals related to stress, anxiety, and depression. Such information can improve a patient's exposure therapy by adapting the degree of realism or intensity of the phobic stimuli presented either in VR or in AR (Moschovitis et al., 2017). Similarly, adaptive systems may be used in physical therapy to monitor patients' progress and offer real-time feedback on their movements, allowing therapists to change the intensity of exercises to guarantee optimal recovery and rehabilitation (Bouquet et al., 2017; Bohn et al., 2017).
In educational settings, physiologically adaptive systems can be used to improve learning outcomes. Recently, many companies and educational institutions have allocated considerable resources to transitioning from traditional desktop education to immersive MR applications, expecting that a higher level of immersion would correspond to increased motivation and learning. Physiological monitoring can aid in technology-based educational decision-making by supporting the cognitive (i.e., information processing) (Pope et al., 2016), emotional (i.e., frustration) (Pope et al., 2016), and motivational and metacognitive (i.e., self-regulation) (Moschovitis et al., 2017) behaviours of learners (Pope et al., 2016). Related to educational settings are also the professional training MR environments (Pope et al., 2016).
Finally, the entertainment industry can benefit from the design of physiologically-adaptive games (Pope et al., 2016). Besides adjusting the game realism to support immersion (Pope et al., 2016) or employing dynamic difficulty adjustments (Pope et al., 2016), adaptive gaming can pursue and drive interactions towards less socially acceptable goals. For example, Moschovitis and Denisova (Pope et al., 2016) showed how they could increase game engagement using a biofeedback-controlled game that elicited physiological responses associated with fear and anxiety. Their results show how stimuli perceived as unpleasant on the surface might result in a positive subjective outcome. Finally, gamification approaches can benefit entertainment purposes and be applied and generalized to different settings, such as therapy, treatment of anxiety and cognitive rehabilitation and training.
### Ethical and Privacy Considerations for Implementing Physiologically Adaptive Systems in Mixed Reality
While our perspective endorses a progressive implementation and investigation of physiologically adaptive systems in MR, we also have to foresee downsides and considerations regarding ethics and privacy.
One of the primary ethical considerations for systems that employ data over which users do not have complete explicit control is the issue of informed consent. Users must be fully aware of how physiological data are collected, used, and shared. This is relevant when their data are employed for model training and validation.
Secondly, physiological states can underlie different emotional valences, implying that such systems might manipulate or influence users' emotions. Therefore, researchers must prioritize ethical design and inform participants about which state the system is optimizing for. Lastly, they should allow participants to return to a neutral affective state if users perceive their final state as undesirable. This is critical as users must retain control over the adaptation and state adjustment process.
Third, privacy concerns are associated with physiologically adaptive systems in MR. This perspective was already raised by Fairclough (Fairclough, 2017), highlighting how symmetrical interaction and adaptation between systems and users might lead to asymmetrical data usage and protection. Again, Hancock and Szalma (Hancock and Szalma, 2017) highlight that if a physiological computing system respects data protection rights, individuals should retain formal and legal ownership of their psychophysiological data. This implies that any third party should receive access to such information only with approval by the user. This is relevant considering that physiological data might not only underlie specific cognitive or affective states but could also be used for medical diagnostic purposes.
An initial compromise solution is using a privacy-by-design approach by embedding privacy considerations into every stage of the design and development process. This includes conducting privacy impact assessments, implementing privacy-enhancing technologies, and using privacy-preserving data collection in every implementation stage of the physiologically-adaptive systems.
## 5. Conclusion
In conclusion, MR technology holds great potential for creating immersive and engaging experiences, especially when employing physiologically adaptive systems that allow users to interact with personalized visualizations, contents and interactions. We highlighted how MR experiences could overcome challenges and limitations by embedding biocybernetic paradigms in their systems and outlined future concerns for their implementation. HCI, MR, and adaptive systems research fields can all benefit from the enormous potential of adopting and exploring physiological computing and interaction paradigms. However, such opportunities will only be realized if these fundamental difficulties are addressed by present research in this area.
## Acknowledgments
Francesco Chiossi was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project ID 251654672, TRR 161.
|
2309.12683 | Accelerating the laser-induced phase transition in nanostructured FeRh
via plasmonic absorption | By ultrafast x-ray diffraction we show that the laser-induced
magnetostructural phase transition in FeRh nanoislands proceeds faster and more
complete than in continuous films. We observe an intrinsic 8 ps timescale for
nucleation of ferromagnetic (FM) domains in both types of samples. For the
continuous film, the substrate-near regions, which are not directly exposed to
light, are only slowly transformed to the FM state by domain wall motion
following heat transport. In contrast, numerical modeling of the plasmonic
absorption in the investigated nanostructure reveals a strong contribution near
the FeRh/MgO interface. On average, the absorption is larger and more
homogeneous in the nanoislands, enabling the phase transition throughout the
entire volume at the intrinsic nucleation timescale. | Maximilian Mattern, Jan-Etienne Pudell, Jon Ander Arregi, Jakub Zlámal, Radek Kalousek, Vojtěch Uhlíř, Matthias Rössle, Matias Bargheer | 2023-09-22T07:44:42Z | http://arxiv.org/abs/2309.12683v2 | # Accelerating the laser-induced phase transition in nanostructured FeRh via plasmonic absorption
###### Abstract
By ultrafast x-ray diffraction we show that the laser-induced magnetostructural phase transition in FeRh nanoislands proceeds faster and more complete than in continuous films. We observe an intrinsic 8 ps timescale for nucleation of ferromagnetic (FM) domains in both types of samples. For the continuous film, the substrate-near regions, which are not directly exposed to light, are only slowly transformed to the FM state by domain wall motion following heat transport. In contrast, numerical modeling of the plasmonic absorption in the investigated nanostructure reveals a strong contribution near the FeRh/MgO interface. On average, the absorption is larger and more homogeneous in the nanoislands, enabling the phase transition throughout the entire volume at the intrinsic nucleation timescale.
Reducing the structure-size of metallic ferromagnets to the nanoscale not only helps increasing the information storage density but also enables direct plasmonic coupling of light to the magnetic nano-bit for magnetoplasmonic control and readout [1; 2]. This is a particularly exciting perspective in the context of femtosecond optomagnetism [3] with ultrafast optical manipulation of magnetic properties such as a polarization control of two magnetic nanolayers mediated by plasmon-polaritons [4] and plasmonic enhanced all-optical switching in magnetic nanodisks [5]. Heat assisted magnetic recording (HAMR) [6; 7] already uses optical near fields to confine the magnetic switching in the new generation of magnetic hard drives to a few nanometers. Resonant magnetic x-ray scattering studies confirmed plasmonically enhanced ultrafast switching for nano-granular FePt thin films, which constitute the classical HAMR material [8].
The potential consequences of nanostructuring FeRh go well beyond plasmonic coupling. Lateral nanostructuring limits the number of independent nucleation sites through the antiferromagnetic-to-ferromagnetic (AF-FM) phase transition around 370 K, which changes the nature of magnetization reversal from multi-domain to single-domain and results in discrete avalanche-like jumps of the order parameter upon cooling [9; 10]. In thermal equilibrium, the phase transition that is accompanied by a 1% volume expansion, crucially depends on the lattice structure. The tetragonal distortion of the unit cell originating from an in-plane substrate-induced compression enhances the transition temperature [11; 10; 12]. In FeRh nanoislands, the partial relaxation of this tetragonal distortion reduces the transition temperature [10; 13]. Generally, in-plane nano-structuring unlocks in-plane expansion on the picosecond timescale in contrast to the exclusive out-of-plane expansion of laser-excited continuous thin films [14]. The three-dimensional nature of the picosecond strain response of nanoislands preserves bulk-like material-specific expansion properties and results in a complex strain response due to cross-talk of in- and out-of-plane expansion via the Poisson effect [15; 16; 17; 18].
Previous experiments studied the laser-induced phase transition in FeRh by the emergence of magnetization [19; 20; 21; 22], changes in the electronic structure [23] and the rise of the larger FM lattice constant [24; 25; 26]. Probing the structural order parameter by ultrafast x-ray diffraction (UXRD), we recently disentangled FM domain nucleation and growth in inhomogeneously excited FeRh continuous films, whose thicknesses exceed the optical penetration depth [26]. We identified a universal 8 ps nucleation timescale in FeRh, which depends neither on the film thickness and temperature nor on the applied laser fluence and magnetic field [26]. The effects of nanostructuring on the coupled ultrafast dynamics of demagnetization [17], remagnetization [27] and strain [15; 16; 17] have been thoroughly studied for FePt. Ultrafast experiments on FeRh nanoislands that study the influence of the in-plane expansion, the reduced number of nucleation sites and plasmonic excitation have been lacking up to now.
In this letter, we explore for the first time the kinetics of the laser-driven phase transition in FeRh nanoislands by probing the rise of a larger lattice constant parameterizing the FM phase as structural order parameter via UXRD. In order to access the effect of finite lateral dimensions, we compare the results to a similarly thick continuous FeRh film as reference. In the nanoislands,
the AF-FM phase transition drives a partial in-plane expansion of the nanoislands both in equilibrium and on ultrafast timescales. Upon laser excitation, we observe the same 8 ps nucleation timescale in both samples indicating an intrinsic property irrespective of the sample morphology. However, while we observe a relatively slow heat transport-driven domain growth in the thin film, the phase transition of the nanostructured film occurs precisely on the intrinsic timescale of domain nucleation. By modeling the absorption of the nanostructures, we relate this acceleration of the phase transition to a homogeneous optical excitation due to plasmonic effects enabled by the size of the metal islands below the excitation wavelength.
Figures 1(a) and (b) sketch the continuous and nanostructured FeRh film grown on MgO(001) substrates. The continuous 55 nm thick FeRh(001) film was grown by magnetron sputtering from an equiatomic FeRh target [11] and capped by 2 nm of Pt. The nanostructured sample is composed of epitaxial FeRh(001) nanoislands formed by solid state dewetting of an epitaxial FeRh(001) film via self-assembly resulting in maze-like structures [13] with a mean height of 52 nm.
Static and time-resolved reciprocal space maps around the (002) FeRh Bragg peak are recorded at the KMC-3 XPP endstation at BESSY II in the low-alpha operation mode [28] with monochromized 9 keV hard x-ray photons. The diffracted intensity in Figs. 1(c) and (f) displays the emergence of an additional Bragg peak at lower values of the out-of-plane reciprocal lattice coordinate \(q_{z}\) when the temperature approaches the mean transition temperature of the thin film (370 K) and the nanoislands (350 K), respectively. The integrated intensity of this Bragg peak is directly proportional to the volume of FeRh exhibiting the FM phase [24] and thus parameterizes the FM phase during the temperature-driven AF-FM phase transition. The proportion of the FM Bragg peak in the total intensity yields the temperature-dependent FM volume fraction \(V_{\text{FM}}\). Figures 1(d) and (g) compare this structural parameterization of the phase transition (symbols) to the macroscopic magnetization normalized to its maximum (solid lines) serving as complementary order parameter of the FM phase. The magnetization is measured via vibrating sample magnetometry (VSM) using a QuantumDesign VersaLab magnetometer, which results in a broadening of the hysteresis by the heterogeneous transition temperature at different sample sites in contrast to the narrow hysteresis of the structural order parameter locally (\(300\times 300\,\mathrm{\SIUnitSymbolMicro m}^{2}\)) measured via x-ray diffraction.
The comparison of the two samples reveals a dependence of the AF-FM phase transition in thermal equilibrium on the sample morphology. The enhanced surface-to-volume ratio of the nanoislands results in a noticeable residual FM phase that persists below the transition temperature \(T_{\text{T}}\) at the symmetry breaking surface [29] and at the film-substrate interface [30]. In addition, the small lateral extent of the islands partially relaxes the substrate-induced tetragonal distortion of FeRh, which lowers the transition temperature for the nanoislands [10; 11; 13]. This is indicated by the lower mean out-of-plane lattice constant \(d\) with respect to the continuous film (see Figs. 1(e) and (h)) given by the center-of-mass (COM) of the diffracted intensity via \(d=4\pi/q_{z,\text{COM}}\). This applies in particular to the out-of-plane expansion associated with the phase transition. While we find 0.4% expansion for the nanoislands close to the bulk value of 0.3 % [31], the substrate-induced clamping of the in-plane expansion suppresses the Poisson effect [14] and results in an out-of-plane expansion of 0.6% for the thin film. Accounting for the different substrate-induced constraints of the in-plane expansion, the out-of-plane expansion of the FeRh samples is de
Figure 1: **Morphology-dependent phase transition in thermal equilibrium:** Sketch of the UXRD experiment mapping the reciprocal space via \(\theta-2\theta\)-scans and the sample structure of the continuous film (a) and the nanoislands (b). Panels (c–e) and (f-h) characterize the equilibrium AF-FM phase transition in the continuous film and the nanostructures, respectively. (c, f) The diffracted intensity (grey symbols) is the superposition of an AF and an arising FM Bragg peak at a larger out-of-plane lattice constant during heating above \(T_{\text{T}}\). (d, g) Temperature-dependent ferromagnetic volume fraction \(V_{\text{FM}}\) determined by the relative integrated intensity of the FM Bragg peak (symbols) as structural order parameter and the magnetization normalized to its maximum value as magnetic order parameter (solid lines). (e, h) Temperature-dependent lattice constant (symbols) modeled by Eq. (1) using bulk expansion parameters (solid lines).
scribed by [14]:
\[\alpha_{\rm eff}=\alpha_{\rm bulk}(T)+2\chi\frac{c_{1133}}{c_{3333}}\left(\alpha_ {\rm bulk}(T)-\alpha_{\rm MgO}\right)\, \tag{1}\]
where \(\alpha_{\rm bulk}(T)\) and \(\alpha_{\rm MgO}=10.5\cdot 10^{-6}\rm K^{-1}\) denote the thermal expansion coefficients of bulk FeRh and MgO, respectively. The expansion of FeRh \(\alpha_{\rm bulk}(T)\) is given by the expansion coefficients \(\alpha^{\rm FM}=6.0\cdot 10^{-6}\rm K^{-1}\) and \(\alpha^{\rm AF}=9.7\cdot 10^{-6}\rm K^{-1}\) in the AF and FM phase [32] and the expansion of 0.3 % across the AF-FM phase transition [31], considering the temperature-dependent volume fraction in the FM phase \(V_{\rm FM}(T)\), which we derived from the integrated intensity of the FM Bragg peak in Figs. 1(c) and (f). The elastic constants of FeRh \(c_{1133}\) and \(c_{3333}\) quantify the effect of the in-plane expansion on the out-of-plane expansion via the Poisson effect [14] and \(\alpha_{\rm eff}\) denotes the modified expansion coefficient of the samples depending on the parameter \(\chi\). This parameter measures the epitaxy to the substrate, where \(\chi=0\) corresponds to pure bulk-like in-plane expansion and \(\chi=1\) to an in-plane expansion completely determined by the MgO substrate.
Our modeling of the temperature-dependent lattice constant in Figs. 1(e) and (h) (symbols) by Eq. (1) (solid line) yields excellent agreement for \(\chi=1\) and \(\chi=0.42\) for the continuous thin film and the nanoislands, respectively. While the thin film purely follows the in-plane expansion of the MgO substrate (\(\chi=1\)), the nanoislands behave partially bulk-like (\(\chi=0.42\)). This relaxation of the in-plane constraints is expected to increase towards the surface and to depend on the in-plane dimensions of the different nanoislands [33].
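Equation (1) is straightforward to evaluate numerically. The sketch below is an illustration only: the elastic-constant ratio \(c_{1133}/c_{3333}\) is not quoted in the text, so a placeholder value is used, and \(\alpha_{\rm bulk}\) is passed in as a single number (here the bulk AF-phase value) rather than as the full temperature-dependent expression.

```python
# Minimal sketch of Eq. (1). C_RATIO = c_1133 / c_3333 is a placeholder value;
# alpha_bulk would in general be the temperature-dependent bulk coefficient built
# from the AF/FM values and the 0.3% transition expansion weighted by V_FM(T).

ALPHA_MGO = 10.5e-6   # K^-1, MgO substrate
C_RATIO = 0.5         # c_1133 / c_3333, illustrative placeholder

def alpha_eff(alpha_bulk, chi):
    """chi = 1: in-plane expansion fully set by MgO (continuous film);
    chi = 0: fully bulk-like in-plane expansion."""
    return alpha_bulk + 2.0 * chi * C_RATIO * (alpha_bulk - ALPHA_MGO)

alpha_af = 9.7e-6                       # bulk AF expansion coefficient [32]
print(alpha_eff(alpha_af, chi=1.0))     # continuous film
print(alpha_eff(alpha_af, chi=0.42))    # nanoislands (partially relaxed)
```

Whether clamping enhances or reduces the out-of-plane expansion depends on the sign of \(\alpha_{\rm bulk}-\alpha_{\rm MgO}\); during the transition \(\alpha_{\rm bulk}\) is large, which is why the film shows the enhanced 0.6% out-of-plane expansion compared to 0.4% for the islands.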
In the UXRD experiment, the FeRh samples are excited by a 600 fs p-polarized pump pulse with a central wavelength of 1028 nm that is incident at \(\approx 30^{\circ}\) with respect to the sample surface. As sketched in Fig. 1(a), we probe the emergence of the FM Bragg peak that parameterizes the laser-induced AF-FM phase transition by 17 ps 9 keV hard x-ray pulses [28] performing symmetric \(\theta\)-2\(\theta\) scans around the (002) Bragg reflection at \(28^{\circ}\).
Figures 2(a) and (b) display the diffracted intensity along \(q_{z}\) before and after an excitation with 12.0 mJ/cm\({}^{2}\) at 340 K for the thin film and with 5.2 mJ/cm\({}^{2}\) at 230 K for the nanoislands, respectively. The emerging FM Bragg peaks indicate the optically induced AF-FM phase transition for both samples. The AF and FM Bragg peaks are well separated for the thin film. For the nanoislands, an ultrafast in-plane expansion on short timescales is enabled by their small lateral extent [14]. The concomitant transient out-of-plane Poisson-contraction results in less separated Bragg peaks (Fig. 2(b)) for the nanoislands. This indicates a reduced tetragonal distortion of the unit cell upon laser-excitation and the emergence of more cubic FM unit cells upon nucleation in the nanoislands as already discussed for
Figure 2: **Reduced laser-induced out-of-plane expansion in FeRh nanostructures:** (a) Transient Bragg peaks of the thin film for an excitation of 12.0 mJ/cm\({}^{2}\) at 340 K dissected into the FM (green) and AF (blue) Bragg peak that are well-separated in reciprocal space. (b) In the nanoislands, the FM (pink) and AF (purple) Bragg peak are less separated due to the partial in-plane expansion of the unit cell across the laser-induced phase transition for 5.2 mJ/cm\({}^{2}\) at 230 K. The data for different pump-probe delays are vertically offset to improve visibility.
Figure 3: **Nucleation-dominated phase transition in FeRh nanoislands:** (a) Transient FM volume fraction of the nanoislands at \(T=190\) K for various fluences \(F\). (b) Same for \(T=230\) K, which increases the conversion to the FM state at low fluence. (c) Temperature series at a relatively low fluence of \(F=8.6\) mJ/cm\({}^{2}\) for the thin film. (d) Same for \(F=12.0\) mJ/cm\({}^{2}\). The two-step rise of \(V_{\rm FM}\) in the thin film (c and d) indicates a growth of the nucleated FM domains into the depth of the layer driven by near-equilibrium heat transport. In all panels, solid lines denote the kinetics of FM domain nucleation according to Eq. (2) convoluted with the time-resolution given by the duration of the x-ray pulse.
In addition to the lattice constant change across the phase transition, the dynamics of the laser-induced phase transition also depends on the sample morphology. While the integrated intensity of the FM Bragg peak barely changes between 40 and 240 ps for the nanoislands, the FM Bragg peak increases after 40 ps for the thin film.
Figure 3 displays the resulting transient FM volume fraction \(V_{\mathrm{FM}}\) for both samples under various excitation conditions. The solid lines denote the expected dynamics for nucleation of FM domains at independent sites described by Eq. (2) with the previously identified universal nucleation timescale \(\tau=8\) ps [26] convoluted with the 17 ps-long x-ray pulse limiting the time-resolution. The final FM volume fraction \(V_{\mathrm{FM}}^{*}\) is adjusted to the experimental value of \(V_{\mathrm{FM}}(t=40\) ps) for the respective measurement and we include a finite residual FM phase in the nanoislands being present before excitation:
\[V_{\mathrm{FM}}(t)=V_{\mathrm{FM}}^{*}\cdot\left(1-e^{-t/\tau}\right)\;. \tag{2}\]
For the nanoislands, the transient FM volume fraction in Figs. 3(a-b) is well described by Eq. (2) indicating that nucleation dominates the laser-induced AF-FM phase transition. With increasing fluence and initial sample temperature a larger fraction of the nanoislands is excited above the critical threshold characteristic for first-order phase transitions [20; 24], which results in an enhanced \(V_{\mathrm{FM}}^{*}\). Figs. 3(c-d) show that within the first 40 ps the laser-induced phase transition in the continuous film is equally well described by Eq. (2) (solid lines) with the same 8 ps nucleation timescale. Thus the nucleation kinetics of the AF-FM phase transition in FeRh is insensitive to the substrate-induced or transient tetragonal distortion of the unit cell that is smaller in case of the nanoislands as identified by the less separated AF and FM Bragg peaks shown in Fig. 2.
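The comparison with Eq. (2) convolved with the probe duration can be reproduced with a few lines of numpy. In the sketch below the 17 ps x-ray pulse is modeled as a Gaussian, which is an assumption, and \(V_{\mathrm{FM}}^{*}\) is an arbitrary illustrative value; only the 8 ps time constant and the pulse length are taken from the text.

```python
import numpy as np

# Nucleation kinetics of Eq. (2), broadened by the finite x-ray probe duration.
tau = 8.0          # ps, intrinsic nucleation timescale
fwhm = 17.0        # ps, x-ray pulse duration (Gaussian shape assumed here)
v_star = 0.6       # final FM volume fraction V_FM*, illustrative; adjusted per measurement

t = np.arange(-100.0, 300.0, 0.1)                                 # ps
v_ideal = np.where(t > 0, v_star * (1.0 - np.exp(-t / tau)), 0.0)

sigma = fwhm / 2.355                                              # FWHM -> sigma
kernel = np.exp(-0.5 * ((t - t.mean()) / sigma) ** 2)
kernel /= kernel.sum()
v_measured = np.convolve(v_ideal, kernel, mode="same")            # resolution-limited signal
```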
However, we observe a second slow contribution to the rise of \(V_{\mathrm{FM}}\) in the continuous film at initial sample temperatures above 300 K. In a previous publication [26], we revealed this two-step rise of the FM phase to originate from a nucleation of FM domains within the optically excited near surface region and a subsequent growth of the domains into the depth of the inhomogeneously excited layer driven by near-equilibrium heat transport. The equilibration of the phonon temperature within the continuous film via heat diffusion slowly heats the backside of the film and induces the phase transition if the temperature surpasses \(T_{\mathrm{T}}\). For initial sample temperatures only slightly below \(T_{\mathrm{T}}\) and higher excitation fluences a larger fraction of the film is heated above \(T_{\mathrm{T}}\) and undergoes the phase transition [26]. This results in an enhanced delayed contribution arising from domain growth as displayed in Figs. 3(c-d).
To explain the absence of such a heat transport-driven domain growth in the nanoislands we calculate their optical absorption for the experimental excitation conditions in COMSOL utilizing the refractive index of MgO \(n_{\mathrm{MgO}}=1.72\)[34] and of FeRh \(n_{\mathrm{FeRh}}=4.67+5.58i\). This was measured via spectroscopic ellipsometry for a similar FeRh film, grown under the same conditions. For this we reproduce the topography of the nanoislands characterized by AFM (Fig. 4(a)) in COMSOL by ellipsoids utilizing an algorithm for rapid contour detection [35] (see Fig. 4(b)). Figure 4(c) displays the local power absorption \(P_{\mathrm{abs}}\) of the nanostructures that reveals the existence
Figure 4: **Optical excitation of nanostructured FeRh:** Re-build of the topography of the FeRh nanostructures characterized by AFM (a) in COMSOL (b). (c) Local absorbed power per area of the FeRh nanostructures calculated in COMSOL by solving the Maxwell equations. (d) Local optical penetration depth determined from \(P_{\mathrm{abs}}\) as function of the depth relative to the local height \(h\). (e) Absorption of different nanoislands as function of \(z\) at \(y=3.2\,\mathrm{\SIUnitSymbolMicro m}\). (f) Integrated absorption of the nanoislands as function of \(z\) (blue symbols). The purple solid line displays the \(z\)-dependent absorption of an hypothetical ensemble of continuous FeRh films resembling the height distribution of the nanoislands and the grey line denotes the absorption profile of the continuous 55 nm thick FeRh film.
of plasmonic hot-spots with drastically enhanced absorption. By fitting an exponential decay function to the local \(z\)-dependent absorption (FeRh-MgO interface corresponds to \(z=0\)), we find a large spread of the optical penetration depth \(\delta_{\text{p}}\). Figure 4(d) shows this distribution relative to the semi-infinite medium value \(\delta_{\text{p,0}}=14.7\,\)nm. Yellow color-code indicates a locally strongly enhanced optical penetration depth due to nanostructuring. The exemplary lineout of the dissipated power at \(y=3.2\,\)um in Fig. 4(e) depicts the \(z\)-dependent absorption as a function of the in-plane \(x\) coordinate, with characteristic plasmonic enhancement near the FeRh-MgO interface [36] that is responsible for the local absorption hot-spots shown in Fig. 4(c). This increases the absorption at depth that in thin films receives only a negligible exponentially damped amount of light energy.
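Extracting the local penetration depth amounts to a per-pixel exponential fit of the simulated depth-dependent absorption. A minimal sketch is given below, with a synthetic profile standing in for the COMSOL output; only the semi-infinite value of 14.7 nm comes from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(z, p0, delta_p):
    # Exponential decay of the absorbed power with depth z (FeRh-MgO interface at z = 0).
    return p0 * np.exp(-z / delta_p)

z = np.linspace(0.0, 50.0, 60)                                   # nm
p_abs = decay(z, 1.0, 14.7) + 0.02 * np.random.rand(z.size)      # stand-in for one (x, y) pixel

(p0_fit, delta_fit), _ = curve_fit(decay, z, p_abs, p0=(1.0, 10.0))
print(delta_fit / 14.7)   # local penetration depth relative to delta_p,0 = 14.7 nm
```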
Both the penetration depth enhancement and the optical power dissipation build-up near the FeRh-MgO interface make the absorption of the pump-pulse in the nanostructures more homogeneous with respect to the out-of-plane \(z\)-coordinate than in a continuous thin film. The average total absorption in the nanostructures as function of the distance from the FeRh-MgO interface (\(z=0\)) is displayed by symbols in Fig. 4(f). In addition, the grey solid line denotes the \(z\)-dependent absorption of a \(55\,\)nm FeRh film scaled by the surface coverage of the nanoislands (\(49\,\%\)) in the studied sample. Its thickness is comparable to the average nanoisland height of \(52\,\)nm. Lastly, the purple solid line represents the integrated \(z\)-dependent optical absorption from each pixel assuming its local profile is identical to that of a continuous film of an equivalent thickness. The decrease of the absorption for large \(z\) values (purple line) agrees with the COMSOL simulation additionally including plasmonic effects (symbols) and exclusively originates from the decreasing number of nanoislands higher than the \(z\) value. However, the average absorption of nanostructures (symbols) for \(z<30\,\)nm is much larger than that predicted by the pixel-integrated absorption of equivalent thin films, which neglects the plasmonic enhancement near the FeRh-MgO interface. Comparing the area under the grey and dotted curves in Fig. 4(f), we find that the total \(z\)-integrated optical absorption of the nanostructures amounts to \(34\,\%\) of the incident power, which exceeds the absorption of the \(55\,\)nm-thick continuous FeRh film with the same value by a factor of \(1.5\). In essence, the optical excitation of the nanostructures is significantly more homogeneous along their vertical extent than for the thin film case, where only the near surface region receives a sizeable light intensity.
This stronger and almost homogeneous excitation of the complete volume of the nanostructures supports a nucleation-driven phase transition throughout the entire nanoislands. This suppresses the slow phase transition by domain wall motion observed for the thin film (Figs. 3(c-d)), which is driven by near-equilibrium heat-transport. This drastically accelerates the laser-induced phase transition in FeRh nanostructures with small lateral extension that is even more efficiently driven due to the overall enhanced absorption.
In summary, we studied the morphology dependence of the laser-induced AF-FM phase transition by comparing a continuous and a nanostructured thin FeRh film. We find an ultrafast in-plane expansion of the nanoislands, whereas the thin FeRh film is pinned to the MgO. This results in a less tetragonal distortion of the unit cell across the phase transition, however, it has no influence on the nucleation timescale of the FM domains. Instead, only plasmonic effects change the dynamics of the phase transition: By modelling the spatially resolved optical absorption of the FeRh nanostructures, we identified a plasmon-enhanced absorption near the FeRh-MgO interface and an enhanced optical penetration depth. This results in a homogeneous excitation of the nanoislands, which drives a nucleation of FM domains on an \(8\,\)ps timescale within the volume of the FeRh nanostructures and makes slow heat-transport driven domain growth irrelevant. This accelerates the phase transition in comparison with the thin film that exhibits nucleation only within the optically excited near-surface region and shows a subsequent slow growth of the FM phase into the depth of film at initial sample temperatures slightly below the transition temperature.
We acknowledge the DFG for financial support via Project-No. 328545488 - TRR 227 project A10 and the BMBF for funding via 05K22IP1. Access to the CEITEC Nano Research Infrastructure was supported by the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic under the project CzechNanoLab (LM2023051). Measurements were carried out at the KMC3-XPP instrument at the BESSY II electron storage ring operated by the Helmholtz-Zentrum Berlin fur Materialien und Energie.
|
2309.05499 | Zero-Shot Co-salient Object Detection Framework | Co-salient Object Detection (CoSOD) endeavors to replicate the human visual
system's capacity to recognize common and salient objects within a collection
of images. Despite recent advancements in deep learning models, these models
still rely on training with well-annotated CoSOD datasets. The exploration of
training-free zero-shot CoSOD frameworks has been limited. In this paper,
taking inspiration from the zero-shot transfer capabilities of foundational
computer vision models, we introduce the first zero-shot CoSOD framework that
harnesses these models without any training process. To achieve this, we
introduce two novel components in our proposed framework: the group prompt
generation (GPG) module and the co-saliency map generation (CMP) module. We
evaluate the framework's performance on widely-used datasets and observe
impressive results. Our approach surpasses existing unsupervised methods and
even outperforms fully supervised methods developed before 2020, while
remaining competitive with some fully supervised methods developed before 2022. | Haoke Xiao, Lv Tang, Bo Li, Zhiming Luo, Shaozi Li | 2023-09-11T14:42:04Z | http://arxiv.org/abs/2309.05499v3 | # Zero-shot Co-salient Object Detection Framework
###### Abstract
Co-salient Object Detection (CoSOD) endeavors to replicate the human visual system's capacity to recognize common and salient objects within a collection of images. Despite recent advancements in deep learning models, these models still rely on training with well-annotated CoSOD datasets. The exploration of training-free zero-shot CoSOD frameworks has been limited. In this paper, taking inspiration from the zero-shot transfer capabilities of foundational computer vision models, we introduce the first zero-shot CoSOD framework that harnesses these models without any training process. To achieve this, we introduce two novel components in our proposed framework: the group prompt generation (GPG) module and the co-saliency map generation (CMP) module. We evaluate the framework's performance on widely-used datasets and observe impressive results. Our approach surpasses existing unsupervised methods and even outperforms fully supervised methods developed before 2020, while remaining competitive with some fully supervised methods developed before 2022.
Haoke Xiao\({}^{1}\), Lv Tang\({}^{2}\), Bo Li\({}^{2}\), Zhiming Luo\({}^{1}\), Shaozi Li\({}^{1}\)
\({}^{1}\) Institute of Artificial Intelligence, Xiamen University, Xiamen, China
\({}^{2}\) VIVO Mobile Communication Company Ltd, Shanghai, China
Index Terms: Zero-shot Co-saliency Detection, Foundational Computer Vision Model.
Footnote †: Email: [email protected], [email protected], [email protected], [email protected] and [email protected].
## 1 Introduction
Co-salient object detection (CoSOD) is a task that seeks to replicate the human visual system's ability to identify common and salient objects from a set of related images. One of the unique challenges in CoSOD is that co-salient objects belong to the same semantic category, but their specific category attributes remain unknown. These distinctive characteristics have made CoSOD an emerging and demanding task that has gained rapid traction in recent years [1, 2, 3].
Detecting co-saliency fundamentally hinges on accurately modeling the inter relation within an image group. To tackle this challenge, various meticulously designed architectures have been proposed, including RNN-based methods [5, 9], CNN-based methods [10, 6, 11], and Transformer-based methods [8, 12]. While these methods have achieved impressive performance, they often rely on small-scale datasets and require the integration of complex network modules. It's important to highlight that previous studies [1, 9] show that changing the training data while keeping the same network architecture, or modifying the backbone network while using the same training data can significantly impact network performance. This suggests that improving network performance might be attainable by employing a more robust backbone network or higher-quality training datasets. These considerations prompt us to reevaluate whether addressing the CoSOD task necessitates the design of intricate and diverse modules, or if an alternative approach should be explored.
Recently, foundational computer vision (CV) models, such as SAM [13] and DINO [14], have emerged. These models, once trained, can be seamlessly applied to various downstream tasks in a zero-shot manner, eliminating the need for dataset-specific fine-tuning. This prompts us to explore whether these CV foundational models can be harnessed for CoSOD. However, existing CV foundational models, like SAM, are tailored for single-image tasks and lack the capacity to discern inter-saliency relations within an image group. Moreover, manually supplying SAM with inter-saliency or group prompts, which aid in co-saliency map generation, is impractical due to the nature of the CoSOD task.
To tackle the aforementioned challenges, we present an innovative zero-shot CoSOD framework that leverages foundational computer vision models. As depicted in Fig. 1, our framework consists of two main components: group prompt generation (GPG) and co-saliency map generation (CMP). In the GPG module, we initially extract high-level semantic information from each image using the CV foundational model. We also explore the supplementation of low-level spatial de
Figure 1: Left: The architecture of our proposed zero-shot CoSOD framework. Right: The performance of our proposed zero-shot CoSOD framework. GWD [4], RCAN [5], ICNet [6], CADC [7] and UFO [8] are five typical methods.
tails using Stable Diffusion (SD) [15], which may not be captured by the foundational model. Subsequently, we combine these pieces of information to generate group prompts. These prompts, created by the GPG module, serve as input for the CMP module. As depicted in Fig. 1, our network surpasses methods developed before 2021. Our key contributions are:
* We take the pioneering step of introducing a zero-shot CoSOD framework, potentially inspiring researchers to address the CoSOD from a fresh perspective.
* To address the limitations of existing CV foundational model when applied to CoSOD task, we further design the GPG and CMP modules.
* We validate our zero-shot CoSOD framework on three widely used datasets (CoCA [16], CoSOD3k [2] and Cosal2015 [17]), and the performance shows the effectiveness of our proposed zero-shot framework.
## 2 Method
In Fig. 2, since the CMP module directly utilizes SAM, our emphasis lies in providing an in-depth description of the key components within GPG. GPG encompasses feature extraction, group center proxy generation, and TopK selection.
### Feature Extraction
**High-level Feature Extraction.** Existing works [18, 14] demonstrate that self-supervised ViT features, as exemplified by DINO, contain explicit information for semantic segmentation and excel as KNN classifiers. In essence, DINO is adept at accurately extracting the semantic content of each image, a vital aspect in discerning the group features within an image set. Herein, we choose the 11th layer feature \(\mathcal{F}_{DINO}\) to represent the semantic information of each image.
**Low-level Feature Extraction.** While DINO excels in providing substantial high-level semantics, it falls short in delivering nuanced low-level spatial information. As shown in the second column of Fig. 3, the group feature generated solely from DINO lacks low-level detailed information. As emphasized in previous studies [19, 9], both low-level and high-level features are pivotal for modeling group features. However, it's noteworthy that there is a research gap regarding the supplementation of low-level spatial information to features extracted by DINO in a zero-shot manner.
In our proposed network, the inclusion of a pre-trained model that specializes in low-level spatial information becomes crucial, particularly in scenarios lacking strong texture cues. Such a model can effectively complement the low-level spatial information extracted by DINO. Notably, SD [15] has recently showcased its exceptional ability to generate high-quality images. This underscores its potential for robustly representing images, encompassing both content and spatial information. Consequently, our primary objective is to explore whether SD features can enhance the establishment of inter-relationships when combined with DINO.
Figure 3: The generated group features.
Figure 2: The architecture of our proposed zero-shot CoSOD framework. Feature extraction is accomplished by utilizing **DINO** and **SD** to extract both high-level and low-level information. The CMP module employs **SAM** to generate the co-saliency maps. Importantly, all parameters in the network remain frozen, eliminating the need for additional training.
The architecture of SD comprises three key components: an encoder \(\mathcal{E}\), a decoder \(\mathcal{D}\), and a denoising U-Net \(\mathcal{U}\) operating within the latent space. We begin by projecting an input image \(x_{0}\) into the latent space through the encoder \(\mathcal{E}\), resulting in a latent code \(z_{0}\). Subsequently, we add Gaussian noise \(\epsilon\) to the latent code according to a predefined time step \(t\). Lastly, with the latent code \(z_{t}\) at time step \(t\), we then extract the SD features \(\mathcal{F}_{SD}\) utilizing the denoising U-Net:
\[\mathcal{F}_{SD}=\mathcal{U}(z_{t},t),\;z_{t}=\sqrt{\bar{a}_{t}}\,z_{0}+\sqrt{1-\bar{a}_{t}}\,\epsilon,\;z_{0}=\mathcal{E}(x_{0}). \tag{1}\]
In accordance with the approach introduced in [20], we combine features extracted from different decoder \(\mathcal{D}\) layers, specifically layers 2, 5, and 8, to capture multi-scale features. However, a direct concatenation of features from these three layers results in an excessively high-dimensional feature vector, approximately 5440 dimensions. To address this issue, we employ Principal Component Analysis (PCA) for each feature layer. Subsequently, we upsample lower-resolution features (i.e., layers 2 and 5) to match the resolution of the higher-resolution layer (i.e., layer 8) before concatenation.
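A minimal sketch of this multi-scale assembly is given below; the decoder-layer shapes, the per-layer PCA dimension and the use of torch.pca_lowrank are illustrative choices, not the exact implementation.

```python
import torch
import torch.nn.functional as F

# Sketch: per-layer PCA to a reduced dimension, upsampling of the coarser maps to
# the layer-8 resolution, then channel-wise concatenation. Shapes are illustrative.

def pca_reduce(feat, dim=256):
    # feat: (C, H, W) -> (dim, H, W), PCA computed over the pixel dimension
    c, h, w = feat.shape
    x = feat.reshape(c, -1).T                   # (H*W, C)
    _, _, v = torch.pca_lowrank(x, q=dim)       # v: (C, dim)
    return (x @ v).T.reshape(dim, h, w)

def fuse_sd_layers(f2, f5, f8):
    # f2, f5: lower-resolution decoder layers; f8: highest-resolution decoder layer
    target = f8.shape[-2:]
    parts = []
    for f in (f2, f5, f8):
        f = pca_reduce(f)
        if f.shape[-2:] != target:
            f = F.interpolate(f[None], size=target, mode="bilinear",
                              align_corners=False)[0]
        parts.append(f)
    return torch.cat(parts, dim=0)              # (3*dim, H8, W8)

f_sd = fuse_sd_layers(torch.randn(1280, 32, 32),
                      torch.randn(640, 32, 32),
                      torch.randn(320, 64, 64))
```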
**Feature Fusion.** Expanding on the aforementioned discussions, we introduce a simple yet remarkably effective fusion strategy aimed at harnessing the advantages of both SD and DINO features. The core idea entails independently normalizing both sets of features to ensure consistency in their scales and distributions, subsequently concatenating them:
\[\mathcal{F}_{FUSE}=Concat(\|\mathcal{F}_{SD}\|_{2},\|\mathcal{F}_{DINO}\|_{2}). \tag{2}\]
In the third column of Fig. 3, the fused feature aids in generating a smoother and more resilient group feature.
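Eq. (2) itself amounts to two tensor operations; a sketch with illustrative shapes, interpreting the normalization as per-pixel L2 normalization along the channel axis:

```python
import torch
import torch.nn.functional as F

def fuse(f_sd, f_dino):
    # f_sd: (C_sd, H, W), f_dino: (C_dino, H, W), assumed already on a common grid.
    f_sd = F.normalize(f_sd, p=2, dim=0)        # scale-align the two feature sets
    f_dino = F.normalize(f_dino, p=2, dim=0)
    return torch.cat([f_sd, f_dino], dim=0)     # fused feature F_FUSE

f_fuse = fuse(torch.randn(768, 64, 64), torch.randn(768, 64, 64))
```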
### Group Center Proxy Generation and TopK Selecting
We acquire the feature \(\mathcal{F}_{FUSE}\in\mathbb{R}^{C\times H\times W}\) for each image, a valuable asset for robust group information generation. However, the subsequent challenge lies in disseminating the group information to individual images. In existing CoSOD methods, group information is often expressed as a feature map, directly concatenated with the original image. This is followed by a trainable decoder for co-saliency map prediction. In our proposed zero-shot framework, training a new decoder network using the CoSOD training dataset is not feasible. To tackle this challenge, we transform the representation of group information and introduce the pixel group center proxy generation and TopK selecting mechanism. Through this innovative approach, we can create group prompt points to represent co-salient objects in each images, which, in turn, prompt SAM to generate the corresponding maps.
Assuming an image group comprises \(N\) images, we concatenate these features to generate the group feature \(\mathcal{F}_{G}\in\mathbb{R}^{N\times C\times H\times W}\). Subsequently, we reshape \(\mathcal{F}_{G}\) into the shape \(\mathbb{R}^{NHW\times C}\), denoting each pixel embedding as \(\mathcal{F}_{G}^{In}\), where \(l\in[1,NHW]\) and \(n\) means the \(n\)-th image. \(\mathcal{F}_{G}^{In}\) means that the \(l\)-th pixel embedding belongs to \(n\)-th image. Then, we use the easy averaging operation on these pixel embeddings to generate the group center proxy \(\mathcal{F}_{c}\). Moreover, to make the group center proxy focus on salient regions, we use the unsupervised SOD method TSDN [21] to filter pixels belonging to non-salient regions in \(\mathcal{F}_{G}^{In}\), generating the salient pixels \(\mathcal{F}_{G}^{In-s}\). The process of the generation of \(\mathcal{F}_{c}\) is written as:
\[\mathcal{F}_{c}=Avg\left\{\mathcal{F}_{G}^{In-s}\right\}\in\mathbb{R}^{C}. \tag{3}\]
Concretely, for \(N\)-th image which contains \(L\) salient pixels, we calculate the correlation score between \(\mathcal{F}_{c}\) and \(\mathcal{F}_{G}^{LN}\), and use TopK to select the point at position \(P^{N}\) in the image \(N\), which can represent common co-salient objects in this image:
\[S^{LN}=\mathcal{F}_{c}\otimes\mathcal{F}^{LN},P^{N}=\text{TopK}(S^{LN})\in \mathbb{R}^{K}, \tag{4}\]
where \(\otimes\) means matrix multiplication. Finally, for the \(N\)-th image, the generated prompts at position \(P^{N}\) (Fig. 1 and Fig. 4) and the corresponding original image is sent to CMP to generate the co-saliency maps. We set \(K=2\) in this paper.
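A sketch of Eqs. (3) and (4) is given below. The tensors are random stand-ins; in the actual pipeline the features are the fused DINO/SD maps, the salient-pixel masks come from the unsupervised SOD method, and the selected \((x,y)\) positions are the point prompts passed to SAM in the CMP module.

```python
import torch

def group_prompts(feats, sal_masks, k=2):
    # feats: list of (C, H, W) fused features; sal_masks: list of (H, W) boolean masks.
    salient = [f[:, m].T for f, m in zip(feats, sal_masks)]        # each (L_n, C)
    center = torch.cat(salient, dim=0).mean(dim=0)                 # Eq. (3): group center proxy
    prompts = []
    for f, m in zip(feats, sal_masks):
        ys, xs = torch.nonzero(m, as_tuple=True)
        scores = f[:, ys, xs].T @ center                           # Eq. (4): correlation scores
        top = torch.topk(scores, k=min(k, scores.numel())).indices
        prompts.append(torch.stack([xs[top], ys[top]], dim=1))     # (k, 2) point prompts
    return prompts

feats = [torch.randn(1536, 64, 64) for _ in range(3)]
masks = [torch.rand(64, 64) > 0.7 for _ in range(3)]
points = group_prompts(feats, masks)   # per-image prompts, to be rescaled to image size for SAM
```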
## 3 Experiments
### Experimental Setup
**Implementation Details.** We employ the Stable Diffusion v1-5 and DINOv2 models as our feature extractors, with the DDIM timestep \(t\) in the denoising process set to 50 by default. We use SAM with the smallest ViT-B backbone. \(N\) is the total number of images in a given image group. All experiments are conducted on a single RTX 3090 GPU.
**Datasets.** We employ three benchmark datasets, including Cosal2015 [22], CoSOD3k [2] and CoCA [16], to evaluate our approach. Cosal2015 comprises 50 groups with a total of 2015 images. It presents numerous challenges such as complex environments. CoSOD3k, the largest-scale and most comprehensive benchmark, offers 160 groups with a total of 3000 images. CoCA features 80 groups with a total of 1297 images, posing a challenge due to the presence of multiple objects, including relatively small co-salient objects.
**Evaluation Metrics.** We employ three widely used criteria: (1) F-measure \((F_{\beta}^{mean})\), representing the harmonic mean of precision and recall values, calculated using a self-adaptive threshold; (2) Structure Measure \((S_{m})\), which is utilized to assess the spatial structural similarities of saliency maps; (3) Mean Absolute Error \((MAE)\), which quantifies the average L1 distance between ground truth maps and predictions.
### Comparison Methods
We compare our method with 7 fully-supervised methods: CSMG [23], ICNet [6], CADC [7], GLNet [24], TCNet [25], CoRP [1] and UFO [8]. Since unsupervised CoSOD models are barely explored, we only compare our method with 2 methods: PJO [26] and GOMAG [27].
### _Quantitative and Qualitative Evaluation._
Table. 1 reveals that our proposed zero-shot CoSOD network consistently outperforms all other state-of-the-art (SOTA) unsupervised CoSOD methods across all evaluation metrics. These results underscore the efficacy of our zero-shot CoSOD network. In comparison to supervised CoSOD methods, our approach surpasses methods published in 2019 and achieves competitive performance compared to those published from 2020 to 2023, as evidenced by certain metrics. _It's worth noting that all modules in our network employ basic design principles, such as simple averaging operations for center proxy point feature extraction. In this configuration, our zero-shot framework achieves such remarkable performance, instilling confidence in researchers to explore CoSOD tasks from a novel zero-shot perspective._
Fig. 4 presents qualitative results from our proposed method, showcasing its ability to accurately detect co-salient objects even in complex scenes.
### _Ablation Analysis_
First, we propose that both high-level and low-level information are pivotal for generating group features. To validate this assertion, we conduct experiments. In the penultimate row of Table. 1, utilizing only the high-level features extracted from DINO already achieves competitive performance within the proposed network. However, it does not enable our zero-shot framework to completely surpass the CSMG method in all metrics. Additionally, the incorporation of low-level information from SD further enhances performance.
Another noteworthy contribution of this paper is the assertion that features extracted from foundational models are valuable for generating group features. Consequently, when we incorporate these group features into existing frameworks, including TSCoSOD [9], TCNet [25] and GCoNet+ [28], we observe further performance improvements, as demonstrated in Table. 2. The values in parentheses represent results after retraining with the inclusion of new group features. This underscores that, beyond the framework itself, this paper contributes a zero-shot group feature generation approach.
## 4 Conclusion
In this paper, we introduce an innovative zero-shot CoSOD framework. Leveraging the feature extraction capabilities of established DINO and SD, we have devised the GPG and CMP components, enabling the application of existing foundational models to the zero-shot CoSOD task. Our experiments demonstrate that these foundational models can effectively generate resilient group features, and our proposed framework can reasonably address the zero-shot CoSOD task. We envision that our work will serve as a cornerstone for the zero-shot/unsupervised CoSOD task, inspiring researchers to approach CoSOD from a novel perspective.
\begin{table}
\begin{tabular}{c|c|c|c c c|c c c|c c c} \hline \multirow{2}{*}{**Performance Level**} & \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Year**} & \multicolumn{2}{c|}{**CoCA**} & \multicolumn{3}{c|}{**CoSODk**} & \multicolumn{3}{c}{**CoCoCoal2015**} \\ \cline{4-13} & & & \(S_{m}\uparrow\) & \(F_{m}^{\text{max}}\uparrow\) & \(MAE\downarrow\) & \(S_{m}\uparrow\) & \(F_{m}^{\text{max}}\uparrow\) & \(MAE\downarrow\) & \(S_{m}\uparrow\) & \(F_{m}^{\text{max}}\uparrow\) & \(MAE\downarrow\) \\ \hline \multirow{3}{*}{**Future**} & TCNet & TCSVT 2023 & Supervised & 0.685 & 0.548 & 0.101 & 0.832 & 0.797 & 0.668 & 0.870 & 0.859 & 0.054 \\ & UPO & TAM 2023 & Supervised & 0.697 & 0.555 & 0.095 & 0.819 & 0.783 & 0.073 & 0.860 & 0.848 & 0.064 \\ & CoRP & TPAMI 2023 & Supervised & 0.732 & 0.663 & 0.093 & 0.850 & 0.824 & 0.057 & 0.884 & 0.884 & 0.044 \\ \hline \multirow{3}{*}{**Competitive**} & ICNet & NeurIPS 2020 & Supervised & 0.651 & 0.503 & 0.148 & 0.780 & 0.734 & 0.097 & 0.856 & 0.846 & 0.058 \\ & CADC & ICCV 2021 & Supervised & 0.681 & 0.803 & 0.132 & 0.801 & 0.742 & 0.096 & 0.866 & 0.825 & 0.064 \\ & GLNet & TCSV 2022 & Supervised & 0.591 & 0.425 & 0.188 & - & - & - & 0.855 & 0.849 & 0.060 \\ \hline \multirow{3}{*}{**Outperform**} & CSMG & CVPR 2019 & Supervised & 0.632 & 0.494 & 0.124 & 0.711 & 0.662 & 0.157 & 0.774 & 0.775 & 0.130 \\ & PHO & TPAMI 2019 & Unsupervised & 0.573 & 0.362 & 0.175 & 0.677 & 0.631 & 0.188 & 0.721 & 0.687 & 0.192 \\ & GMOAG & TAM 2020 & Unsupervised & 0.587 & 0.387 & 0.170 & 0.687 & 0.642 & 0.180 & 0.734 & 0.698 & 0.187 \\ \hline \multirow{3}{*}{**Oma**} & Oma (SD) & 2023 & Zero Shot & 0.683 & 0.532 & 0.121 & 0.717 & 0.669 & 0.127 & 0.776 & 0.787 & 0.110 \\ & Ours & & 0.667 & 0.549 & 0.115 & 0.723 & 0.691 & 0.117 & 0.785 & 0.799 & 0.101 \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison with SOTA on CoSOD datasets. The red font means that our proposed network achieves the competitive performance compared to these methods.
\begin{table}
\begin{tabular}{c|c|c c c c c c|c c c} \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Pub. \&Year**} & \multicolumn{3}{c|}{**CoCA**} & \multicolumn{3}{c}{**CoSODk**} & \multicolumn{3}{c}{**CoCoal2015**} \\ \cline{4-13} & & & \(S_{m}\uparrow\) & \(F_{m}^{\text{max}}\uparrow\) & \(MAE\downarrow\) & \(S_{m}\uparrow\) & \(F_{m}^{\text{max}}\uparrow\) & \(MAE\downarrow\) & \(S_{m}\uparrow\) & \(F_{m}^{\text{max}}\uparrow\) & \(MAE\downarrow\) \\ \hline TCSSOOD & TP 2023 & 0.724 & 0.613 & 0.0825 & 0.099 & 0.893 & 0.834 & 0.816 & 0.810 & 0.012 & 0.062 & 0.085 & 0.095 & 0.097 & 0.090 & 0.095 & 0.093 \\ TCNet & TCSVT 2023 & 0.728 (**0.077**) & 0.548 (**0.051**) & 0.101 & 0.092 & 0.873 & 0.737 & 0.843 & 0.797 & 0.841 & 0.088 & 0.077 & 0.859 & 0.077 & 0.054 & 0.094 \\ GCoNet & TPAMI 2023 & 0.758 (**0.749**) & 0.612 (**0.264**) & 0.038 (**0.077**) & 0.843 (**0.855**) & 0.831 & 0.827 & 0.062 & **0.054** & 0.881 & **0.081** & 0.810 & 0.820 & 0.084 \\ \hline Average Improvement & 41.469k & 42.059k & 46.659k & 1.418 & +1.869k & +1.311k & +2.699k & +1.69k & +15.939k \\ \hline \end{tabular}
\end{table}
Table 2: Performance improvement of existing methods after adding group information extracted by DINO and SD.
Figure 4: Visual comparison between our method and other methods. |
2309.04787 | An Analytic Method to Determine the Optimal Time for the Induction Phase
of Anesthesia | We obtain an analytical solution for the time-optimal control problem in the
induction phase of anesthesia. Our solution is shown to align numerically with
the results obtained from the conventional shooting method. The induction phase
of anesthesia relies on a pharmacokinetic/pharmacodynamic (PK/PD) model
proposed by Bailey and Haddad in 2005 to regulate the infusion of propofol. In
order to evaluate our approach and compare it with existing results in the
literature, we examine a minimum-time problem for anesthetizing a patient. By
applying the Pontryagin minimum principle, we introduce the shooting method as
a means to solve the problem at hand. Additionally, we conducted numerical
simulations using the MATLAB computing environment. We solve the time-optimal
control problem using our newly proposed analytical method and discover that
the optimal continuous infusion rate of the anesthetic and the minimum required
time for transition from the awake state to an anesthetized state exhibit
similarity between the two methods. However, the advantage of our new analytic
method lies in its independence from unknown initial conditions for the adjoint
variables. | Mohamed A. Zaitri, Cristiana J. Silva, Delfim F. M. Torres | 2023-09-09T13:20:20Z | http://arxiv.org/abs/2309.04787v1 | # An Analytic Method to Determine the Optimal Time for the Induction Phase of Anesthesia
###### Abstract
We obtain an analytical solution for the time-optimal control problem in the induction phase of anesthesia. Our solution is shown to align numerically with the results obtained from the conventional shooting method. The induction phase of anesthesia relies on a pharmacokinetic/pharmacodynamic (PK/PD) model proposed by Bailey and Haddad in 2005 to regulate the infusion of propofol. In order to evaluate our approach and compare it with existing results in the literature, we examine a minimum-time problem for anesthetizing a patient. By applying the Pontryagin minimum principle, we introduce the shooting method as a means to solve the problem at hand. Additionally, we conducted numerical simulations using the MATLAB computing environment. We solve the time-optimal control problem using our newly proposed analytical method and discover that the optimal continuous infusion rate of the anesthetic and the minimum required time for transition from the awake state to an anesthetized state exhibit similarity between the two methods. However, the advantage of our new analytic method lies in its independence from unknown initial conditions for the adjoint variables.
Keywords: pharmacokinetic/pharmacodynamic model; optimal control theory; time-optimal control of the induction phase of anesthesia; shooting method; analytical method; numerical simulations. MSC: 49M05; 49N90; 92C45
## 1 Introduction
Based on Guedel's classification, the first stage of anesthesia is the induction phase, which begins with the initial administration of anesthesia and ends with loss of consciousness [1]. Millions of people safely receive several types of anesthesia while undergoing medical procedures: local anesthesia, regional anesthesia, general anesthesia, and sedation [2]. However, there may be some potential complications of anesthesia including anesthetic awareness, collapsed lung, malignant hyperthermia, nerve damage, and postoperative delirium. Certain factors make it riskier to receive anesthesia, including advanced age, diabetes, kidney disease, heart disease, high blood pressure, and smoking [3]. To avoid the risk, administering anesthesia should be carried out on a scientific basis, based on modern pharmacotherapy, which relies on both pharmacokinetic (PK) and pharmacodynamic (PD) information [4]. Pharmacokinetics is used to describe the absorption and distribution of anesthesia in body fluids, resulting from the administration of a certain anesthesia dose. Pharmacodynamics is the study of the effect resulting from anesthesia [5]. Multiple mathematical models were already presented to predict the dynamics of the pharmacokinetics/pharmacodynamics (PK/PD) models [6; 7; 8; 9]. Some of these models were implemented following different methods [2; 10; 11].
The parameters of PK/PD models were fitted by Schnider et al. in [12]. In [6], the authors study pharmacokinetic models for propofol, comparing Schnider et al. and Marsh
et al. models [13]. The authors of [6] conclude that Schnider's model should always be used in effect-site targeting mode, in which larger initial doses are administered but smaller than those obtained from Marsh's model. However, users of the Schnider model should be aware that in the morbidly obese, the lean body mass (LBM) equation can generate paradoxical values, resulting in excessive increases in maintenance infusion rates [12]. In [14], a new strategy is presented to develop a robust control of anesthesia for the maintenance phase, taking into account the saturation of the actuator. The authors of [15] address the problem of optimal control of the induction phase. For other related works, see [8; 16] and references therein.
Here, we consider the problem proposed in [15], to transfer a patient from a state consciousness to unconsciousness. We apply the shooting method [17] using the Pontryagin minimum principle [18], correcting some inconsistencies found in [15] related with the stop criteria of the algorithm and the numerical computation of the equilibrium point. Secondly, we provide a new different analytical method to the time-optimal control problem for the induction phase of anesthesia. While the shooting method, popularized by Zabi et al. [15], is widely employed for solving such control problems and determining the minimum time, its reliance on Newton's method makes it sensitive to initial conditions. The shooting method's convergence is heavily dependent on the careful selection of initial values, particularly for the adjoint vectors. To overcome this limitation, we propose an alternative approach, which eliminates the need for initial value selection and convergence analysis. Our method offers a solution to the time-optimal control problem for the induction phase of anesthesia, free from the drawbacks associated with the shooting method. Furthermore, we propose that our method can be extended to other PK/PD models to determine optimal timings for drug administration. To compare the methods, we perform numerical simulations to compute the minimum time to anesthetize a man of 53 years, 77 kg, and 177 cm, as considered in [15]. We find the optimal continuous infusion rate of the anesthetic and the minimum time that needs to be chosen for treatment, showing that both the shooting method of [15] and the one proposed here coincide.
This paper is organized as follows. In Section 2, we recall the pharmacokinetic and pharmacodynamic model of Bailey and Haddad [19], the Schnider model [12], the bispectral index (BIS), and the equilibrium point [14]. Then, in Section 3, a time-optimal control problem for the induction phase of anesthesia is posed and solved both by the shooting and analytical methods. Finally, in Section 4, we compute the parameters of the model using the Schnider model [12], and we illustrate the results of the time-optimal control problem through numerical simulations. We conclude that the optimal continuous infusion rate for anesthesia and the minimum time that should be chosen for this treatment can be found by both shooting and analytical methods. The advantage of the new method proposed here is that it does not depend on the concrete initial conditions, while the shooting method is very sensitive to the choice of the initial conditions of the state and adjoint variables. We end with Section 5 of conclusions, pointing also some directions for future research.
## 2 The PK/PD Model
The pharmacokinetic/pharmacodynamic (PK/PD) model consists of four compartments: intravascular blood \((x_{1}(t))\), muscle \((x_{2}(t))\), fat \((x_{3}(t))\), and effect site \((x_{4}(t))\). The effect site compartment (brain) is introduced to account for the finite equilibration time between central compartment and central nervous system concentrations [19]. This model is used to describe the circulation of drugs in a patient's body, being expressed by a four-dimensional dynamical system as follows:
\[\left\{\begin{array}{l}\dot{x}_{1}(t)=-(a_{10}+a_{12}+a_{13})\,x_{1}(t)+a_{21}\,x_{2}(t)+a_{31}\,x_{3}(t)+u(t),\\ \dot{x}_{2}(t)=a_{12}\,x_{1}(t)-a_{21}\,x_{2}(t),\\ \dot{x}_{3}(t)=a_{13}\,x_{1}(t)-a_{31}\,x_{3}(t),\\ \dot{x}_{4}(t)=\frac{a_{e0}}{v_{1}}\,x_{1}(t)-a_{e0}\,x_{4}(t).\end{array}\right. \tag{1}\]
The state variables for system (1) are subject to the following initial conditions:
\[x(0)=(x_{1}(0),x_{2}(0),x_{3}(0),x_{4}(0))=(0,0,0,0), \tag{2}\]
where \(x_{1}(t),x_{2}(t),x_{3}(t)\), and \(x_{4}(t)\) represent, respectively, the masses of the propofol in the compartments of blood, muscle, fat, and effect site at time \(t\). The control \(u(t)\) is the continuous infusion rate of the anesthetic. The parameters \(a_{10}\) and \(a_{e0}\) represent, respectively, the rates of clearance from the central compartment and the effect site. The parameters \(a_{12}\), \(a_{13}\), \(a_{21}\), \(a_{31}\), and \(a_{e0}/v_{1}\) are the transfer rates of the drug between compartments. A schematic diagram of the dynamical control system (1) is given in Figure 1.
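For illustration, the right-hand side of system (1) can be evaluated directly in code. The following Python/NumPy sketch (the function name `pkpd_rhs` and the parameter dictionary are ours, not part of the original implementation) computes the vector field of (1) for a given infusion rate:

```python
import numpy as np

def pkpd_rhs(x, u, p):
    """Right-hand side of the 4-compartment PK/PD model (1).

    x : state (x1 blood, x2 muscle, x3 fat, x4 effect site)
    u : continuous infusion rate [mg/min]
    p : dict with rates a10, a12, a13, a21, a31, ae0 and central volume v1
    """
    x1, x2, x3, x4 = x
    dx1 = -(p["a10"] + p["a12"] + p["a13"]) * x1 + p["a21"] * x2 + p["a31"] * x3 + u
    dx2 = p["a12"] * x1 - p["a21"] * x2
    dx3 = p["a13"] * x1 - p["a31"] * x3
    dx4 = p["ae0"] / p["v1"] * x1 - p["ae0"] * x4
    return np.array([dx1, dx2, dx3, dx4])
```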
### Schnider's Model
Following Schnider et al. [12], the lean body mass (LBM) is calculated using the James formula, which performs satisfactorily in normal and moderately obese patients, but not so well for severely obese cases [20]. The James formula calculates LBM as follows:
\[\text{for Male, LBM} = 1.1\times\text{weight}-128\times\left(\frac{\text{weight}}{\text {height}}\right)^{2}, \tag{3}\] \[\text{for Female, LBM} = 1.07\times\text{weight}-148\times\left(\frac{\text{weight}}{ \text{height}}\right)^{2}. \tag{4}\]
The parameters of the PK/PD model (1) are then estimated according to Table 1.
\begin{table}
\begin{tabular}{c c} \hline \hline
**Parameter** & **Estimation** \\ \hline \(a_{10}\left(\text{min}^{-1}\right)\) & \(0.443+0.0107\left(\text{weight}-77\right)-0.0159\left(\text{LBM}-59\right)+0.0062\left(\text{height}-177\right)\) \\ \hline \(a_{12}\left(\text{min}^{-1}\right)\) & \(0.302-0.0056\left(\text{age}-53\right)\) \\ \hline \(a_{13}\left(\text{min}^{-1}\right)\) & \(0.196\) \\ \hline \(a_{21}\left(\text{min}^{-1}\right)\) & \(\left(1.29-0.024\left(\text{age}-53\right)\right)/\left(18.9-0.391\left(\text{age}-53\right)\right)\) \\ \hline \(a_{31}\left(\text{min}^{-1}\right)\) & \(0.0035\) \\ \hline \(a_{e0}\left(\text{min}^{-1}\right)\) & \(0.456\) \\ \hline \(v_{1}\left(\text{L}\right)\) & \(4.27\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Parameter values for model (1) according to Schnider’s model [12].
Figure 1: Schematic diagram of the PK/PD model with the effect site compartment of Bailey and Haddad [19].
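For concreteness, the James formula (3)-(4) and the parameter expressions of Table 1 can be evaluated directly. The following Python sketch (function names are ours) computes the rates for the patient considered later in Section 4 (a 53-year-old male, 77 kg, 177 cm) and should reproduce, up to rounding, the entries of the matrix \(A\) given in (39):

```python
def lean_body_mass(weight, height, male=True):
    # James formula, Eqs. (3)-(4); weight in kg, height in cm
    if male:
        return 1.1 * weight - 128 * (weight / height) ** 2
    return 1.07 * weight - 148 * (weight / height) ** 2

def schnider_parameters(age, weight, height, male=True):
    # Parameter estimates of Table 1 (Schnider model)
    lbm = lean_body_mass(weight, height, male)
    return {
        "a10": 0.443 + 0.0107 * (weight - 77) - 0.0159 * (lbm - 59) + 0.0062 * (height - 177),
        "a12": 0.302 - 0.0056 * (age - 53),
        "a13": 0.196,
        "a21": (1.29 - 0.024 * (age - 53)) / (18.9 - 0.391 * (age - 53)),
        "a31": 0.0035,
        "ae0": 0.456,
        "v1": 4.27,
    }

p = schnider_parameters(age=53, weight=77, height=177)  # patient of Section 4
```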
### The Bispectral Index (BIS)
The BIS is the depth of anesthesia indicator, which is a signal derived from the EEG analysis and directly related to the effect site concentration of \(x_{4}(t)\). It quantifies the level of consciousness of a patient from 0 (no cerebral activity) to 100 (fully awake patient), and can be described empirically by a decreasing sigmoid function [19]:
\[BIS(x_{4}(t))=BIS_{0}\Bigg{(}1-\frac{x_{4}(t)^{\gamma}}{x_{4}(t)^{\gamma}+EC_{50}^{\gamma}}\Bigg{)}, \tag{5}\]
where \(BIS_{0}\) is the \(BIS\) value of an awake patient typically set to 100, \(EC_{50}\) corresponds to the drug concentration associated with 50% of the maximum effect, and \(\gamma\) is a parameter modeling the degree of nonlinearity. According to [21], typical values for these parameters are \(EC_{50}=3.4\,\mathrm{mg/L}\) and \(\gamma=3\).
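In code, the BIS curve (5) is a one-line function; with the typical values above it satisfies \(BIS(EC_{50})=50\). The function name in this sketch is ours:

```python
def bis(x4, bis0=100.0, ec50=3.4, gamma=3.0):
    # Decreasing sigmoid of Eq. (5); x4 is the effect-site concentration
    return bis0 * (1.0 - x4**gamma / (x4**gamma + ec50**gamma))

# bis(3.4) == 50.0, i.e. half of the maximal effect at EC50
```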
### The Equilibrium Point
Following [14], the equilibrium point is obtained by equating the right-hand side of (1) to zero,
\[\left\{\begin{array}{l}0=-(a_{10}+a_{12}+a_{13})\,x_{1}+a_{21}\,x_{2}+a_{31}\,x_{3}+u,\\ 0=a_{12}\,x_{1}-a_{21}\,x_{2},\\ 0=a_{13}\,x_{1}-a_{31}\,x_{3},\\ 0=\frac{a_{e0}}{v_{1}}\,x_{1}-a_{e0}\,x_{4},\end{array}\right. \tag{6}\]
with the condition
\[x_{4}=EC_{50}. \tag{7}\]
It results that the equilibrium point \(x_{e}=(x_{e1},x_{e2},x_{e3},x_{e4})\) is given by
\[x_{e1}=v_{1}\,EC_{50},\quad x_{e2}=\frac{a_{12}\,v_{1}\,EC_{50}}{a_{21}},\quad x _{e3}=\frac{a_{13}\,v_{1}\,EC_{50}}{a_{31}},\quad x_{e4}=EC_{50}, \tag{8}\]
and the value of the continuous infusion rate for this equilibrium is
\[u_{e}=a_{10}\,v_{1}\,EC_{50}. \tag{9}\]
The fast state is defined by
\[x_{eF}(t)=(x_{1}(t),x_{4}(t)). \tag{10}\]
The control of the fast dynamics is crucial because the BIS is a direct function of the concentration at the effect site.
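As a quick check of (8) and (9), the following sketch (reusing the hypothetical `schnider_parameters` helper from the sketch above) should recover the equilibrium values reported later in (38):

```python
def equilibrium(p, ec50=3.4):
    # Eqs. (8)-(9): equilibrium state and infusion rate corresponding to BIS = 50
    xe1 = p["v1"] * ec50
    xe2 = p["a12"] * p["v1"] * ec50 / p["a21"]
    xe3 = p["a13"] * p["v1"] * ec50 / p["a31"]
    xe4 = ec50
    ue = p["a10"] * p["v1"] * ec50
    return (xe1, xe2, xe3, xe4), ue

# Expected, cf. Eq. (38): x_e ~ (14.518, 64.24, 813.0, 3.4) and u_e ~ 6.09 mg/min
```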
## 3 Time-Optimal Control Problem
Let \(x(t)=(x_{1}(t),x_{2}(t),x_{3}(t),x_{4}(t))\in\mathbb{R}^{4}\). We can write the dynamical system (1) in a matrix form as follows:
\[\dot{x}(t)=A\,x(t)+B\,u(t), \tag{11}\]
where
\[A=\left(\begin{array}{cccc}-(a_{10}+a_{12}+a_{13})&a_{21}&a_{31}&0\\ a_{12}&-a_{21}&0&0\\ a_{13}&0&-a_{31}&0\\ \frac{a_{e0}}{v_{1}}&0&0&-a_{e0}\end{array}\right)\quad\text{and}\quad B= \left(\begin{array}{c}1\\ 0\\ 0\\ 0\end{array}\right). \tag{12}\]
Here, the continuous infusion rate \(u(t)\) is to be chosen so as to transfer the system (1) from the initial state (wake state) to the fast final state (anesthetized state) in the shortest possible time. Mathematically, we have the following time-optimal control problem [15]:
\[\begin{cases}\min\limits_{t_{f}}J=\int\limits_{0}^{t_{f}}dt,\\ \dot{x}(t)=A\,x(t)+B\,u(t),\quad x(0)=(0,0,0,0),\\ C\,x_{eF}(t_{f})=x_{eF},\\ 0\leq u(t)\leq U_{max},\quad t\in[0,t_{f}],\quad t_{f}\,\text{is free},\end{cases} \tag{13}\]
where \(t_{f}\) is the first instant of time that the desired state is reached, and \(C\) and \(x_{eF}\) are given by
\[C=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right),\,\,\,\,x_{eF}=(x_{e1},\,x_{e4}), \tag{14}\]
with
\[x_{eF}(t_{f})=(x_{1}(t_{f}),x_{4}(t_{f})). \tag{15}\]
### Pontryagin Minimum Principle
According to the Pontryagin minimum principle (PMP) [18], if \(\tilde{u}\in L^{1}\) is optimal for Problem (13) and the final time \(t_{f}\) is free, then there exists \(\psi(t)=(\psi_{1}(t),\ldots,\psi_{4}(t))\), \(t\in[0,t_{f}]\), \(\psi\in AC([0,t_{f}];\mathbb{R}^{4})\), called the adjoint vector, such that
\[\begin{cases}\dot{x}=\frac{\partial H}{\partial\psi},\\ \dot{\psi}=-\frac{\partial H}{\partial x},\end{cases} \tag{16}\]
where the Hamiltonian \(H\) is defined by
\[H(t,x,u,\psi)=1+\psi^{T}\,(A\,x+B\,u). \tag{17}\]
Moreover, the minimality condition
\[H(t,\tilde{x}(t),\tilde{u}(t),\tilde{\psi}(t))=\min\limits_{0\leq u\leq U_{ max}}H(t,\tilde{x}(t),u,\tilde{\psi}(t)) \tag{18}\]
holds almost everywhere on \(t\in[0,t_{f}]\).
Since the final time \(t_{f}\) is free, according to the transversality condition of PMP, we obtain:
\[H(t_{f},x(t_{f}),u(t_{f}),\psi(t_{f}))=0. \tag{19}\]
Minimizing the Hamiltonian with respect to \(u\) over the admissible set \([0,U_{max}]\) in the minimality condition (18) gives the necessary condition
\[\tilde{u}(t)=\begin{cases}0&\text{if }\tilde{\psi}_{1}(t)>0,\\ U_{max}&\text{if }\tilde{\psi}_{1}(t)<0,\end{cases} \tag{20}\]
where \(\tilde{\psi}_{1}(t)\) is obtained from the adjoint system (16), that is, \(\tilde{\psi}^{\prime}(t)=-A^{T}\tilde{\psi}(t)\), and the transversality condition (19). This is discussed in Sections 3.2 and 3.3.
### Shooting Method
The shooting method is a numerical technique used to solve boundary value problems, specifically in the realm of differential equations and optimal control. It transforms the problem into an initial value problem by estimating the unknown boundary conditions. Through iterative adjustments to these estimates, the boundary conditions are gradually
satisfied. In [17], the authors propose an algorithm that addresses numerical solutions for parameterized optimal control problems. This algorithm incorporates multiple shooting and recursive quadratic programming, introducing a condensing algorithm for linearly constrained quadratic subproblems and high-rank update procedures. The algorithm's implementation leads to significant improvements in convergence behavior, computing time, and storage requirements. For more on numerical approaches to solve optimal control problems, we refer the reader to [22] and references therein.
Using (16), (17), (19), and (20), we consider the following problem:
\[\begin{cases}\dot{x}(t)=A\,x(t)+B\,\times\max\,(0,-U_{max}\,sign(\psi_{1}(t))),\\ \dot{\psi}(t)=-A^{T}\,\psi(t),\\ x(0)=(0,0,0,0),\,x_{1}(t_{f})=x_{e1},\,x_{4}(t_{f})=x_{e4},\\ \psi(0)\text{ is free, }H(t_{f},x(t_{f}),\max\,(0,-U_{max}\,sign(\psi_{1}(t_{f}))),\psi(t_{f}))=0.\end{cases} \tag{21}\]
Let \(z(t)=(x(t),\psi(t))\). Then, we obtain the following two points' boundary value problem:
\[\begin{cases}\dot{z}(t)=A^{*}z(t)+B^{*},\\ R(z(0),z(t_{f}))=0,\end{cases} \tag{22}\]
where \(A^{*}\in M_{8\times 8}(\mathbb{R})\) is the matrix given by
\[A^{*}=\left(\begin{array}{cc}A&0_{4\times 4}\\ 0_{4\times 4}&-A^{T}\end{array}\right), \tag{23}\]
\(B^{*}\in\mathbb{R}^{8}\) is the vector given by
\[B^{*}=\begin{cases}(0,\,0,\,0,\,0,\,0,\,0,\,0)&\text{if }\psi_{1}(t)>0,\\ (U_{max},\,0,\,0,\,0,\,0,\,0,\,0)&\text{if }\psi_{1}(t)<0,\end{cases} \tag{24}\]
and \(R(z(0),z(t_{f}))\) is given by (2), (15), and (19). We consider the following Cauchy problem:
\[\begin{cases}\dot{z}(t)=A^{*}z(t)+B^{*},\\ z(0)=z_{0}.\end{cases} \tag{25}\]
If we define the shooting function \(S:\,\mathbb{R}^{4}\longrightarrow\mathbb{R}^{3}\) by
\[S(z_{0})=R(t_{f},z(t_{f},z_{0})), \tag{26}\]
where \(z(t,z_{0})\) is the solution of the Cauchy problem (25), then the two points' boundary value problem (21) is equivalent to
\[S(z_{0})=0. \tag{27}\]
To solve (27), we use Newton's method [23].
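To make the procedure concrete, the following Python/SciPy sketch integrates the state-adjoint system with the bang-bang law (20) and applies a Newton-type root finder to the resulting residual. To obtain a square system, the sketch also imposes the transversality conditions \(\psi_{2}(t_{f})=\psi_{3}(t_{f})=0\), which follow from \(x_{2}\) and \(x_{3}\) being free at \(t_{f}\); this completion, as well as all names, is our assumption and differs in detail from the MATLAB implementation used in Section 4.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def shooting_residual(unknowns, A, B, x_target, u_max):
    """Residual of the shooting problem; unknowns = (psi(0), t_f)."""
    psi0, tf = unknowns[:4], unknowns[4]

    def rhs(t, z):
        x, psi = z[:4], z[4:]
        u = u_max if psi[0] < 0 else 0.0              # bang-bang law (20)
        return np.concatenate([A @ x + B * u, -A.T @ psi])

    z0 = np.concatenate([np.zeros(4), psi0])          # x(0) = 0, psi(0) = psi0
    sol = solve_ivp(rhs, (0.0, tf), z0, rtol=1e-9, atol=1e-12)
    x, psi = sol.y[:4, -1], sol.y[4:, -1]
    u = u_max if psi[0] < 0 else 0.0
    H = 1.0 + psi @ (A @ x + B * u)                   # Hamiltonian (17)
    return [x[0] - x_target[0],                       # x1(tf) = x_e1
            x[3] - x_target[1],                       # x4(tf) = x_e4
            psi[1], psi[2],                           # transversality on free states
            H]                                        # H(tf) = 0, Eq. (19)

# usage sketch (guess is a hypothetical starting point for Newton's iterations):
# sol = fsolve(shooting_residual, guess, args=(A, B, (14.518, 3.4), 106.0907))
```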
### Analytical Method
We now propose a different method to choose the optimal control. If the pair \((A,B)\) satisfies the Kalman condition and all eigenvalues of the \(n\times n\) matrix \(A\) are real, then any extremal control has at most \(n-1\) commutations on \(\mathbb{R}^{+}\) (at most \(n-1\) switching times). We consider the following eight possible strategies:
Strategy 1 (zero switching times):
\[u(t)=U_{max},\,\forall t\in[0,t_{f}]. \tag{28}\]
Strategy 2 (zero switching times):
\[u(t)=0,\,\forall t\in[0,t_{f}]. \tag{29}\]
Strategy 3 (one switching time):
\[u(t)=\begin{cases}U_{max}&\text{if }0\leq t<t_{c},\\ 0&\text{if }t_{c}<t\leq t_{f},\end{cases} \tag{30}\]
where \(t_{c}\) is a switching time.
Strategy 4 (one switching time):
\[u(t)=\begin{cases}0&\text{if }0\leq t<t_{c},\\ U_{max}&\text{if }t_{c}<t\leq t_{f}.\end{cases} \tag{31}\]
Strategy 5 (two switching times):
\[u(t)=\begin{cases}U_{max}&\text{if }0<t<t_{c1},\\ 0&\text{if }t_{c1}<t<t_{c2},\\ U_{max}&\text{if }t_{c2}<t\leq t_{f},\end{cases} \tag{32}\]
where \(t_{c1}\) and \(t_{c2}\) represent two switching times.
Strategy 6 (two switching times):
\[u(t)=\begin{cases}0&\text{if }0<t<t_{c1},\\ U_{max}&\text{if }t_{c1}<t<t_{c2},\\ 0&\text{if }t_{c2}<t\leq t_{f}.\end{cases} \tag{33}\]
Strategy 7 (three switching times):
\[u(t)=\begin{cases}U_{max}&\text{if }0<t<t_{c1},\\ 0&\text{if }t_{c1}<t<t_{c2},\\ U_{max}&\text{if }t_{c2}<t\leq t_{c3},\\ 0&\text{if }t_{c3}<t<t_{f},\end{cases} \tag{34}\]
where \(t_{c1}\), \(t_{c2}\), and \(t_{c3}\) represent three switching times.
Strategy 8 (three switching times):
\[u(t)=\begin{cases}0&\text{if }0<t<t_{c1},\\ U_{max}&\text{if }t_{c1}<t<t_{c2},\\ 0&\text{if }t_{c2}<t\leq t_{c3},\\ U_{max}&\text{if }t_{c3}<t<t_{f}.\end{cases} \tag{35}\]
Let \(x(t)\) be the trajectory associated with the control \(u(t)\), given by the relation
\[x(t)=\exp(A\,t)\,x(0)+\int\limits_{0}^{t}\exp(A(t-s))\,B\,u(s)\,ds, \tag{36}\]
where \(\exp(A)\) is the exponential matrix of \(A\).
To calculate the switching times of each strategy (\(t_{c}\); or \(t_{c1}\), \(t_{c2}\); or \(t_{c1}\), \(t_{c2}\), \(t_{c3}\)) and the final time \(t_{f}\), we have to solve the following nonlinear equation:
\[x_{eF}(t_{f})=(x_{e1},\,x_{e4}). \tag{37}\]
We also solve (37) using the Newton method [23].
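As an illustration for Strategy 3 (one switching time), the trajectory associated with the control (30) follows from (36), and the two unknowns \((t_{c},t_{f})\) are then obtained from the two components of equation (37). The following Python/SciPy sketch (names are ours) assumes that \(A\) is invertible, which holds for the numerical values used in Section 4:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import fsolve

def x_strategy3(tc, tf, A, B, u_max):
    """State at time tf under control (30): u = u_max on [0, tc], u = 0 on (tc, tf]."""
    n = A.shape[0]
    # closed form of the integral in (36): x(tc) = A^{-1} (e^{A tc} - I) B u_max
    x_tc = np.linalg.solve(A, (expm(A * tc) - np.eye(n)) @ (B * u_max))
    # free evolution on (tc, tf]
    return expm(A * (tf - tc)) @ x_tc

def switching_equations(z, A, B, u_max, x_target):
    tc, tf = z
    x_tf = x_strategy3(tc, tf, A, B, u_max)
    return [x_tf[0] - x_target[0], x_tf[3] - x_target[1]]   # Eq. (37)

# usage sketch:
# tc, tf = fsolve(switching_equations, (0.5, 2.0), args=(A, B, 106.0907, (14.518, 3.4)))
```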
## 4 Numerical Example
In this section, we use the shooting and analytical methods to calculate the minimum time \(t_{f}\) to anesthetize a man of 53 years, 77 kg, and 177 cm.
The equilibrium point and the flow rate corresponding to a BIS of 50 are:
\[x_{e}=(14.518\,\mathrm{mg},\,64.2371\,\mathrm{mg},\,813.008\,\mathrm{mg},\,3.4 \,\mathrm{mg}),\,\,\,u_{e}=6.0907\,\mathrm{mg}/\mathrm{min}. \tag{38}\]
Following the Schnider model, the matrices \(A\) and \(B\) of the dynamical system (11) are given by:
\[A=\left(\begin{array}{cccc}-0.9175&0.0683&0.0035&0\\ 0.3020&-0.0683&0&0\\ 0.1960&0&-0.0035&0\\ 0.1068&0&0&-0.4560\\ \end{array}\right)\quad\text{and}\quad B=\left(\begin{array}{c}1\\ 0\\ 0\\ 0\\ \end{array}\right). \tag{39}\]
We are interested to solve the following minimum-time control problem:
\[\begin{cases}\min\limits_{t_{f}}J=\int\limits_{0}^{t_{f}}dt,\\ \dot{x}(t)=A\,x(t)+B\,u(t),\quad x(0)=(0,\,0,\,0,\,0),\\ x_{e1}(t_{f})=14.518\,\mathrm{mg},\quad x_{e4}(t_{f})=3.4\,\mathrm{mg},\\ 0\leq u(t)\leq 106.0907,\quad t\in[0,t_{f}],\quad t_{f}\,\text{is free}.\end{cases} \tag{40}\]
### Numerical Resolution by the Shooting Method
Let \(z(t)=(x(t),\psi(t))\). We consider the following Cauchy problem:
\[\begin{cases}\dot{z}(t)=A^{*}z(t)+B^{*},\\ z(0)=z_{0}=(0,\,0,\,0,\,0,\,\psi_{01},\,\psi_{02},\,\psi_{03},\,\psi_{04}), \end{cases} \tag{41}\]
where
\[A^{*}=10^{-4}\left(\begin{array}{cccccccc}-9175&683&35&0&0&0&0&0\\ 3020&-683&0&0&0&0&0&0\\ 196&0&-35&0&0&0&0&0\\ 1068&0&0&-456&0&0&0&0\\ 0&0&0&0&9175&-3020&-196&-1068\\ 0&0&0&0&-683&683&0&0\\ 0&0&0&0&-35&0&35&0\\ 0&0&0&0&0&0&0&456\end{array}\right), \tag{42}\]
\[B^{*}=\left(\max\left(0,-106.0907\,sign(\psi_{1}(t))\right),\,0,\,0,\,0,\,0,\,0,\,0,\,0\right)^{T}. \tag{43}\]
The shooting function \(S\) is given by
\[S(z_{0})=(S_{1}(z_{0}),\,S_{2}(z_{0}),\,S_{3}(z_{0})), \tag{44}\]
where
\[S_{1}(z_{0}) = x_{e1}(t_{f})-14.518,\] \[S_{2}(z_{0}) = x_{e4}(t_{f})-3.4,\] \[S_{3}(z_{0}) = 1+\psi^{T}(t_{f})\Big{(}Ax(t_{f})+B\max\left(0,-106.0907\,sign(\psi_{1}(t_{f}))\right)\Big{)}.\]
All computations were performed with the MATLAB numeric computing environment, version R2020b. We used the medium-order Runge-Kutta function ode45 to solve the nonstiff differential system (22), the variable-order Adams-Bashforth-Moulton function ode113 to solve the nonstiff differential system (25), and the function fsolve to solve the equation \(S(z_{0})=0\). Thus, we obtain that the minimum time is equal to
\[t_{f}=1.8397\,\text{min}, \tag{45}\]
with
\[\psi^{T}(0)=(-0.0076,\,0.0031,\,-0.0393,\,-0.0374). \tag{46}\]
### Numerical Resolution by the Analytical Method
The pair \((A,B)\) satisfies the Kalman condition, and the matrix \(A\) has four real eigenvalues. Then, the extremal control \(u(t)\) has at most three commutations on \(\mathbb{R}^{+}\). Therefore, let us test the eight strategies provided in Section 3.3.
Note that the anesthesiologist begins with a bolus injection to transfer the patient state from the consciousness state \(x(0)\) to the unconsciousness state
\[x_{eF}=(14.518,\,3.4),\]
that is,
\[u(0)=U_{max}=106.0907\,\mathrm{mg/min}. \tag{47}\]
Thus, Strategies 2, 4, 6, and 8 are not feasible here. Therefore, in the sequel, we investigate Strategies 1, 3, 5, and 7 only.
Strategy 1: Let \(u(t)=106.0907\,\mathrm{mg/min}\) for all \(t\in[0,t_{f}]\). The trajectory \(x(t)\), associated with this control \(u(t)\), is given by the following relation:
\[x(t)=\int\limits_{0}^{t}\exp(A(t-s))BU_{max}ds,\,\,\forall t\in[0,t_{f}], \tag{48}\]
where
\[\exp(A\,(t-s)) = V\,D(t-s)\,V^{-1} \tag{49}\]
with
\[V=\left(\begin{array}{cccc}0&0.9085&0.0720&-0.0058\\ 0&-0.3141&0.9377&-0.0266\\ 0&-0.1898&-0.3395&-0.9996\\ 1&-0.1997&0.0187&-0.0014\end{array}\right) \tag{50}\]
and
\[D(\tau)=\left(\begin{array}{cccc}e^{-0.4560\,\tau}&0&0&0\\ 0&e^{-0.9419\,\tau}&0&0\\ 0&0&e^{-0.0451\,\tau}&0\\ 0&0&0&e^{-0.0024\,\tau}\end{array}\right). \tag{51}\]
System (37) takes the form
\[\left\{\begin{aligned} & x_{1}(t_{f})=14.518,\\ & x_{4}(t_{f})=3.4,\end{aligned}\right. \tag{52}\]
and has no solutions. Thus, Strategy 1 is not feasible.
Strategy 3: Let \(u(t)\), \(t\in[0,t_{f}]\), be the control defined by
\[u(t)=\left\{\begin{aligned} & 106.0907\,\text{mg}/\text{min}&\text{if }0\leq t<t_{c},\\ & 0&\text{if }t_{c}<t\leq t_{f}.\end{aligned}\right. \tag{53}\]
The trajectory \(x(t)\) associated with this control \(u(t)\) is given by
\[x(t)=\left\{\begin{aligned} &\int\limits_{0}^{t}\exp(A(t-s))BU_{max}ds& \text{if }0\leq t\leq t_{c},\\ &\exp(A\left(t-t_{c}\right))\,x(t_{c})&\text{if }t_{c}<t\leq t_{f}, \end{aligned}\right. \tag{54}\]
where
\[\exp(A\left(t-t_{c}\right)) = V\,D(t-t_{c})\,V^{-1}. \tag{55}\]
To calculate the switching time \(t_{c}\) and the final time \(t_{f}\), we have to solve the nonlinear system (52) with the new condition
\[t_{c}<t_{f}. \tag{56}\]
Similarly to Section 4.1, all numerical computations were performed with MATLAB R2020b using the command solve to solve Equation (52). The obtained minimum time is equal to
\[t_{f}=1.8397\,\text{min}, \tag{57}\]
with the switching time
\[t_{c}=0.5467\,\text{min}. \tag{58}\]
Strategy 5: Let \(u(t)\), \(t\in[0,t_{f}]\), be the control defined by the relation
\[u(t)=\left\{\begin{aligned} & 106.0907\,\text{mg}/\text{min}&\text{if }0\leq t<t_{c1},\\ & 0&\text{if }t_{c1}<t<t_{c2},\\ & 106.0907\,\text{mg}/\text{min}&\text{if }t_{c2}<t\leq t_{f},\end{aligned}\right. \tag{59}\]
where \(t_{c1}\) and \(t_{c2}\) are the two switching times. The trajectory \(x(t)\) associated with control (59) is given by
\[x(t)=\left\{\begin{aligned} &\int\limits_{0}^{t}\exp(A(t-s))BU_{max}ds& \text{if }0\leq t\leq t_{c1},\\ &\exp(A\left(t-t_{c1}\right))\,x(t_{c1})&\text{if }t_{c1}<t\leq t_{c2},\\ &\exp(A\left(t-t_{c2}\right))\,x(t_{c2})+\int\limits_{t_{c2}}^{t} \exp(A(t-s))BU_{max}ds&\text{if }t_{c2}<t\leq t_{f}.\end{aligned}\right. \tag{60}\]
To compute the two switching times \(t_{c1}\) and \(t_{c2}\) and the final time \(t_{f}\), we have to solve the nonlinear system (52) with
\[0\leq t_{c1}\leq t_{c2}\leq t_{f}. \tag{61}\]
It turns out that System (52) subject to Condition (61) has no solution. Thus, Strategy 5 is also not feasible.
Strategy 7: Let \(u(t)\), \(t\in[0,t_{f}]\), be the control defined by the relation
\[u(t)=\begin{cases}106.0907\,\mathrm{mg/min}&\text{if }0\leq t<t_{c1},\\ 0&\text{if }t_{c1}<t<t_{c2},\\ 106.0907\,\mathrm{mg/min}&\text{if }t_{c2}<t\leq t_{c3},\\ 0\,\mathrm{mg/min}&\text{if }t_{c3}<t\leq t_{f},\end{cases} \tag{62}\]
where \(t_{c1}\), \(t_{c2}\), and \(t_{c3}\) are the three switching times. The trajectory \(x(t)\) associated with Control (62) is given by
\[x(t)=\begin{cases}\int\limits_{0}^{t}\exp(A(t-s))BU_{max}ds&\text{if }0\leq t \leq t_{c1},\\ \exp(A\left(t-t_{c1}\right))\,x(t_{c1})&\text{if }t_{c1}<t\leq t_{c2},\\ \exp(A\left(t-t_{c2}\right))\,x(t_{c2})+\int\limits_{t_{c2}}^{t}\exp(A(t-s))BU_ {max}ds&\text{if }t_{c2}<t\leq t_{c3},\\ \exp(A\left(t-t_{c3}\right))\,x(t_{c3})&\text{if }t_{c3}<t\leq t_{f}.\end{cases} \tag{63}\]
To compute the three switching times \(t_{c1}\), \(t_{c2}\), and \(t_{c3}\) and the final time \(t_{f}\), we have to solve the nonlinear system (52) with
\[0\leq t_{c1}\leq t_{c2}\leq t_{c3}\leq t_{f}. \tag{64}\]
It turns out that System (52) subject to Condition (64) has no solution. Thus, Strategy 7 is also not feasible.
In Figures 2 and 3, we present the solution of the control system in (40) under the optimal control \(u(t)\) illustrated in Figure 4, where the black curve corresponds to the one obtained by the shooting method, as explained in Section 3.2, while the blue curve corresponds to our analytical method, in the sense of Section 3.3. In addition, for both figures, we show the controlled BIS index, the trajectory of the fast states corresponding to the optimal continuous infusion rate of the anesthetic \(u(t)\), and the minimum time \(t_{f}\) required to transfer System (40) from the initial (wake) state
\[x_{0}=(0,\,0,\,0,\,0)\]
to the fast final (anesthetized) state
\[x_{eF}=(14.518,\,3.4)\]
in the shortest possible time. The minimum time \(t_{f}\) is equal to \(t_{f}=1.8397\,\mathrm{min}\) by the shooting method (black curve in Figure 2), and it is equal to \(t_{f}=1.8397\,\mathrm{min}\) by the analytical method (blue curve in Figure 3).
By using the shooting method, the black curve in Figure 4 shows that the optimal continuous infusion rate of the induction phase of anesthesia \(u(t)\) is equal to \(106.0907\,\mathrm{mg/min}\) until the switching time
\[t_{c}=0.5467\,\mathrm{min}.\]
Then, it is equal to \(0\,\mathrm{mg/min}\) (stop-infusion) until the final time
\[t_{f}=1.8397\,\mathrm{min}.\]
Figure 4: The optimal continuous infusion rate \(u(t)\) of the induction phase of anesthesia, as obtained by the shooting and analytical methods.
Figure 3: The state trajectory, controlled BIS index, and trajectory of the fast states corresponding to the optimal control \(u(t)\) of Figure 4, using the analytical method.
Figure 2: The state trajectory, controlled BIS index, and trajectory of the fast states corresponding to the optimal control \(u(t)\) of Figure 4, using the shooting method.
By using the analytical method, the blue curve in Figure 4 shows that the optimal continuous infusion rate of the induction phase of anesthesia \(u(t)\) is equal to \(106.0907\,\mathrm{mg/min}\) until the switching time
\[t_{c}=0.5467\,\mathrm{min}.\]
Then, it is equal to \(0\,\mathrm{mg/min}\) (stop-infusion) until the final time
\[t_{f}=1.8397\,\mathrm{min}.\]
We conclude that both methods work well and give similar results. However, in general, the shooting method does not always converge, since its convergence depends on the choice of the initial values of the adjoint vector, such as (46). Obtaining such initial values is not an easy task, since no general theory is available to find them. For this reason, the proposed analytical method is more practical and more suitable for real applications.
## 5 Conclusions
The approach proposed by the theory of optimal control is very effective. The shooting method was used by Zabi et al. [15] to solve the time-optimal control problem and calculate the minimum time. However, this approach is based on Newton's method, whose convergence depends on the initial conditions: it is necessary to select an appropriate initial value so that the function is differentiable and the derivative does not vanish. This implies that the convergence of the shooting method is tied to the choice of the initial values. Therefore, the difficulty of the shooting method is to find the initial conditions of the adjoint vectors. Here, the aim was to propose a different approach, which we call "the analytical method", that allows one to solve the time-optimal control problem for the induction phase of anesthesia without such drawbacks. Our method is guided by the selection of the optimal strategy, without the need to choose initial values and study convergence. We claim that our method can also be applied to other PK/PD models, in order to find the optimal timing for drug administration.
In the context of PK/PD modeling, the challenges associated with uncertainties in plant model parameters and controller gains for achieving robust stability and controller non-fragility are significant [24]. These challenges arise from factors like inter-individual variability, measurement errors, and the dynamic nature of patient characteristics and drug response. Further investigation is needed to understand and develop effective strategies to mitigate the impact of these uncertainties in anesthesia-related PK/PD models. This research can lead to the development of robust and non-fragile control techniques that enhance the stability and performance of anesthesia delivery systems. By addressing these challenges, we can improve the precision and safety of drug administration during anesthesia procedures, ultimately benefiting patient outcomes and healthcare practices. In this direction, the recent results of [25] may be useful. Moreover, we plan to investigate PK/PD fractional-order models, which is a subject under strong current research [26]. This is under investigation and will be addressed elsewhere.
**Author Contributions:** Conceptualization, M.A.Z., C.J.S., and D.F.M.T.; methodology, M.A.Z., C.J.S., and D.F.M.T.; software, M.A.Z.; validation, C.J.S. and D.F.M.T.; formal analysis, M.A.Z., C.J.S., and D.F.M.T.; investigation, M.A.Z., C.J.S., and D.F.M.T.; writing--original draft preparation, M.A.Z., C.J.S., and D.F.M.T.; writing--review and editing, M.A.Z., C.J.S. and D.F.M.T.; visualization, M.A.Z.; supervision, C.J.S. and D.F.M.T.; funding acquisition, M.A.Z., C.J.S., and D.F.M.T. All authors have read and agreed to the published version of the manuscript.
**Funding:** This research was funded by the Portuguese Foundation for Science and Technology (FCT--Fundacao para a Ciencia e a Tecnologia) through the R&D Unit CIDMA, Grant Numbers UIDB/04106/2020 and UIDP/04106/2020, and within the project "Mathematical Modelling of Multiscale Control Systems: Applications to Human Diseases" (CoSysM3), Reference 2022.03091.PTDC, financially supported by national funds (OE) through FCT/MCTES.
**Institutional Review Board Statement:** Not applicable.
**Informed Consent Statement:** Not applicable.
**Data Availability Statement:** No new data were created or analyzed in this study. Data sharing is not applicable to this article. The numerical simulations of Section 4 were implemented in MATLAB R2022a. The computer code is available from the authors upon request.
**Acknowledgments:** The authors are grateful to four anonymous referees for their constructive remarks and questions that helped to improve the paper.
**Conflicts of Interest:** The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
|
2309.14594 | Learning Vision-Based Bipedal Locomotion for Challenging Terrain | Reinforcement learning (RL) for bipedal locomotion has recently demonstrated
robust gaits over moderate terrains using only proprioceptive sensing. However,
such blind controllers will fail in environments where robots must anticipate
and adapt to local terrain, which requires visual perception. In this paper, we
propose a fully-learned system that allows bipedal robots to react to local
terrain while maintaining commanded travel speed and direction. Our approach
first trains a controller in simulation using a heightmap expressed in the
robot's local frame. Next, data is collected in simulation to train a heightmap
predictor, whose input is the history of depth images and robot states. We
demonstrate that with appropriate domain randomization, this approach allows
for successful sim-to-real transfer with no explicit pose estimation and no
fine-tuning using real-world data. To the best of our knowledge, this is the
first example of sim-to-real learning for vision-based bipedal locomotion over
challenging terrains. | Helei Duan, Bikram Pandit, Mohitvishnu S. Gadde, Bart van Marum, Jeremy Dao, Chanho Kim, Alan Fern | 2023-09-26T00:59:59Z | http://arxiv.org/abs/2309.14594v2 | # Learning Vision-Based Bipedal Locomotion for Challenging Terrain
###### Abstract
Reinforcement learning (RL) for bipedal locomotion has recently demonstrated robust gaits over moderate terrains using only proprioceptive sensing. However, such blind controllers will fail in environments where robots must anticipate and adapt to local terrain, which requires visual perception. In this paper, we propose a fully-learned system that allows bipedal robots to react to local terrain while maintaining commanded travel speed and direction. Our approach first trains a controller in simulation using a heightmap expressed in the robot's local frame. Next, data is collected in simulation to train a heightmap predictor, whose input is the history of depth images and robot states. We demonstrate that with appropriate domain randomization, this approach allows for successful sim-to-real transfer with no explicit pose estimation and no fine-tuning using real-world data. To the best of our knowledge, this is the first example of sim-to-real learning for vision-based bipedal locomotion over challenging terrains.
## I Introduction
A robot's utility for useful work often hinges on its capacity to maneuver effectively across a spectrum of natural and structured terrains. For this purpose, bipedal robots have the potential to match human locomotion capabilities, but currently are far inferior. Approaching human performance requires a robot to perceive its surroundings, assess its states relative to the upcoming terrain, and dynamically adapt its gait. Robustly achieving such an integration of vision and locomotion remains an open problem for bipedal robots.
Modern control approaches for vision-based legged locomotion [1, 2, 3, 4, 5, 6, 7, 8] often decompose the problem into levels of control hierarchy, usually requiring robust whole-body control, predictive footstep planning, accurate odometry estimation, and terrain mapping. Each level requires a set of modeling assumptions in order to process high-dimensional and raw proprioceptive and visual exteroceptive information. On the other hand, recent learning-based approaches make fewer modeling assumptions, often learning a direct mapping from high-dimensional sensory inputs to actuator commands. These approaches have shown strong empirical demonstrations of blind bipedal locomotion [9] and vision-based quadrupedal locomotion [10, 11, 12, 13, 14, 15, 16] in real-world environments. However, they are typically trained in simulation and their success relies on designing techniques for reliable sim-to-real transfer, which can be particularly challenging when vision is one of the input modalities.
In this paper, we design and demonstrate a sim-to-real learning approach for vision-based bipedal locomotion over challenging terrain. To the best of our knowledge, this is the first such successful demonstration. A distinctive aspect of our approach is that it avoids the estimation of global odometry, which can be particularly challenging for legged locomotion due to estimation drifts from frequent contacts and aggressive motions. Instead, our approach learns to directly combine proprioceptive and visual information in the robot's local frame to make control decisions. In particular, our architecture is composed of two primary learned components: 1) a control policy whose input is proprioceptive information and a heightmap of a local region in front of the robot (Section IV), and 2) a heightmap predictor, which uses proprioceptive and egocentric depth images to predict a heightmap for the control policy (Section V). The key contribution of our work is the sim-to-real pipeline and the system integration for these components, which allows the overall locomotion controller to transfer successfully to the real world. In particular, we demonstrate the learned controller on a camera-equipped bipedal Cassie robot, which can traverse challenging terrains constructed in a lab environment shown in Figure 1.
Fig. 1: Our fully learned controller integrates vision and locomotion for reactive and agile gaits over terrains. The proposed approach enables bipedal robot Cassie traversing over challenging terrains, including random high blocks, stairs, 0.5m step up (\(\sim\)60% leg length), with speed up to 1m/s.
## II Related Work
**Sim-to-Real Reinforcement Learning (RL) for Bipedal Robots.** Recently, RL-based controllers for bipedal locomotion have mostly focused on blind locomotion, where proprioception is the primary control input [17, 18, 19, 20]. These works have produced controllers that can handle a limited amount of non-flat terrains. The learned policy [17] tends to have a high cost of transport due to aggressive gaits and high swing trajectories that are required to maintain balance without knowledge of the local terrain. When traversing over challenging terrains, gait adaptation is highly required, and visual information becomes critical at certain times during the gait [21, 22]. The research question then is how to incorporate vision into RL policies so that bipedal robots can react to terrains and adapt their own gaits at the same time.
**Learning Vision-based Legged Locomotion.** Previous work for quadrupedal robots has used RL methods to demonstrate successful sim-to-real transfer of vision-based locomotion controllers. One type of approach uses an elevation map or height scanners in the global reference frame as part of the RL policy input. For example, scattered elevation scans around each foot are used in quadrupeds [11] and bipeds [23], while other methods [13, 14, 15] use a uniformly structured elevation map around the robot, both of which require multiple sensors and careful calibration. In contrast, in this work we are interested in a solution that does not require careful calibration for global odometry. A second type of approach is to directly use vision inputs from a camera, such as depth images [10, 13, 24, 25] or RGB images [26], as the inputs to a RL policy. This end-to-end training is often carried out via teacher-student training [13, 25], which can exploit the teacher's access to privileged information in simulation. While these approaches have been successful on quadrupeds on hardware, it is unclear how well they can work for bipeds in the real world, where the contact locations become critical for stability and unintended contact forces with the ground can much more easily tip over the robot.
**Local Terrain Mapping for Legged Robots.** Model-based terrain mapping techniques have shown successful deployment onto hardware via odometry and fusion of multiple visual sensors [27, 28, 29]. These techniques strongly rely on pose estimation of the floating base in the global frame, where the map can drift due to inaccuracies from pose estimation. Previous visual-based quadrupedal locomotion work [11] reported that large amounts of domain randomization are required to overcome the noises and drifts from such mapping techniques. On the other hand, recent learning-based techniques [30, 31] have shown promising results when reconstructing the terrains from multiple cameras, but the use of robot global pose is still required. In this paper, our focus is on responding to terrain changes in front of the robot, for which we use a single depth camera to provide an egocentric view of the terrain. Along with robot states, the reconstructed heightmap is entirely in the robot's local frame. Our method removes the need to use global estimation when the robot has to react to local terrains rapidly.
Figure 2 illustrates our overall system, which has two main components: 1) a locomotion policy, which outputs PD setpoints for the robot actuators based on proprioception, a local terrain heightmap, and user commands, and 2) a heightmap predictor, which outputs a predicted heightmap based on proprioceptive information and images from a depth camera. These components are learned in simulation and then transferred to the real robot.
The training pipeline first uses RL to train the locomotion policy in simulation. This training process randomizes the terrain, user commands, and physical parameters to help with sim-to-real transfer. Next, using the learned policy, the pipeline collects data from a simulated depth camera attached to the robot, which is paired with ground-truth heightmap information to be used for supervised learning of the heightmap predictor. Training this predictor also involves domain randomization and added noise to facilitate sim-to-real transfer. Sections IV and V describe the architectural and training details of each component.
## IV Learning a Terrain-Aware Locomotion Policy
The main control objective is to follow speed and heading commands while maintaining balance over possibly challenging terrains. Below, we describe the observation space, action space, architecture of the policy, and training methods.
### _Control Policy Design_
**Observation Space.** The policy input includes: 1) _proprioceptive information_ containing the orientation (in quaternion) and angular velocity of the floating base, and position and velocity for all measurable actuated and unactuated joints, 2) _terrain heightmap_ from a 1.5m by 1m area in front of the robot at a 5cm resolution (see Figure 4), which encodes the ground height at each point relative to the robot's floating base. The relative encoding means that the heights vary as the robot moves up and down during its gait, but enables us to avoid using global mapping and odometry estimation techniques, 3) _user commands_, which include X and Y linear velocities along with direction and rotational velocity around the robot's yaw axis, and 4) _periodic clock_, as used in prior work on locomotion control [9], which consists of two functions for each leg, \(\sin\left(2\pi(\phi_{t}+\gamma_{t}^{i})\right)\) and
Fig. 2: Overview of the locomotion policy with vision module.
\(\cos\left(2\pi(\phi_{t}+\gamma_{t}^{i})\right)\). Here \(i\in\{\textit{left},\textit{right}\}\) indicates the leg, \(\phi\) is a monotonically increasing phase variable that is incremented as \(\phi_{t+1}=\phi_{t}+\Delta\phi_{t}\), so that \(\Delta\phi_{t}\) varies the gait frequency, and \([\gamma_{t}^{left},\gamma_{t}^{right}]\) are period shifts that alter the clock values for each leg, thus changing the contact sequence.
**Action Space.** The RL policy operates at 50Hz and outputs PD setpoints for all motors, which are provided to a PD controller operating at 2kHz. To enable gaits that can more flexibly adapt to the terrain, the RL policy also outputs three extra values, representing the clock increment \(\Delta\phi_{t}\) and residuals of shifts for both legs [\(\Delta\gamma_{t}^{left},\Delta\gamma_{t}^{right}\)].
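As an illustration of this clock representation, the sketch below (ours, not the authors' code) computes the per-leg clock features and applies the policy's clock actions; wrapping the phase modulo 1 is an assumption of the sketch and leaves the sin/cos features unchanged.

```python
import numpy as np

def clock_features(phi, shifts):
    """Periodic clock inputs for both legs: sin/cos of 2*pi*(phi + gamma_i)."""
    feats = []
    for gamma in shifts:                      # shifts = [gamma_left, gamma_right]
        feats += [np.sin(2 * np.pi * (phi + gamma)), np.cos(2 * np.pi * (phi + gamma))]
    return np.array(feats)

def step_clock(phi, shifts, d_phi, d_shifts):
    """Apply the policy's clock actions: phase increment and shift residuals."""
    phi = (phi + d_phi) % 1.0
    shifts = [(g + dg) % 1.0 for g, dg in zip(shifts, d_shifts)]
    return phi, shifts
```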
**Policy Architecture.** We use a neural network to represent the policy for mapping observation sequences to actions. The policy architecture (Figure 3) contains two main components, a pretrained _blind policy_ and a _vision-based modulator_. The architecture motivation is for the blind policy to provide a baseline locomotion control signal, which is effective for moderate terrains. For more complex terrain, the vision-based modulator is then able to adjust the baseline control based on details of the local terrain.
The _blind policy_ is based on prior work [9] and uses an LSTM network to produce actions. This policy uses all of the available inputs, except for the heightmap information, and is trained on relatively flat ground with various gait frequencies and shifts. The resulting policy is robust to moderate disturbances. The input to the _vision-based modulator_ includes all of the available observations, including the heightmap, in addition to the action produced by the blind policy. This modulator outputs a "modulating action" as well as clock actions to modify the clock parameters.
### _Policy Training_
The policy is trained via the PPO actor-critic RL algorithm [32] in a MuJoCo simulation environment [33] where the robot aims to follow randomized commands over randomized terrains. The training is conducted using 80 cores on a dual Intel Xeon Platinum 8280 server on the Intel vLab Cluster. We used two modifications to standard PPO that helped speed up and stabilize learning. First, we modified the PPO loss function to include a mirror loss over robot proprioceptive inputs as well as visual inputs. This loss encourages the policy to choose symmetric actions when facing the same terrain that is symmetrically mirrored about the sagittal plane. Second, since our reward function (described below) uses privileged information, we found it useful to provide the critic with privileged inputs in addition to the observations input to the policy. These privileged inputs include the robot height and feet positions in the global frame, a square 0.5m heightmap around each foot, and two rangefinder sensors from the tip of each foot. This privileged information provides the critic with a more accurate 3D picture around each foot, which helps it more accurately predict future rewards, such as unfavorable collision events with the terrain, as shown to be effective in [34].
**Training Episode Generation.** Each training episode involves a randomly generated terrain map of size 20m x 20m. Each map is one of 5 terrain types, illustrated in Figure 4, that are reflective of different human-centric environments. These include: 1) flat - the easiest terrain with no features, 2) hills - a smoothly varying heightmap, 3) blocks - randomly located blocks of random lengths, widths, and heights, 4) ridges - a sequence of random width and height ridges that span the length of the map, and 5) stairs - upward and downward stairs of varying width and height. The randomization parameters of each terrain type are listed in Table I, and terrains are generated in a way to avoid intersecting features. Each episode selects a single terrain type according to the following probability distribution \([0.03,0.07,0.35,0.2,0.35]\), which puts the majority of the probability mass on the three most difficult terrains. This allows the policy to gain some experience on easier terrains, which is useful early in learning, but focuses most of the learning effort on more difficult terrain that requires careful integration of vision and locomotion. We found that the key to training a robust and non-aggressive control policy lies in the terrain generation distribution rather than in iterating on and adding heuristic-based reward terms. For example, stairs naturally regulate the step length and ridges regulate the step height via the heightmap representation.
Given a terrain map, each training episode starts with the robot being randomly spawned in a standing position near the center of the map and facing a random direction. Next, a random command is given to the policy from a list including: step-in-place, step-in-place-turn, walk, walk-turn, with a sampling probability of [0.05, 0.05, 0.6, 0.3]. The commanded X, Y, and turn velocities are uniformly sampled from the ranges [-0.5, 1.0]m/s, [-0.3, 0.3]m/s, and [-22.5, 22.5] degrees/s, respectively. When the robot is on the ridge, stair, or block terrain types, the velocity commands exclude backward and sideways movement, due to using only a single forward-facing camera. After the initial command, the command is randomly changed once during each episode, at a time randomly sampled from [200, 250] timesteps.
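A minimal sketch of this episode set-up (terrain-type and command sampling) is given below; the structure and names are ours and only mirror the distributions described above, with the interaction between the command mode and the sampled velocities simplified.

```python
import numpy as np

rng = np.random.default_rng()
TERRAINS = ["flat", "hills", "blocks", "ridges", "stairs"]
TERRAIN_PROBS = [0.03, 0.07, 0.35, 0.2, 0.35]
COMMANDS = ["step-in-place", "step-in-place-turn", "walk", "walk-turn"]
COMMAND_PROBS = [0.05, 0.05, 0.6, 0.3]

def sample_episode_setup():
    terrain = rng.choice(TERRAINS, p=TERRAIN_PROBS)
    mode = rng.choice(COMMANDS, p=COMMAND_PROBS)
    cmd = {
        "vx": rng.uniform(-0.5, 1.0),          # m/s
        "vy": rng.uniform(-0.3, 0.3),          # m/s
        "yaw_rate": rng.uniform(-22.5, 22.5),  # deg/s
    }
    # single forward-facing camera: no backward/sideways commands on hard terrains
    if terrain in ("ridges", "stairs", "blocks"):
        cmd["vx"], cmd["vy"] = max(cmd["vx"], 0.0), 0.0
    return terrain, mode, cmd
```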
Each episode runs for a maximum of 400 timesteps, which is 8 seconds of simulated time. Episode termination conditions include: 1) roll or pitch angle of the floating base is greater than 15 degrees; 2) the norm of linear velocities of the base is greater than 1 plus the commanded velocity; 3) the robot base height is below 40cm in the global frame; 4) the robot's body collides with the terrain. The conditions correspond to undesirable robot behavior and implicitly punish the robot by causing it to not receive future rewards.
**Reward Function.** Our reward function aims to produce a gait with a well regulated contact sequence that is able to traverse the simulated environment in a way that is likely to transfer to
Fig. 3: Policy consists of a blind policy and a vision-based modulator.
a real robot. In particular, we use a three component reward function where all components are weighted equally, \(R=R_{0}+R_{\text{accel}}+R_{\text{collision}}\). The base locomotion component \(R_{0}\) is the reward function used in prior work on blind locomotion [9], which regulates the contact sequence and timing. This reward component encourages alternating swing and stance phases to align with the clock values provided to the policy.
Besides the base locomotion reward, we identified additional components as being important for facilitating sim-to-real transfer for complex terrains. The foot acceleration component \(R_{\text{accel}}\) penalizes the left and right foot accelerations \(\ddot{x}_{l}\) and \(\ddot{x}_{r}\). This reward helps prevent fast swing leg motions, which we found could arise during training on difficult terrain. Specifically, the reward is defined as \(R_{\text{accel}}=0.05\exp\left(-0.02\cdot(\lVert\ddot{x}_{l}\rVert+\lVert\ddot{x}_{r}\rVert)\right)\), which provides a more positive reward for smaller accelerations. The foot collision component \(R_{\text{collision}}\) adds a negative penalty of -5 whenever the forefront of the foot stumbles against the terrain, which helps to achieve collision-free leg-swing trajectories. However, due to the nature of RL, this term only acts as a soft constraint that the robot may violate in favor of not falling down, in order to collect more reward throughout the training episode. We found that training without this penalty term can work in simulation, but results in significantly more frequent foot stumble and collision events, and subsequently prevents sim-to-real transfer. Touch sensors added at the forefront of each foot's collision model detect collision events and trigger the negative reward.
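For concreteness, the two additional reward components can be written as follows (a sketch with our own names; the base locomotion term \(R_{0}\) from [9] is omitted):

```python
import numpy as np

def accel_reward(foot_accel_left, foot_accel_right):
    # R_accel = 0.05 * exp(-0.02 * (||a_l|| + ||a_r||))
    total = np.linalg.norm(foot_accel_left) + np.linalg.norm(foot_accel_right)
    return 0.05 * np.exp(-0.02 * total)

def collision_reward(forefoot_touch_left, forefoot_touch_right):
    # R_collision = -5 whenever a forefoot touch sensor reports a terrain collision
    return -5.0 if (forefoot_touch_left or forefoot_touch_right) else 0.0
```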
**Domain Randomization.** To enable successful sim-to-real transfer and diversify the data distribution during training, the training process involves a range of domain randomization, shown in Table II, over model parameters, actuation parameters, visual inputs, and delays. The model parameters are randomized per episode to simulate a range of robot models and also provide a wide range of state space that the policy can learn from. We found that randomizing the torque efficiency parameter is particularly important for sim-to-real on extreme terrain, such as a 0.5m step up, due to the torque saturation of the knee motor. In addition, the torque command sent to the simulator is delayed randomly up to 3ms. The visual inputs are randomized to simulate noise from the heightmap estimator and prevent the policy from over-fitting to the exact simulated heightmaps. Prior work [11] based on teacher-student training also found that heightmap randomization was important for sim-to-real transfer. The entire heightmap is shifted per episode and policy step in all directions to simulate temporal noises. The heightmap is passed into the policy with a randomized amount of delay up to 100ms, in order to account for faster locomotion speeds.
## V Heightmap Prediction from Egocentric Vision
The heightmap predictor is a neural network that maps a history of egocentric depth images and robot states into an estimated heightmap in front of the robot. This problem is challenging due to the aggressive camera motions, occlusions, and noisy images. Below, we describe the architecture and simulation-based training process used to achieve successful sim-to-real transfer.
**Network Architecture and Losses** Figure 5 shows the network architecture, which consists of two stages. For the first stage, we use an LSTM network so that the memory can help reconstruct missing information from the history of robot states and depth images. Training of this stage is done by minimizing the mean-squared error to the ground truth
\begin{table}
\begin{tabular}{c|l|l|l} \hline \hline
 & **Parameter** & **Range** & **Unit** \\ \hline
\multirow{6}{*}{Simulation} & Joint Damping & [0.5, 2.5] & \% \\ \cline{2-4}
 & Mass & [-0.25, 0.25] & \% \\ \cline{2-4}
 & Center of Mass Location & [-0.01, 0.01] & m \\ \cline{2-4}
 & Passive Spring Stiffness & [-0.00, 0.00] & Nm/rad \\ \cline{2-4}
 & Torque Efficiency & [0.9, 1.0] & \% \\ \cline{2-4}
 & Torque Delay & [0.5, 3] & ms \\ \hline
\multirow{4}{*}{Heightmap} & Shift in XY & [-0.05, 0.05] per episode & m \\ \cline{2-4}
 & Shift in Z & [-0.1, 0.1] per episode & m \\ \cline{2-4}
 & Shift in Z & [-0.02, 0.01] per policy step & m \\ \cline{2-4}
 & Delay & [20, 100] & ms \\ \hline \hline
\end{tabular}
\end{table} TABLE II: Parameters and ranges used in domain randomization. All parameters are uniformly sampled within the range.
Fig. 4: Types of terrain used in training.
\begin{table}
\begin{tabular}{c|l|l||l|l} \hline \hline
\multirow{2}{*}{**Terrain**} & \multirow{2}{*}{**Parameter**} & \multirow{2}{*}{**Range**} & \multicolumn{2}{c}{**Range for Evaluation**} \\ \cline{4-5}
 & & & _Easy_ & _Hard_ \\ \hline
Ridge & height [m] & [0.05, 0.6] & [0.05, 0.5] & [0.5, 0.6] \\ \hline
\multirow{3}{*}{Stair} & height [m] & [0.05, 0.2] & [0.05, 0.1] & [0.1, 0.2] \\ \cline{2-5}
 & length [m] & [0.25, 0.4] & [0.4, 0.4] & [0.25, 0.4] \\ \cline{2-5}
 & steps & [4, 28] & [4, 12] & [12, 28] \\ \hline
\multirow{2}{*}{Block} & length or width & [0.4, 1] & [1, 1] & [0.4, 1] \\ \cline{2-5}
 & height [m] & [0.05, 0.4] & [0.05, 0.2] & [0.2, 0.4] \\ \hline \hline
\end{tabular}
\end{table} TABLE I: Ranges for terrain randomization used in training and evaluation. All terrains are uniformly sampled within the range during training or evaluation.
Fig. 5: Predictor architecture. Heightmap is captured from hardware.
heightmap with heights relative to the robot floating base. We found that training just Stage 1 resulted in heightmaps with non-flat surfaces around corners and edges, which were difficult to correct via straightforward architecture and loss function modifications. To improve over the raw heightmaps, Stage 2 utilizes a U-Net architecture [35] which has shown to be effective in learning a pixel-to-pixel transformation. This stage takes in the raw heightmap and outputs a cleaned version of the same size. Stage 2 uses L1 loss and leads to a refined heightmap with sharper corners, edges, and flat surfaces.
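A schematic PyTorch sketch of this two-stage predictor is given below. The layer sizes, the image encoder, and the simplified convolutional refinement stage are our assumptions (the paper's Stage 2 is a U-Net [35]); the sketch is only meant to show the data flow from the history of depth images and robot states to a raw and a refined heightmap.

```python
import torch
import torch.nn as nn

class HeightmapPredictor(nn.Module):
    """Stage 1: LSTM over (depth-image embedding, robot state) -> raw heightmap (MSE loss).
       Stage 2: refinement of the raw heightmap (L1 loss); a small CNN stands in for the U-Net."""

    def __init__(self, img_feat=128, state_dim=6, hidden=256, map_hw=(30, 20)):
        super().__init__()
        self.map_hw = map_hw                        # 1.5m x 1m at 5cm resolution
        self.img_encoder = nn.Sequential(           # depth image -> feature vector
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(32 * 16, img_feat))
        self.lstm = nn.LSTM(img_feat + state_dim, hidden, batch_first=True)
        self.raw_head = nn.Linear(hidden, map_hw[0] * map_hw[1])
        self.refine = nn.Sequential(                 # assumed stand-in for the U-Net stage
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, depth_seq, state_seq):
        # depth_seq: (B, T, 1, H, W); state_seq: (B, T, state_dim), e.g. both foot positions
        B, T = depth_seq.shape[:2]
        feats = self.img_encoder(depth_seq.flatten(0, 1)).view(B, T, -1)
        out, _ = self.lstm(torch.cat([feats, state_seq], dim=-1))
        raw = self.raw_head(out[:, -1]).view(B, 1, *self.map_hw)
        return raw, raw + self.refine(raw)           # (Stage-1 map, refined map)
```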
**Simulation-Based Data Generation** Given a trained locomotion control policy, we use the simulator to execute episodes of the policy to collect training data. In particular, we run the policy in stochastic mode, where the actions are sampled from the action distribution at each time step. This has the effect of producing data around the typical data distribution the policy will encounter at runtime. The dataset contains one trajectory for each policy-execution episode, where the trajectory at time \(t\) stores the robot state \(s_{t}\), the depth image \(I_{t}\) and the ground truth heightmap \(m_{t}\). Note that during data collection with the policy, we use the same domain randomization as used during policy learning. The simulation model includes an egocentric camera that generates depth images at each step. The camera is mounted at the top of the floating base and tilted by 60 degrees from the horizontal plane, giving a viewing area of approximately 2.5m by 2.5m in front of the robot, which is sufficient for the size of the heightmap. The depth images are rendered in MuJoCo. Robot states only include foot positions in the floating-base frame. The final training set contains 30,000 episodes.
**Depth Image Randomization** Depth images from real cameras are significantly noisier than the clean depth images from simulated cameras. During the generation of training episodes, we add a set of randomizations to the generated depth images to help bridge the sim-to-real gap. First, the camera pose and field of view (FOV) are randomized per episode. The camera pose shift has a range of \(\pm 1\) cm for XYZ and \(\pm 1\) degree for pitch angle. The FOV shift has a range of \(\pm 1\) degree. After data collection, a post-processing step adds a set of random noises on top of the rendered depth images, including Gaussian noise, image rotation, edge noise, random small objects on the image, and spot noise. Each type of noise is included in an image with a probability of 0.3. We found that this combination of depth image randomizations enabled more effective sim-to-real transfer for the learned predictor.
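A sketch of such a post-processing noise model is shown below. Each corruption is applied independently with probability 0.3 as described above, while the noise magnitudes and implementation details are our assumptions; the input is assumed to be a float depth image in meters.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng()

def augment_depth(depth, p=0.3):
    """Apply the randomizations described above to one simulated depth image."""
    img = depth.astype(float)
    if rng.random() < p:                                  # Gaussian pixel noise
        img = img + rng.normal(0.0, 0.01, img.shape)
    if rng.random() < p:                                  # small in-plane rotation
        img = rotate(img, rng.uniform(-2, 2), reshape=False, mode="nearest")
    if rng.random() < p:                                  # noise along depth edges
        gy, gx = np.gradient(img)
        edges = np.hypot(gx, gy) > 0.05
        img[edges] += rng.normal(0.0, 0.05, edges.sum())
    if rng.random() < p:                                  # random small near object
        h, w = img.shape
        y, x = rng.integers(0, h - 8), rng.integers(0, w - 8)
        img[y:y + 8, x:x + 8] = rng.uniform(0.2, 0.5)
    if rng.random() < p:                                  # spot (dropout) noise
        mask = rng.random(img.shape) < 0.01
        img[mask] = 0.0
    return img
```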
## VI Simulation Results
### _Policy Performance_
We use the trained policy, along with a number of different policy setups, to evaluate the performance in simulation for the ablation study. For each terrain we define an easy and a hard configuration as shown in Table I. For each policy setup, we collect 1000 episodes per terrain mode and compute three metrics as shown in Figure 7-A. _Ours_ is the policy trained with all components enabled and is the one used for sim-to-real transfer. Other setups are for controlled tests by removing one feature at a time. We also trained a blind policy across all terrains. In **Success Rate**, all policies have approximately the same performance in the easy mode of the terrains. For more difficult terrain modes, however, policies _w/o Learned Clock_ and _w/o Privileged Critic_ show significantly lower success rates. This means that control over the gait's contact timing and sequence as well as a more accurate value estimation improve policy performance over hard terrains. **Episodes with foot collision** shows that, compared to _Ours_, other policies have significantly more foot collision events. These random foot collisions with the terrain could lead to failures. Indeed, **Terminations due to foot collision** indicates that collisions account for most failure cases overall. Although foot collisions lead to frequent failures, policy _w/o Foot Collision Reward_ has a success rate similar to _Ours_. When looking at policy _w/o Foot Collision Reward_ in simulation, the policy learns to deal with collisions and treat them as potentially useful proprioceptive feedback when the robot touches the terrain. For example, the robot will walk up high step-ups by sliding the foot along the vertical surface of the terrain.
### _Local Terrain Reconstruction_
We evaluated the heightmap predictor in simulation to validate the reconstruction quality, as shown in Table III. We also implemented other architectures for ablations, including an MLP model and a transformer-based model. A key distinction among the model architectures is the history representation: _LSTM_ has an implicit history, _Transformer_ has a fixed window size of 0.6 seconds, and _MLP_ has no history. Additionally, the comparison includes an LSTM model, _LSTM w/o robot states_, that does not use robot states as input. Among all models, _LSTM_ achieves the best reconstruction loss. Without robot states as input, _LSTM w/o robot states_ produces worse reconstructions, indicating that proprioception and vision are both required to accurately estimate the local terrain.
### _Closed-loop Evaluation_
We couple each variant of the heightmap predictor with the policy (_Ours_ in Figure 7-A) to evaluate the closed-loop performance in simulation, shown in Figure 7-B. We use the same metrics as in the policy-performance study. During each rollout, the predictor takes in the noisy depth images
Fig. 6: Depth image from simulation and real world, with corresponding real predicted heightmap and simulation heightmap.
from the simulation rendering and the policy takes in the clean heightmap output by the predictor stage in Figure 5. In **Success Rate**, all predictors produce similar performance over each terrain mode. This suggests that, regardless of reconstruction errors, the learned policy is robust to estimation noise. In **Episodes with foot collision**, compared to _LSTM_, the other models show worse performance and produce more collision events. In **Termination due to foot collision**, compared to _LSTM_, the other models fail more often due to unfavorable foot collisions.
## VII Sim-to-real Transfer
### _Experimental Setup_
We deployed the proposed system on the bipedal robot Cassie. To endow the vision system with fast inference, we added a D455 Intel Realsense camera and an NVIDIA Jetson Orin Nano module to Cassie. Depth images are post-processed with a hole-filling filter and distance clipping before being sent to the heightmap predictor. The main control policy runs on the robot's main computer and inference of the heightmap predictor runs asynchronously on the Jetson. The camera depth stream is set to 90 FPS and heightmap prediction inference runs at 200 Hz. The end-to-end delay between the main control policy and the heightmap predictor is measured to be at most 20 ms, including the UDP communication delay, camera stream, and model inference. To create structured terrains similar to those in simulation training, we built wooden blocks with various heights and sizes in the lab environment shown in Figure 1.
### _Evaluation_
We tested the proposed system over various terrains. Overall, the system enables the robot Cassie to traverse single high blocks, stairs, and random blocks of various dimensions. Although the egocentric camera cannot look underneath the robot, we found that the policy is able to traverse terrain while reasoning about the swing trajectory over the terrain beneath it. This observation suggests that the policy keeps an internal odometry while approaching upcoming terrain. The system also enables the robot to climb a 0.5 m high step-up, where torque saturation occurs on the leading stance leg while lifting the entire robot from the lower ground onto the high step. We also tested the learned system on a treadmill while a human operator continually fed random blocks down the treadmill. The robot is able to traverse the upcoming blocks and maintain the desired speed under these conditions. Please refer to the submission video for hardware demonstrations.
## VIII Conclusion
In this work, we proposed a fully learned visual-locomotion system using neural networks to control highly dynamic bipedal robots going over challenging terrains. To deal with constrained locomotion over complex terrains, we used simulation to train a robust control policy. Additionally, we identified several key factors to enable such policy performance, including adaptive gait control and collision-free swing leg control. We also provided key ingredients to convert an egocentric view from a single depth camera to a local terrain heightmap with simulation-only data. Using visual and proprioceptive information all in the local frame, the entire system ran onboard and achieved successful sim-to-real transfer without the need for explicit odometry estimation. We believe these key components are fundamental to our study as well as future research on learning vision-based dynamic locomotion for legged robots.
\begin{table}
\begin{tabular}{|l|l|} \hline
Model Architecture & Reconstruction Loss (MAE) [cm] \\ \hline
LSTM & **2.806** \\ \hline
Transformer & 4.221 \\ \hline
MLP & 4.932 \\ \hline
LSTM (w/o robot states) & 4.448 \\ \hline \end{tabular}
\end{table} TABLE III: Reconstruction loss with various heightmap predictors.
Fig. 7: **A. Ablation study on policy training. B. Ablation study on heightmap predictor architecture. Each ablation study uses data collected from a range of terrains defined in Table I. **Success rate** indicates that the robot does not fall down during 10 seconds of rollout. **Episodes with foot collision** indicates the number of episodes in which one or more foot collision events occurred during rollouts; such random collision events are unfavorable for hardware deployment. **Termination due to foot collision** shows the percentage of foot collision events that lead to failures. All plots are evaluated with a confidence interval of 95%. |
2309.17089 | Too Big, so Fail? -- Enabling Neural Construction Methods to Solve
Large-Scale Routing Problems | In recent years new deep learning approaches to solve combinatorial
optimization problems, in particular NP-hard Vehicle Routing Problems (VRP),
have been proposed. The most impactful of these methods are sequential neural
construction approaches which are usually trained via reinforcement learning.
Due to the high training costs of these models, they usually are trained on
limited instance sizes (e.g. serving 100 customers) and later applied to vastly
larger instance size (e.g. 2000 customers). By means of a systematic scale-up
study we show that even state-of-the-art neural construction methods are
outperformed by simple heuristics, failing to generalize to larger problem
instances. We propose to use the ruin recreate principle that alternates
between completely destroying a localized part of the solution and then
recreating an improved variant. In this way, neural construction methods like
POMO are never applied to the global problem but just in the reconstruction
step, which only involves partial problems much closer in size to their
original training instances. In thorough experiments on four datasets of
varying distributions and modalities we show that our neural ruin recreate
approach outperforms alternative forms of improving construction methods such
as sampling and beam search and in several experiments also advanced local
search approaches. | Jonas K. Falkner, Lars Schmidt-Thieme | 2023-09-29T09:36:37Z | http://arxiv.org/abs/2309.17089v1 | # Too Big, so Fail? - Enabling Neural Construction Methods to Solve Large-Scale Routing Problems
###### Abstract
In recent years new deep learning approaches to solve combinatorial optimization problems, in particular NP-hard Vehicle Routing Problems (VRP), have been proposed. The most impactful of these methods are sequential neural construction approaches which are usually trained via reinforcement learning. Due to the high training costs of these models, they usually are trained on limited instance sizes (e.g. serving 100 customers) and later applied to vastly larger instance sizes (e.g. 2000 customers). By means of a systematic scale-up study we show that even state-of-the-art neural construction methods are outperformed by simple heuristics, failing to generalize to larger problem instances. We propose to use the ruin recreate principle [42] that alternates between completely destroying a localized part of the solution and then recreating an improved variant. In this way, neural construction methods like POMO [31] are never applied to the global problem but just in the reconstruction step, which only involves partial problems much closer in size to their original training instances. In thorough experiments on four datasets of varying distributions and modalities we show that our neural ruin recreate approach outperforms alternative forms of improving construction methods such as sampling and beam search and in several experiments also advanced local search approaches.
## 1 Introduction
Neural Construction (NC) methods [5; 30; 25; 12; 31; 51] have been the driving force behind the success of data-driven and machine-learning-based solution approaches for routing problems since the advent of pointer networks [49] in 2015. Apart from the well-known Traveling Salesman Problem (TSP), the most prominent of these problems is the Capacitated Vehicle Routing Problem (CVRP), which involves the planning of several vehicle tours to serve a number of \(N\) customers from a single depot with capacitated vehicles [46]. Notwithstanding their success and demonstrated performance on small scale (usually uniform) data, NC approaches exhibit major problems with generalization to larger instances. We perform a comprehensive computational study to discover and evaluate the weaknesses of current state-of-the-art NC methods. The results show that generalization to larger problem sizes remains an unsolved problem for all methods no matter the inference strategy. Even advanced and expensive search approaches like Simulation Guided Beam Search (SGBS) [8] only lead to marginal improvements over a greedy search strategy as soon as instance sizes are more than twice the size of the instances in the training data. Moreover, existing inference methods fail to make effective use of increased inference times to achieve significant improvements in terms of final performance.
We take these results as motivation to design a new meta control method which transforms the constructive approach into an iterative improvement method applied to smaller sub-graphs of the original problem. The advantages of this formulation are twofold: (i) the constructive method is
applied to promising sub-graphs with a size close to the training set, where NC methods have been shown to outperform many heuristic approaches, and (ii) the iterative formulation can make effective use of additional inference time by focusing on sub-graphs with high potential for further improvement.
### Generalization performance of NC methods
State-of-the-art constructive methods for the CVRP like POMO [31] create a solution sequentially by consecutively adding customer nodes to a tour until the capacity of the vehicle is exhausted. At this time the vehicle returns to the depot and a new tour is started. NC methods have been shown to outperform simple construction heuristics like the sweep method [16] and the Clarke-Wright savings algorithm [9] on problem instances with uniformly sampled coordinates close to the size \(N\) of their training set [30]. The sweep method rotates a beam around the depot node, adding customer nodes sequentially to a tour in the order they are passed by the beam, whereas the savings algorithm starts with singleton routes and creates tours by consecutively merging the two routes which lead to the largest saving, i.e. reduction in total cost. In Figure 1 we show the performance of POMO and SGBS [8], an advanced beam search approach with efficient rollouts and backtracking. For POMO we evaluated two different inference strategies, greedy and sampling. The greedy approach performs a greedy rollout by taking the node with maximum probability specified by the learned policy at each step. The rollouts are done in parallel for each possible augmented starting node from which a tour can be created. In contrast, the sampling approach performs a number of rollouts by sampling the next node from the stochastic policy model.
Figure 1 shows that the models, which were trained on a set of instances of size \(N=100\), still achieve reasonable performance for problems of size \(N=200\). For these instances of twice the size of the training instances they perform close or on par with the savings algorithm. However, all methods are already significantly outperformed by the heuristic for problems of size \(N=500\) and beyond. In sub-figure 1(c) we show the results on the well-known Uchoa benchmark [47]. The data involves varying coordinate, demand and capacity distributions. For that reason and in order to plot a smooth curve to enable a useful comparison, we put the results into bins of size 5 and take the
Figure 1: **(a)** and **(b)**: Results of constructive methods POMO [31] (greedy, sampling), SGBS [8] and Clarke-Wright Savings [9] on uniform and mixed data for different instance sizes. **(c)**: Results on Uchoa benchmark [47]. Note that for TAM-AM only results for \(N>500\) were reported in [21]. For the benchmark we also report the _best known solution_ (BKS). **(d)**: Normalized cost and run time of the POMO sampling approach for different sampling sizes on uniform and mixed data for instance size \(N=500\). Please note the logarithmic scale of the x-axes.
mean over each bin (the non-binned results can be found in the Appendix). Furthermore, we also show the benchmark results of the recent real-time NC method TAM-AM [21] and the best known solution as officially reported on CVRPLIB1. As can be seen, even this new state-of-the-art method is outperformed by the savings algorithm on the benchmark. Similar behavior for NC approaches has been observed for the TSP in [24]. Even emerging methods like [4], which are concerned with improving generalization of NC models to larger instances, fail to outperform the savings algorithm already on problem sizes just beyond \(N=100\). While the performance can generally be improved by training on larger instances such that the training distribution is closer to the final test distribution, this is not practical. The reason for that is the significant increase in complexity when training NC methods with reinforcement learning on large instances, since the sequential solution construction based on policy gradients requires caching all gradient values for the update. Therefore, in this work we focus explicitly on the _generalization_ performance for larger problems.
Footnote 1: [http://vrp.galgos.inf.puc-rio.br/index.php/en/](http://vrp.galgos.inf.puc-rio.br/index.php/en/)
### Effective use of available inference time
Another dimension which has to be considered for NC methods is the actual use of the available inference time. In general, the argument can be made that NC approaches normally should quickly find a reasonable starting solution which then can be improved with heuristics or iterative methods in a second stage. However, because of their conceptual simplicity and the easy adaption to new problems, different approaches were proposed to leverage the learned information of the NC policy to find better solutions. These methods usually define some kind of search on the policy model and aim to efficiently traverse the corresponding search space while escaping local optima. The first such approach for NC methods solving routing problems was proposed in [5] and simply samples a number of trajectories from the stochastic policy. This sampling strategy can be seen as a form of scatter search guided by the learned policy but without backtracking. Later methods added smarter starting points, uncorrelated samples, data augmentation and softmax sparsification to the sampling strategy to improve results [31; 51; 4]. Nevertheless, these strategies very quickly have diminishing returns as can be seen in fig. 1(d). Even the advanced POMO sampling strategy achieves diminishing gains when doubling the sampling size. Other work proposes more complex search methods utilizing the probabilistic assignment space of the policy model [19; 29; 8]. Although such advanced search methods can help to increase the generalization performance, they often incur a significant increase in terms of computational resources and run times while only leading to marginal gains, as can be seen in figure 1 and our experiments in section 5.
Thus, we propose a new method, termed _Neural Ruin Recreate_ (NRR), designed to enable the use of learned construction heuristics for significantly larger CVRP instances than encountered in the training data. The main idea is to embed learned NC methods into a powerful improvement procedure which can effectively leverage the learned information of the NC policy model for larger instances. To that end we combine recent ideas to train a scoring function that estimates the assignment probability of nodes to different subsets [13] and the expected improvement [33] of applying a solution method (which is treated as a black box) to a particular sub-problem. We use the resulting scoring function to select sub-graphs (SGs) of the global solution graph which have a high remaining improvement potential. Our algorithm is defined in terms of a ruin-recreate procedure [42] and addresses several detailed design decisions consisting of: i) initial solution, ii) SG construction, iii) SG selection, iv) SG solution (update) and v) acceptance of the update.
**Our contributions:**
1. We show in a systematic scale-up experiment that neural construction methods do not generalize well to problem sizes beyond those seen during training.
2. We propose a new approach motivated by the well-established ruin recreate methodology to enable neural construction methods to efficiently solve routing problems which are up to 40x larger than the instances of their training set.
3. In a rigorous comparative study we evaluate the efficacy of our method compared to state-of-the-art constructive and improvement approaches. Our method shows competitive results significantly outperforming all other methods on the most difficult instances of the Uchoa benchmark [47] and the real-world instances of [33]. We release all our models and code2.
## 2 Preliminaries
**Problem Formulation**: The capacitated vehicle routing problem (CVRP) is an important NP-hard combinatorial optimization problem [46]. It extends the well-known Traveling Salesman Problem (TSP) to the case with multiple vehicles. It is concerned with serving a number of \(N\) customers with coordinates in \(\mathbb{R}^{2}\) from a single depot. Each customer \(n\) has a demand \(q_{n}>0\) that needs to be served by \(K\) vehicles with homogeneous capacities \(Q\). Moreover, every tour has to start and end at the depot node and every customer node has to be visited exactly once. The objective is to minimize the total length of all tours in terms of a distance measure \(\delta:\mathbb{R}^{2}\times\mathbb{R}^{2}\rightarrow\mathbb{R}\) (usually the euclidean distance).
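For concreteness, the objective and the feasibility constraints can be checked with a small helper like the one below; it is an illustrative utility, not part of the released code base.

```
import numpy as np

def route_cost(routes, coords, demands, capacity, depot=0):
    """Total euclidean length of a CVRP solution given as a list of routes.

    Each route is a list of customer indices (excluding the depot); every route implicitly
    starts and ends at the depot. Raises if a capacity or coverage constraint is violated.
    """
    visited, total = [], 0.0
    for route in routes:
        if demands[route].sum() > capacity:
            raise ValueError("capacity exceeded")
        path = [depot] + list(route) + [depot]
        total += sum(np.linalg.norm(coords[a] - coords[b]) for a, b in zip(path[:-1], path[1:]))
        visited.extend(route)
    if sorted(visited) != list(range(1, len(coords))):
        raise ValueError("every customer must be served exactly once")
    return total
```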
**Neural Construction Methods**: Neural construction (NC) methods [5; 30; 25; 12; 31; 51] create a solution sequentially one node at a time. They normally utilize an encoder-decoder model where the encoder embeds the current problem state and the decoder is queried at each step to select the next node to add to the currently constructed tour. If the depot node is selected, the vehicle returns and a new tour is started. A masking scheme ensures that the problem constraints are satisfied. In the simplest case each customer which has already been visited is masked as well as any customer with a demand \(q\) larger than the remaining capacity \(Q_{k}\) of the currently employed vehicle \(k\). However, to tackle more complex problems advanced masking schemes can be employed [12; 32].
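A minimal version of this simplest masking scheme could look as follows (illustrative only; node 0 is assumed to be the depot and `visited`/`demands` are NumPy arrays over all nodes).

```
import numpy as np

def feasibility_mask(visited, demands, remaining_capacity, at_depot):
    """Boolean mask over nodes that may be selected next by the decoder.

    Visited customers and customers whose demand exceeds the remaining vehicle capacity
    are masked out; the depot may always be selected unless the vehicle is already there.
    """
    mask = ~visited                           # unvisited customers are allowed
    mask &= demands <= remaining_capacity     # demand must fit into the current vehicle
    mask[0] = not at_depot                    # depot allowed unless we are currently at it
    return mask
```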
### Ruin Recreate Principle
The ruin recreate (RR) principle was coined by [42] and is a general approach to solve complex combinatorial optimization problems. The key idea of RR is to first _ruin_, i.e. destroy, a significant part of the current problem solution and then to _recreate_ the destroyed part, leading to a better overall solution. This concept is in strong contrast to local search (LS) approaches [2], which are commonly used to optimize routing problems and usually apply small changes to a solution to end up at a close but slightly different solution. In [42] it was demonstrated that, in particular for complex or very large problems, the RR approach leads to significantly better solutions than comparable LS methods, since it is able to better navigate the corresponding rough search spaces and to more easily escape from local optima by doing "large steps". On top of this procedure, an acceptance method like _Threshold Accepting_ [11] or _Simulated Annealing_ (SA) [28] is often used to guide the underlying search. The general procedure is described in Algorithm 1.
```
input : Solution space \(S\), cost function \(c\), stopping criterion
1  \(s\leftarrow\texttt{init}(S)\)  // Construct initial solution
2  while not stopping criterion do
3      \(g\leftarrow\texttt{select\_region}(s)\)  // Choose part of solution to ruin
4      \(s^{\prime}\leftarrow\texttt{ruin}(s,g)\)
5      \(s^{\prime}\leftarrow\texttt{recreate}(S,s^{\prime})\)
6      \(s\leftarrow\texttt{accept}(s,s^{\prime},c)\)  // Decide whether to accept new solution
7  return \(s\)
```
**Algorithm 1** Ruin Recreate
The RR concept has been reinvented several times over the years and is known under different names. To the best of our knowledge the principle was first applied in the method proposed in [10] for wiring of electronic circuits, which was called _Rip-Up and Reroute_. Later, _Large Neighborhood Search_ (LNS) [43], _Ruin Recreate_ (RR) [42] and _Partial OPtimization Metaheuristic Under Special Intensification Conditions_ (POPMUSIC) [40] were introduced, which apply the same principle to optimize VRPs. In the machine learning (ML) field the concept was first used with some learned components by _Neural Large Neighborhood Search_ (NLNS) [20] and _Learning to Delegate_ (L2D) [33]. However, all of these related approaches use slightly different strategies for the region selection, ruin and recreate steps (Alg. 1 lines 3, 4 and 5 respectively). Regarding the ML approaches, NLNS employs an NC method in the recreate step to repair randomly destroyed sub-solutions, whereas L2D utilizes a learned method for region selection while the recreation is done with heuristic solvers. Thus, we are the first to propose a combined neural RR procedure using learned methods for both the selection (neural
scoring function) and the recreate (NC method) step. Moreover, we are the first to draw attention to the fact that these different RR methods implement similar concepts under varying names, and we conduct the first comprehensive comparison of the corresponding approaches (see section 5).
## 3 Proposed Method
In order to apply NC methods to much larger problem sizes we propose to combine them with a ruin recreate type of improvement approach. The motivation is to apply the fully trained NC method to problems with a size close to that of their respective training instances. Our procedure first creates a set \(G\) of sub-graphs from the global graph \(s\) representing the current solution. Then we aim to select a suitable sub-graph \(g\) from the set \(G\). This SG is ruined by completely removing all of its edges (which represented the original part of the solution). Finally, we recreate a new solution \(s_{g}\) for sub-graph \(g\) with the respective NC method \(\pi\) and then re-insert it into the global solution \(s\). We describe the method in Algorithm 2 and visualize the general procedure in Figure 2. In the following sections we describe the different building blocks which constitute our method.
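The overall loop can be sketched as follows. Every component is passed in as a callable, mirroring the building blocks discussed below; the function names and the greedy selection used here are illustrative simplifications of the actual procedure described in Algorithm 2.

```
def neural_ruin_recreate(init_solution, construct_sgs, score_fn, ruin, recreate,
                         insert, accept, n_iters=1000):
    """Sketch of the NRR loop; every argument is a callable implementing one building block.

    init_solution: returns the initial global solution (e.g. Clarke-Wright savings).
    construct_sgs: builds the candidate sub-graph set G from the current solution.
    score_fn:      the neural scoring function f_theta estimating improvement potential.
    ruin/recreate: drop all edges of the chosen SG / rebuild it with the NC policy.
    insert/accept: splice the new sub-solution back in / SA-style acceptance rule.
    """
    s = init_solution()
    for _ in range(n_iters):
        G = construct_sgs(s)                 # candidate SGs from the current tours
        g = max(G, key=score_fn)             # greedy selection by estimated improvement
        partial = ruin(s, g)                 # remove all edges of the selected SG
        s_g = recreate(g)                    # NC method constructs a new SG solution
        candidate = insert(partial, s_g)     # new global candidate solution
        s = accept(s, candidate)             # keep or reject the update
    return s
```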
### Initial solution
NC methods without advanced search procedures are usually employed to construct an initial solution. These initial solutions are crucial for subsequent iterative improvement algorithms to find good final solutions. In particular, existing work [7; 50] has shown that randomly constructed initial solutions can often lead to sub-optimal improvement results when used for procedures which do not require significant levels of randomness as an essential part of their improvement strategy (e.g. LKH3 [17] requires random starts for each of its trials). However, since the studied NC methods fail to produce decent initial solutions for large-scale problems, we use the Clarke-Wright savings algorithm [9], which showed strong results in our preliminary generalization study (see section 1.1).
### SG construction
Another crucial component of our algorithm is the construction of the set of sub-graphs \(G_{t}\) from the global solution graph \(s_{t}\) at iteration \(t\) shown in Figure 2 (b) and (c) and Alg. 2 (line 3). In the following we will omit the iteration index \(t\) to simplify notation. The set \(G\) represents the available SGs which can be selected for further improvement. The key idea here is to put sub-graphs \(g\) into \(G\) which are of approximately the same size \(N_{g}\approx N_{\text{train}}\) as the training instances of the respective NC
Figure 2: One iteration of our NRR procedure: from the current solution graph \(s\)**(a)** several SGs are constructed **(b)** and put into the set \(G\)**(c)**. Then a promising SG \(g\) is selected according to its improvement potential \(\hat{\gamma}_{g}\) estimated by the neural scoring function \(f_{\theta}\) and its edges are removed **(d)**. Finally, the NC method \(\pi\) is applied to recreate a new sub-graph solution \(s_{g}\)**(e)** which then is inserted into the global solution graph \(s\) to arrive at the new candidate solution \(s^{\prime}\)**(f)**.
method. There are different possible approaches one can utilize to construct suitable SGs. We choose to use the tours in the solution graph \(r\in s\) as an intermediate graph structure to facilitate the efficient construction of SGs. Furthermore, this approach has the direct advantage of grouping relevant graph parts on a local scale, a concept which is used in many combinatorial optimization methods [44; 41]. The insight is that optimal tours for the CVRP usually consist of nodes which are in each other's local vicinity, while far away nodes normally do not belong to the same tour. The selection of different tours to create a respective sub-graph is facilitated by representing each tour by the geometric center \(\mu_{r}=\frac{1}{|r|}\sum_{n\in r}x_{n}\) of the coordinates \(x_{n}\in\mathbb{R}^{2}\) of all nodes \(n\in r\), where \(|r|\) is the size of tour \(r\), i.e. the total number of customers belonging to that tour. We experimented with different heuristics to construct suitable SGs from these tours. Details on the exact procedure can be found in the Appendix.
### SG selection
The next step of our algorithm (Alg. 2, line 4) is concerned with the selection of SG \(g\in G\) (in case of disjoint optimization several \(g_{1},\ldots,g_{k}\in G\)) to be ruined and recreated in the following step. In order to select useful SGs with a high remaining improvement potential, we follow [33; 13] in learning a neural scoring function \(f:\mathcal{G}\rightarrow\mathbb{R}\). This scoring function takes a sub-graph \(g\) as input and assigns it a scalar score \(\hat{\gamma}_{g}\) signifying the remaining potential for improvement when applying the respective NC method to recreate its solution. It is trained via regression on the actual improvement \(\gamma_{g}\) achieved by the NC method on a large set \(D\) of sub-graphs to estimate the remaining potential for unseen SGs during inference. The training set is created by running the respective NC method on varying sub-graphs (whose edges were removed) produced by the prior construction step and registering the achieved improvement \(\gamma_{g}\) as regression target for SG \(g\). Then we can learn a model to represent \(f_{\theta}\) with parameters \(\theta\) by minimizing the mean squared error (MSE) according to
\[\mathcal{L}_{MSE}=\frac{1}{|D|}\sum_{g\in D}(\gamma_{g}-f_{\theta}(g))^{2}. \tag{1}\]
In order to fully capture the SG structure for accurate estimates, similar to [13] we use graph neural networks (GNNs) [37] to encode all nodes as latent vectors \(\omega_{\text{node}}\in\mathbb{R}^{d_{\text{emb}}}\) of dimension \(d_{\text{emb}}\):
\[\omega_{i}^{(l)}=\operatorname{GNN}^{(l)}(\omega_{i}^{(l-1)})=\sigma\left( \operatorname{MLP}_{1}^{(l)}(\omega_{i}^{(l-1)})+\operatorname{MLP}_{2}^{(l)} (\sum_{j\in\mathcal{H}(i)}e_{ji}\cdot\omega_{j}^{(l-1)})\right), \tag{2}\]
where \(\omega_{i}^{(l-1)}\in\mathbb{R}^{1\times d_{\text{emb}}}\) represents the embedding of node \(i\) at the previous layer \(l-1\), \(\mathcal{H}(i)\) is the 1-hop graph neighborhood of node \(i\), \(e_{ji}\) is the directed edge connecting nodes \(j\) and \(i\), \(\operatorname{MLP}_{1}\) and \(\operatorname{MLP}_{2}\) are Multi-Layer Perceptrons \(\operatorname{MLP}:\mathbb{R}^{d_{\text{emb}}}\rightarrow\mathbb{R}^{d_{ \text{emb}}}\) and \(\sigma()\) is a suitable activation function. In order to capture more global information, which is essential for accurate predictions, the node neighborhoods \(\mathcal{H}(i)\) are based on the fully connected representation of the original problem graph and consist of the \(k\) nearest neighbors of each node in terms of euclidean distance. The input features to the encoder model are the coordinates \(x_{n}\) and demands \(q_{n}\) of each customer node \(n\in s\).
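A dense PyTorch implementation of one such layer might look as follows; the placement of the residual connection and normalization is an assumption based on the training details given in the Appendix, and all dimensions are illustrative.

```
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """One message-passing layer following Eq. (2): self term plus edge-weighted neighbor sum."""

    def __init__(self, d_emb=128):
        super().__init__()
        self.mlp_self = nn.Sequential(nn.Linear(d_emb, d_emb), nn.ReLU(), nn.Linear(d_emb, d_emb))
        self.mlp_neigh = nn.Sequential(nn.Linear(d_emb, d_emb), nn.ReLU(), nn.Linear(d_emb, d_emb))
        self.act = nn.ReLU()
        self.norm = nn.LayerNorm(d_emb)

    def forward(self, h, edge_w):
        # h: (N, d_emb) node embeddings; edge_w: (N, N) with edge_w[i, j] = e_ji for j in H(i), else 0
        aggregated = edge_w @ h                     # sum_j e_ji * omega_j for every node i
        out = self.act(self.mlp_self(h) + self.mlp_neigh(aggregated))
        return h + self.norm(out)                   # residual connection between layers (assumed placement)
```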
Next, pooling is used to aggregate the node information for each SG by summing over the node dimension, producing sub-graph embeddings \(\omega_{g}\). Moreover, we pool over all node embeddings after
each GNN layer and aggregate them to create a complete embedding of the solution graph \(\omega_{s}\), which is concatenated with the SG embeddings \(\omega_{g}\) and fed into a final regression head consisting of a stack of MLP layers. Furthermore, we use ReLU [38] and layer normalization [3]. To better understand the effect of the chosen model architecture and pooling operators we perform an ablation study and report the results in the Appendix. Interestingly, our model based on the simple order-1 GNN proposed by [37] significantly outperforms the more sophisticated graph attention networks (GATs) [48] which were also used in [33]. Since the update of a particular SG leaves the rest of \(s\) unchanged, it is very likely that a large subset of the same SGs is encountered in \(G\) for several iterations. Hence, we can achieve efficient processing by caching the scores for all evaluated SGs in a hash table for fast lookup, allowing us to skip SGs which were already encountered in earlier iterations. After the score assignment, different strategies can be utilized for the final selection of sub-graphs \(g\). The simplest option is a _greedy_ approach which always selects the SG with the highest score: \(g=\operatorname*{argmax}(\Gamma_{G})\) where \(\Gamma_{G}\) is the set of scores \(\gamma_{g}\ \forall g\in G\). Alternatively we may apply the softmax function to \(\Gamma_{G}\) and treat the set of scores as a distribution, _sampling_ according to the resulting probabilities. Finally, we can sample a subset of _disjoint_ SGs, optimize all of them and reinsert into \(s\) the ones which led to an improvement. In preliminary experiments we found that the last approach leads to the best results.
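In code, the three selection strategies reduce to a few lines; the sketch below is illustrative and assumes the scores have already been computed (and cached) for every candidate SG, with the `disjoint` option further assuming that the candidate SGs are already mutually disjoint, as produced by the sweep construction.

```
import numpy as np

def select_subgraphs(sg_list, scores, strategy="disjoint", k=16, rng=None):
    """Pick sub-graphs for the ruin step from precomputed scores (one score per SG)."""
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    if strategy == "greedy":                      # single highest-scored SG
        return [sg_list[int(np.argmax(scores))]]
    if strategy == "sampling":                    # softmax over scores, sample one SG
        p = np.exp(scores - scores.max())
        p /= p.sum()
        return [sg_list[rng.choice(len(sg_list), p=p)]]
    if strategy == "disjoint":                    # up to k best-scored (disjoint) SGs
        order = np.argsort(-scores)[:k]
        return [sg_list[i] for i in order]
    raise ValueError(f"unknown strategy: {strategy}")
```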
### SG solution
The solution of the SG is the main part of the ruin recreate procedure. First, the SG is ruined by completely dropping all of its edges (see Fig. 2 (d), Alg. 2, line 5). Thereby all existing tours in the SG are destroyed and all customer nodes are marked as not being served. This is in line with the general RR principle which requires the complete destruction of a substantial part of the solution [42]. After the ruin step the SG is treated as an independent CVRP and fed into the NC method \(\pi\) to recreate it by constructing a new solution (Fig. 2 (e), Alg. 2, line 6). A suitable configuration for the NC method has to be chosen to achieve useful improvement of \(g\) within reasonable time, so that a sufficient number of iterations can be executed. For POMO the trade-off is mostly between a greedy decoding strategy and sampling with different numbers of samples. In general, we found that using SGBS in the recreate step requires too much time, limiting the number of iterations and in turn hurting final performance.
### Acceptance
The canonical RR procedure includes an explicit acceptance strategy for updates to increase the likelihood of escaping local optima. Hence, we employ the well-known simulated annealing (SA) [28] approach to control the acceptance of altered candidate solutions \(s^{\prime}\) which are created through the insertion of the recreated sub-graph \(g\) into the previous solution graph \(s\).
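The acceptance rule itself is the standard Metropolis criterion; a minimal sketch (omitting the cooling schedule and the restart mechanism mentioned in the Appendix) is given below.

```
import numpy as np

def sa_accept(cost_old, cost_new, temperature, rng=None):
    """Metropolis-style acceptance rule used by simulated annealing.

    Improving candidates are always accepted; worse candidates are accepted with
    probability exp(-(cost_new - cost_old) / temperature).
    """
    rng = rng or np.random.default_rng()
    if cost_new <= cost_old:
        return True
    return rng.random() < np.exp(-(cost_new - cost_old) / max(temperature, 1e-12))
```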
## 4 Related Work
Construction methods are one of the most prominent ML methods to solve routing problems. First, PointerNetworks [49] were introduced which use an encoder-decoder model with a masking mechanism called _pointer attention_. In subsequent works this approach was improved by adding RL-based training [5], extending it to VRPs [39], replacing RNNs with Transformers [30], stabilizing training via multiple starting points (POMO) [31] or multiple decoders (MDAM) [51], and adding advanced search strategies via simulation (SGBS) [8], dynamic programming [29] or active search [19]. TAM-AM [21] employs a different approach, learning a model to partition a VRP into subsets where each subset corresponds to a TSP which satisfies the global constraints, and then using an NC model to solve these TSP instances independently of the global CVRP. Apart from NC methods, neural improvement approaches have also been proposed which start from a solution and then iteratively improve it. Wu et al. [50] propose to parameterize heuristic operators like 2-opt with a learned policy model, an approach further improved in [34] with Dual-Aspect Collaborative Transformers (DACT), while a meta-controller for local search is learned in [14]. NeuRewriter [7] selects a region and rewrites it based on a heuristic rule set. LCP [26] improves this approach by repeatedly rewriting several segments in parallel. NLNS [20] employs an NC method as a repair operator in a Large Neighborhood Search (LNS) [43], whereas a hierarchical problem decomposition approach is proposed in [53]. While existing methods utilize the NC model to repair partial solutions, we use it on a much larger scale, employing it to fully recreate a complete subgraph (which is a substantial part of the global solution graph) from scratch. Learning to delegate (L2D) [33] selects regions to update based on the
estimated improvement but uses heuristics to reconstruct the chosen sub-solutions. In contrast, we reformulate the procedure in a more principled and general way by drawing the connection to the well-established RR principle and employ NC methods to recreate SG solutions. Our results show that the careful design of the algorithm leads to significant performance increases. A different approach is used in NeuroLKH [52], which employs a learned model for the prior selection of the edge candidate set of the well-known heuristic solver LKH [17], which is commonly used as a baseline to compare ML-based routing algorithms. Moreover, some prior work regarding generalization performance has been done by Jian et al. [23], who use meta-learning to generalize to different coordinate distributions, while Fu et al. [15] devise a method to generalize a smaller pre-trained model to larger TSP instances, but require the expensive solution of a set covering problem. A detailed overview is given in [6; 35].
## 5 Experiments
**Datasets** To evaluate the generalization performance of different methods we focus our comparison on the well-known Uchoa benchmark dataset [47]. Moreover, we evaluate the models for which code is available on the original real world instances used in [33] and our own dataset which consists of mixed uniform and clustered coordinate data of size \(N\in[500,1000,2000,4000]\) (see Appendix).
**Baselines** We compare our method against the state-of-the-art NC methods POMO [31], SGBS [8] and TAM-AM [21] as well as the neural improvement approaches NLNS [20], L2D [33], DACT [34] and NeuroLKH [52]. Furthermore, we include the heuristic methods LKH3 [17] and LKH-POP [18; 45], which uses POPMUSIC in combination with LKH3, as well as an implementation of the original RR procedure [42] using random region selection and best insertion [36] as the recreate step (cp. Alg. 1).
**Evaluation protocol** We run all methods for a fixed time budget of T = 60/120/240/480s for problems of size N = 500/1000/2000/4000 respectively. All hyperparameters as well as the used hardware are described in detail in the Appendix. In order to support the reproducibility of our results we will make our datasets and the code for our model and all experiments available with the publication.
**Results** We report the final cost (total length of all tours) in Table 1 and plot the solution trajectories for the Uchoa and real world dataset in Fig. 3 and 4. NRR shows very strong performance, in particular for larger and more difficult instances. It outperforms all NC methods as well as most improvement approaches (L2D, RR and DACT) on all datasets. Only the very complex LKH methods (LKH3, NeuroLKH and LKH-POP) are in some cases able to slightly outperform our method. However, the large standard deviation (STD) for some of these methods on the \(N=4000\) and Uchoa datasets demonstrates that these approaches do not reliably converge to such good results and can often lead to considerably worse performance than NRR. This can be seen in Fig. 4 where LKH-POP achieves a good result with one "lucky" seed while the other runs are far worse, leading to the large STD band displayed. The fluctuations in the mean cost also show that some runs require substantial run time to even find a first solution. In contrast, our method shows reliable performance with consistently low STD and significantly outperforms LKH-POP on all other datasets. Furthermore, many of the
competitive results of baselines were only achieved after vastly exceeding the time budget (marked with "*" in Table 1 and displayed with an "x" marker labeled with the actual run time in the plots). Our comparison of the original NRR algorithm using Savings initialization with the alternative Sweep init shows that our method is also capable of achieving competitive performance even when starting from very bad initial solutions. This can also be seen in Fig. 3 and 4 where _nrr_sweep_ shows a very steep initial decrease in cost while some other methods are not even able to beat the Savings baseline (we want to stress here again that the Savings algorithm is a very simple heuristic method). Moreover, the figures show that the anytime performance of our method is particularly good. To verify this, we compute the area between each solution trajectory and a baseline cost. Taking inspiration from the DIMACS challenge [1] we set this baseline cost to 110% of the cost achieved by the savings algorithm, which is the fundamental method on which our analysis is based. Then we re-normalize the area by this cost to achieve values in [0, 1]. By analogy to the area under curve (AUC) metric we call this measure _Area under Savings curve_ (AUSC) and report the results in Table 2 (the exact calculation is described in the Appendix). As can be seen, our method exhibits very high values of AUSC, significantly outperforming all other methods in all but two cases. Furthermore, in order to investigate the effect of different decisions in our algorithm design, we perform an additional ablation study (because of space constraints we report the corresponding results in the Appendix).
## 6 Conclusion
This paper presents a well motivated new approach to enable NC methods, which were learned on small training instances, to scale to routing problems which are up to 40\(\times\) larger. NRR achieves this by introducing a meta procedure based on the ruin-recreate principle which is able to leverage the learned information of the neural construction model while focusing the improvement on regions with high potential. The comprehensive experiments show that NRR is able to significantly improve the performance and generalization capabilities of NC models, being able to substantially outperform all construction methods and even some advanced improvement approaches.
\begin{table}
\begin{tabular}{l|c c c c c c|c} \hline \hline
**Model** & **N=500** & **N=1000** & **N=2000** & **N=4000** & **Uchoa** & **real world** & **Avg** \\ \hline
**Savings** & 39.7 (0.0) & 72.6 (0.0) & 139.8 (0.0) & 229.0 (0.0) & 107.5 (0.0) & 155.5 (0.0) & 124.0 \\
**POMO (g)** & 45.0 (0.0) & 112.4 (0.0) & 293.7 (0.0) & 575.8 (0.0) & 132.8 (0.0) & 367.0 (0.0) & 254.4 \\
**POMO (s)** & 75.1 (0.4) & 195.7 (0.9) & 454.8 (1.1) & 882.6 (1.9) & 205.0 (0.7) & 523.1 (1.4) & 389.4 \\
**SGBS** & _43.5_ (0.0) & _105.9_ (0.0) & _274.9_ (0.0) & _539.4_ (0.0) & _128.1_ (0.0) & _348.7_ (0.0) & 240.1 \\
**LKH3** & 38.4 (0.1) & 71.3 (0.2) & 141.1 (0.7) & _250.6_ (3.7) & 106.0 (0.3) & _163.3_ (2.5) & 128.5 \\
**NeuroLKH** & **38.3** (0.1) & **71.1** (0.2) & 140.3 (0.5) & _224.2_ (44.7) & 105.9 (0.2) & 158.6 (0.5) & 123.1 \\
**LKH-POP** & 39.1 (0.2) & 73.6 (0.5) & _148.4_ (1.1) & 244.4 (5.7) & **102.1** (1.9) & _168.0_ (1.6) & 129.3 \\
**L2D** & 39.3 (0.3) & 72.4 (0.4) & 141.1 (0.5) & 234.1 (1.0) & 106.9 (0.4) & 156.3 (0.5) & 125.0 \\
**RR** & 39.5 (0.0) & 72.4 (0.0) & 139.5 (0.1) & 228.7 (0.1) & 106.5 (0.1) & 148.6 (0.0) & 122.6 \\
**DACT** & 47.3 (0.0) & 83.6 (0.0) & 158.5 (0.0) & NA & 122.5 (0.0) & 172.3 (0.0) & 139.4 \\ \hline
**NRR**-sweep & 39.0 (0.3) & 72.4 (0.5) & 141.6 (0.7) & 236.4 (0.7) & 104.3 (0.5) & 148.5 (0.3) & 123.7 \\
**NRR** & 38.6 (0.1) & 71.6 (0.1) & **138.9** (0.1) & **228.6** (0.1) & 103.8 (0.2) & **147.5** (0.1) & **121.5** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Final cost of all methods on different datasets. Best result is shown in **bold**. Standard deviation over three runs with different random seeds in brackets. For values marked with “*” the first results were only achieved (long) after the total time budget was exceeded (sometimes by an order of magnitude). The DACT code breaks for \(N=4000\) because of exponentially increasing complexity, which is why we report “NA”. The results for TAM-AM and NLNS are only available for the Uchoa benchmark, thus we only report them in an extended table in the Appendix but show them in Fig. 3.
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline
**Dataset** & **LKH3** & **NeuroLKH** & **LKH-POP** & **L2D** & **RR** & **NRR-swp** & **NRR** \\ \hline
\(N=500\) & 0.0282 & **0.0312** & 0.0064 & 0.0060 & 0.0050 & 0.0131 & 0.0253 \\
\(N=1000\) & 0.0105 & 0.0098 & 0.0 & 0.0003 & 0.0021 & 0.0010 & **0.0122** \\
\(N=2000\) & 0.0 & 0.0 & 0.0 & 0.0 & **0.0408** & 0.0 & 0.0011 \\
\(N=4000\) & 0.0 & 0.0 & 0.0 & 0.0 & 0.0007 & 0.0 & **0.0014** \\
**Uchoa** & 0.0056 & 0.0094 & 0.0033 & 0.0032 & 0.0067 & 0.0254 & **0.0313** \\
**real world** & 0.0 & 0.0 & 0.0 & 0.0 & 0.0415 & 0.0418 & **0.0477** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Area under Savings curve (AUSC). The shown methods achieved at least once a value better than 1.1 \(\times\) the savings cost. All other considered baselines always have an AUSC of 0.0.
## Appendix A
### SG construction heuristics
As described in the main paper, we use the tours \(r\) as sub-structures to simplify the selection of sub-graphs. One straightforward way of selecting such tours is to specify a number \(K\) of nearest tour neighbors (measured by the euclidean distance between their geometric centers), to take each existing tour as a reference, and to add its \(K\) nearest neighbors to form a sub-graph consisting of \(K+1\) tours. The obvious downside to this approach is that, depending on the size of the tours \(|r|\), the size \(N_{g}\) of the resulting sub-graph \(g\) can be much smaller or much larger than \(N_{\text{train}}\) (the size of the instances in the NC training set). While in this case we can still assume that \(N_{g}\ll N\), it can very well lead to worse performance. To circumvent this problem we propose two alternative methods for the construction of \(G\). First, instead of fixing \(K\), we can add neighboring tours sequentially until the SG size is very close to the size of the training instances, i.e. \(|N_{\text{train}}-N_{g}|<\epsilon\). The second approach follows the same concept but is motivated by the sweep construction method, adding tours sequentially to the SG when their center \(\mu_{r}\) is passed by a beam which is rotated around the depot. Then, when \(|N_{\text{train}}-N_{g}|<\epsilon\), the SG is complete and the next tour in sequence is added to a new SG. The direct advantage of this method is that the resulting SGs are completely disjoint. This enables our method to better use the computation capacity of modern GPUs by solving a full batch of SGs via the NC method in parallel. Otherwise, if no disjoint SGs are required, then in order to create a larger and more diverse set of potential SGs we can rotate the beam several times, restarting it at different tour centers and afterwards removing any duplicates.
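A condensed sketch of the sweep-based variant is given below; the data structures (a list of tours, each a list of node indices into a coordinate array) and the simple "fill until full" stopping rule are illustrative simplifications.

```
import numpy as np

def sweep_subgraphs(tours, coords, depot_xy, n_train=100):
    """Group whole tours into disjoint sub-graphs by sweeping a beam around the depot."""
    centers = np.array([coords[t].mean(axis=0) for t in tours])   # geometric center mu_r per tour
    angles = np.arctan2(centers[:, 1] - depot_xy[1], centers[:, 0] - depot_xy[0])
    order = np.argsort(angles)                                    # sweep order around the depot

    subgraphs, current, current_size = [], [], 0
    for idx in order:
        current.append(idx)
        current_size += len(tours[idx])
        if current_size >= n_train:                               # SG is "full", start the next one
            subgraphs.append(current)
            current, current_size = [], 0
    if current:
        subgraphs.append(current)
    return subgraphs                                              # each SG is a list of tour indices
```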
### Ausc
We propose the _Area under Savings curve_ (AUSC) as metric to measure the anytime performance of the compared methods w.r.t. a baseline cost, in our case the cost achieved by the savings algorithm. Since many methods are not even able to beat this very simple heuristic, we set the actual baseline cost to 110% of the savings performance. Then we compute the full area below the savings curve and the solution cost trajectory of each method and normalize it by the total area under the savings curve:
\[\text{AUSC}(c_{\text{m}})=\frac{\int_{0}^{T}c_{\text{savings}}-\int_{0}^{T} \min(c_{\text{m}},c_{\text{savings}})}{\int_{0}^{T}c_{\text{savings}}} \tag{3}\]
where \(T\) is the total time budget for the solution and \(c_{\text{m}}\) is the cost trajectory of the corresponding method **m**, while \(c_{\text{savings}}\) is \(1.1\times\) the constant savings cost. In order to compute the area under curves with discrete time stamps of measurements, we use the composite trapezoidal rule. We visualize the concept in Figure 5.
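Concretely, the metric can be computed from a recorded (time, cost) trajectory as sketched below; how the trajectory is extended to the full budget \(T\) is an assumption on our part.

```
import numpy as np

def ausc(times, costs, savings_cost, horizon):
    """Area under Savings curve (Eq. 3); `savings_cost` is 1.1x the Clarke-Wright cost."""
    t = np.concatenate(([0.0], np.asarray(times, dtype=float), [horizon]))
    c = np.concatenate(([savings_cost], np.asarray(costs, dtype=float), [costs[-1]]))
    c = np.minimum(c, savings_cost)                 # clip the trajectory at the baseline
    area_baseline = savings_cost * horizon          # integral of the constant baseline cost
    area_method = np.trapz(c, t)                    # integral of min(c_m, c_savings) via trapezoidal rule
    return (area_baseline - area_method) / area_baseline
```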
### Dataset generation
We generate datasets with different numbers \(N\in\{500,1000,2000,4000\}\) of customer nodes. The coordinates are sampled based on a mix of clustered and uniform data. For the uniform data we simply sample uniformly from the unit square like prior work [39; 30]. The clustered data is sampled from a Gaussian Mixture Model where the number \(K\) of mixture components is randomly selected between 1 and 10 for each instance. The means of the components are sampled from a standard Normal distribution \(\mu\sim N(0,1)\) and the (diagonal) covariance matrix \(\Sigma\) is sampled uniformly from \([0.05,0.1]\). The customer demands are sampled as random integers between 1 and 9 and normalized by a homogeneous vehicle capacity of \(Q=50\), which is kept constant for all problem sizes. Finally, the fraction of uniformly sampled points compared to clustered points is sampled according to a beta distribution with parameters \(\alpha=0.5\) and \(\beta=9\). The resulting sampling procedure is able to generate diverse problem instances with varying coordinate distributions. In Figure 6 we show some examples of the resulting instances.
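The sampling procedure can be reproduced roughly as follows; details not specified in the text (mixture weights, depot location, the final rescaling to the unit square) are assumptions marked in the comments.

```
import numpy as np

def sample_instance(n, rng=None, capacity=50):
    """Sample one mixed uniform/clustered CVRP instance following the description above."""
    rng = rng or np.random.default_rng()
    frac_uniform = rng.beta(0.5, 9.0)                         # fraction of uniform points
    n_uniform = int(round(frac_uniform * n))
    n_clustered = n - n_uniform

    pts_uniform = rng.uniform(0.0, 1.0, size=(n_uniform, 2))

    k = rng.integers(1, 11)                                   # number of mixture components
    means = rng.normal(0.0, 1.0, size=(k, 2))
    stds = rng.uniform(0.05, 0.1, size=k)
    comp = rng.integers(0, k, size=n_clustered)               # equal mixture weights (assumed)
    pts_clustered = means[comp] + rng.normal(size=(n_clustered, 2)) * stds[comp, None]

    coords = np.vstack([pts_uniform, pts_clustered])
    coords = (coords - coords.min(0)) / np.ptp(coords, axis=0)   # rescale to unit square (assumed)
    demands = rng.integers(1, 10, size=n) / capacity             # integer demands 1..9 over Q=50
    depot = rng.uniform(0.0, 1.0, size=2)                        # depot location (assumed)
    return depot, coords, demands
```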
### Hyperparameters and hardware
**Neural Scoring Function** The embedding dimension \(d_{\text{emb}}\) of our neural scoring function \(f_{\theta}\) and the hidden dimensions of the node encoder and the sub-graph encoder are set to 128, whereas the decoder uses a hidden dimension of 256. The node encoder uses 4 GNN [37] layers while the SG encoder and the decoder use 3 linear feed-forward layers each. For each layer we utilize ReLU [38]
activation functions and additional layer normalization [3]. Between GNN layers we add residual connections. For the decoder we add dropout with a dropout probability of 0.1. As pooling operators for the node encoder we use summation and standard deviation while the SG encoder employs summation and maximum pooling, which are invariant with respect to padding the inputs with zeros in order to create SGs of similar size for batched processing. For training we use the Huber-loss [22] as loss function and the Adam optimizer [27] with an initial learning rate of 0.0005, which is halved every 35 epochs. Moreover, we clip the gradient norm above a value of 0.5. We train our model for 80 epochs with a batchsize of 128. The KNN neighborhood graph \(\mathcal{G}\) which is used for the node neighborhoods \(\mathcal{H}\) is created with \(K=25\).
**Neural Ruin Recreate** For the training of the NC method employed in NRR we use the original hyperparameters reported in [31; 8]. The experiments on uniform data are performed with the checkpoints provided by the authors. For the other experiments we retrain the POMO model (which is also used for the POMO and SGBS baselines) on mixed data of size \(N=100\). NRR is initialized with a savings solution. It uses the sweep-based tour selection for the SG creation (see section A.1) to create disjoint sub-graphs. Then it scores these SGs via \(f_{\theta}\) and selects up to 16 SGs which are fed to the NC method to be reconstructed. As NC method we employ the POMO [31] method with
Figure 5: Visualization of AUSC. The green area between the savings cost and the method solution trajectory is the AUSC value.
Figure 6: Example plots showing the coordinates of generated instances. The red square represents the depot node.
sampling and a sample size of 1024. For acceptance we use simulated annealing (SA) [28] with restarts after 25 iterations without improvement.
All models and baselines were run on a Intel Xeon Gold 6230 CPU (with 8 cores available) and if required a NVIDIA GeForce RTX 3090 GPU. The model and experiment code as well as the used datasets are released via github: [https://github.com/jokofa/NRR](https://github.com/jokofa/NRR).
## Appendix B Ablation Studies
### Neural scoring function
In order to evaluate the effect of our design decisions for the neural scoring function, we report the results of an ablation study in Table 3.
We compare our original model (Huber loss, GraphConv [37], sum and std pooling, and aggregation of the node embeddings over all GNN layers) against variants using MSE loss, graph attention networks (GAT) [48], different pooling operators, and no aggregation. The results show that the Huber loss works much better than the MSE loss and that the simple GNN proposed in [37] significantly outperforms the much more advanced GAT. Although the effect of different pooling operators and of the aggregation is smaller, our configuration shows the best results, in particular in terms of MSE.
### Neural Ruin Recreate (NRR)
A second ablation study is performed to evaluate the effect of different configurations for the NRR procedure. All models are run on the mixed data validation set of size \(N=500\). We compare different construction methods (see section A.1), selection approaches and different solver modes for the NC method, sampling (with the corresponding number of samples) vs. greedy (which always uses a rollout for each customer node, i.e. \(N\)). Furthermore, we compare SA to greedy acceptance of improving moves.
The results in Table 4 show the effect of the chosen sweep construction and disjoint SG selection methods for different numbers of selected SGs and samples. The first row represents our original model configuration. Under the time budget of 60s, our configuration appears to be a good trade-off between the parallelization capacity of the GPU when using more SGs and the solver performance controlled by the number of samples. SA, which also accepts solutions that are slightly worse than the current best, seems to help escape the local optima in which greedy acceptance gets stuck. Other combinations of construction and selection methods, such as sweep construction with greedy or multi selection as well as knn or add_nn (which adds neighboring tours until the SG size is close to a specified value), lead to significantly worse performance.
\begin{table}
\begin{tabular}{l|c c} \hline \hline
**Configuration** & **MAE** & **MSE** \\ \hline
_original_ & 0.1045 & 0.0239 \\
MSE loss & 0.1091 & 0.0263 \\
GAT & 0.1176 & 0.0328 \\
max pool & 0.1056 & 0.0249 \\
mean pool & 0.1053 & 0.0253 \\
sum pool & 0.1048 & 0.0261 \\
no aggr & 0.1050 & 0.0254 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study for neural scoring function model run on the validation set of scored sub-graphs. We report mean absolute error (MAE) and mean squared error (MSE).
\begin{table}
\begin{tabular}{l l l l l l|l|l} \hline \hline
**init** & **construct** & **select** & **n\_mult** & **mode** & **n\_samp** & **accpt** & **cost** \\ \hline
_savings_ & _sweep_ & _disjoint_ & _16_ & _sampl_ & _1024_ & _SA_ & **36.964** \\ \hline
savings & sweep & disjoint & 12 & sampl & 1024 & SA & 37.005 \\
savings & sweep & disjoint & 8 & sampl & 1024 & SA & 37.015 \\
savings & sweep & disjoint & 24 & sampl & 1024 & SA & 37.015 \\
savings & sweep & disjoint & 32 & sampl & 1024 & SA & 36.970 \\
savings & sweep & disjoint & 8 & sampl & 2048 & SA & 37.042 \\
savings & sweep & disjoint & 24 & sampl & 2048 & SA & 37.052 \\
savings & sweep & disjoint & 32 & sampl & 2048 & SA & 36.981 \\
savings & sweep & disjoint & 8 & sampl & 512 & SA & 36.992 \\
savings & sweep & disjoint & 12 & sampl & 512 & SA & 36.997 \\
savings & sweep & disjoint & 16 & sampl & 1024 & greedy & 37.013 \\
savings & sweep & disjoint & 12 & sampl & 1024 & greedy & 36.993 \\
savings & sweep & disjoint & 8 & sampl & 1024 & greedy & 36.993 \\
savings & sweep & disjoint & 8 & sampl & 2048 & greedy & 36.996 \\ \hline
savings & sweep & greedy & 1 & greedy & \(N\) & SA & 37.587 \\
savings & sweep & greedy & 1 & sampl & 512 & SA & 37.462 \\
savings & sweep & multi & 8 & greedy & \(N\) & SA & 37.226 \\
savings & sweep & multi & 8 & sampl & 512 & SA & 37.049 \\
savings & knn & greedy & 1 & greedy & \(N\) & SA & 37.448 \\
savings & knn & greedy & 1 & sampl & 512 & SA & 37.720 \\
savings & knn & multi & 8 & greedy & \(N\) & SA & 37.284 \\
savings & knn & multi & 8 & sampl & 512 & SA & 37.515 \\
savings & add\_nn & greedy & 1 & greedy & \(N\) & SA & 37.338 \\
savings & add\_nn & greedy & 1 & sampl & 512 & SA & 37.312 \\
savings & add\_nn & multi & 8 & greedy & \(N\) & SA & 37.033 \\
savings & add\_nn & multi & 8 & sampl & 512 & SA & 37.342 \\
savings & add\_nn & multi & 16 & greedy & \(N\) & SA & 37.063 \\
savings & add\_nn & multi & 16 & sampl & 128 & SA & 37.338 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study for NRR. We report the final cost, i.e. total length of all routes.
## Appendix C Additional plots
Figure 7: Results of constructive methods POMO [31] (greedy, sampling), SGBS [8] and Clarke-Wright savings [9] on the Uchoa benchmark [47]. Non-binned plots, which show high fluctuation because of the different recurring configurations of the coordinate and demand distributions. Note that for TAM-AM only results for \(N>500\) were reported in [21]. We also report the _best known solution_ (BKS).
Figure 8: Solution trajectories for methods on 50 problem instances of size \(N=500\) from a mixed uniform and clustered distribution. All methods were run for 3 random seeds, displaying the standard deviation bands. Methods which did not produce a first feasible result within the time budget are shown with a “x” marker on the right of the figure, labeled with the actual run time.
Figure 9: Solution trajectories for methods on 50 problem instances of size \(N=1000\) from a mixed uniform and clustered distribution. All methods were run for 3 random seeds, displaying the standard deviation bands. Methods which did not produce a first feasible result within the time budget are shown with a “x” marker on the right of the figure, labeled with the actual run time. |
2309.06184 | Design monolayer iodinenes based on halogen bond and tiling theory | Xenes, two-dimensional (2D) monolayers composed of a single element, with
graphene as a typical representative, have attracted widespread attention. Most
of the previous Xenes, X from group-IIIA to group-VIA elements have bonding
characteristics of covalent bonds. In this work, we for the first time unveil
the pivotal role of a halogen bond, which is a distinctive type of bonding with
interaction strength between that of a covalent bond and a van der Waals
interaction, in 2D group-VIIA monolayers. Combining the ingenious
non-edge-to-edge tiling theory and state-of-the-art ab initio method with refined
local density functional M06-L, we provide a precise and effective bottom-up
construction of 2D iodine monolayer sheets, iodinenes, primarily governed by
halogen bonds, and successfully design a category of stable iodinenes,
encompassing herringbone, Pythagorean, gyrated truncated hexagonal, i.e.
diatomic-kagome, and gyrated hexagonal tiling pattern. These iodinene
structures exhibit a wealth of properties, such as flat bands, nontrivial
topology, and fascinating optical characteristics, offering valuable insights
and guidance for future experimental investigations. Our work not only unveils
the unexplored halogen bonding mechanism in 2D materials but also opens a new
avenue for designing other non-covalent bonding 2D materials. | Kejun Yu, Botao Fu, Runwu Zhang, Da-shuai Ma, Xiao-ping Li, Zhi-Ming Yu, Cheng-Cheng Liu, Yugui Yao | 2023-09-12T12:52:31Z | http://arxiv.org/abs/2309.06184v2 | # Design monolayer iodinenes based on halogen bond and tiling theory
###### Abstract
Xenes, two-dimensional (2D) monolayers composed of a single element, with graphene as a typical representative, have attracted widespread attention. Most of the previous Xenes, with X from group-IIIA to group-VIA elements, have the bonding characteristics of covalent bonds. In this work, we for the first time unveil the pivotal role of a halogen bond, which is a distinctive type of bonding with interaction strength between that of a covalent bond and a van der Waals interaction, in 2D group-VIIA monolayers. Combining the ingenious non-edge-to-edge tiling theory and state-of-the-art ab initio method with refined local density functional M06-L, we provide a precise and effective bottom-up construction of 2D iodine monolayer sheets, iodinenes, primarily governed by halogen bonds, and successfully design a category of stable iodinenes, encompassing herringbone, Pythagorean, gyrated truncated hexagonal, i.e. diatomic-kagome, and gyrated hexagonal tiling patterns. These iodinene structures exhibit a wealth of properties, such as nontrivial topology, flat bands, and fascinating optical characteristics, offering valuable insights and guidance for future experimental investigations. Our work not only unveils the unexplored halogen bonding mechanism in 2D materials but also opens a new avenue for designing other non-covalent bonding 2D materials.
_Introduction. --_ Two-dimensional (2D) materials have been a hot topic since the discovery of graphene. Up to now, an increasing number of Xenes made from group-III to group-VI elements, among the most important members of the 2D materials family, have been predicted and synthesized, e.g., borophene [1; 2], silicene [3], phosphorene [4] and tellurene [5]. Generally, disparate valence electron configurations among diverse main group elements engender markedly distinct structural arrangements, bonding behaviors, and material characteristics within the various Xene families. Yet group-VII Xenes, one of the most intriguing members of the Xene family, have rarely been studied. Undoubtedly, exploring the theoretical foundations of group-VII Xenes will enhance our understanding of 2D materials and generate a more extensive range of practical applications.
Solid halogen crystals exhibit a layered structure in the Cmca space group [6], with stacked planar monolayers. Among the halogens, only iodine maintains its crystalline phase under ambient conditions. The halogen bond (XB, X stands for halogen), akin to hydrogen bonds as a non-covalent interaction [7; 8], plays a crucial role in stabilizing bulk iodine, despite often being overlooked. Recently, few-layer iodine nanosheets have been experimentally obtained from bulk iodine by physical exfoliation [9; 10; 11; 12], but the fine structure of these iodinenes remains unclear. On the other hand, it is noteworthy that XBs have already been successfully utilized to fabricate halogen-bonded organic frameworks (XOFs) [13; 14] with reliable stability in both solid and solution phases. Then the questions arise naturally: could 2D iodinenes be constructed by utilizing halogen bonding interactions, how should they be designed, and what interesting properties would such 2D materials have?
In this work, based on the XBs, we predict a series of monolayer iodinenes as a new category of 2D materials utilizing the tiling theory and first-principle calculations. First of all, through our meticulous DFT calculations, including the calculations of lattice parameters and phonon spectra, we find that M06-L [15] is an appropriate exchange-correlation functional for evaluating the XB and realize that the XB plays a key role in the formation of stable structures of halogen crystals. Herringbone iodinene, as monolayer iodinene exfoliated from bulk iodine, shows dynamic stability resulting from XB networks. Afterward, according to the interaction model of XB and taking into account the diversity of XB networks, we systematically design a series of structures of iodinenes based on halogen-bonded iodine molecules and the tiling theory to map the structure. Finally, we investigate the electronic structures of our designed iodinenes. All of those halogen-bonded iodinenes are semiconductors and exhibit nontrivial band topology, including herringbone, Pythagorean, gyrated truncated hexagonal (GTH), i.e. diatomic-kagome, and gyrated hexagonal (GH) iodinenes. Specifically, herringbone, Pythagorean, GTH/diatomic-kagome, and GH iodinenes are intrinsic Stiefel-Whitney topological insulators with higher-order bulk-edge correspondences, among which Pythagorean, GTH/diatomic-kagome, and GH iodinenes are \(Z_{2}\) topological insulators exhibiting helical edge states with appropriate filling. These iodinenes possess flat bands that result from
XB interactions and their special structures, leading to remarkable absorption coefficients that exceed \(10^{5}\,cm^{-1}\) in the visible region, which reveals their potential for photoelectronic applications.
_The halogen bond and selection of DFT functional._ -- Figure 1(a) depicts the layered structure of bulk iodine. Specifically, each planar layer within the bulk iodine is formed by halogen-bonded iodine molecules. The electrostatic interaction model offers an intuitive image of XBs, as depicted in Fig. 1(b), which displays the electrostatic potential (ESP) map of a monolayer iodine sheet. The unique ESP map is derived from the inherent anisotropic charge density distribution (see Fig. S1(b)) of iodine molecules themselves, where the electron density is accumulated around the equator and depleted on the elongation of the covalent bond. The depleted region is the so-called \(\sigma\)-hole [16], corresponding to the blue tip in Fig. 1(b) with positive ESP. The XB is defined as the attraction between a nucleophilic region and the positive electrostatic region of a \(\sigma\)-hole, analogous to a hydrogen bond [17]. Additionally, weak Van Der Waals (vdW) forces dominate the interaction between layers along the \(\mathbf{c}\) axis in the bulk iodine. This arrangement reveals the potential for exfoliation to yield a freestanding monolayer of iodine, given the significantly greater strength of the XB compared to vdW interactions.
A comparative analysis of various GGA and meta-GGA functionals was conducted to evaluate their computational accuracy in modeling the interactions within bulk iodine. The results indicate that M06-L accurately captures the attractive XB interaction in bulk iodine, whereas SCAN [18] does not exhibit this capability (see Fig. S2). This finding is consistent with the study by George et al. [19]. It is pertinent to highlight that the utilization of PBE [20] and PBEsol [21] functionals for assessing the interaction associated with halogen bonding is deemed inappropriate (as delineated in Fig. S1&S2 and Table S1), despite their application in recent research [22]. To account for long-range dispersive interactions, the DFT-D3 method [23] was employed, resulting in a more accurate geometry configuration for bulk iodine (as demonstrated in Table. S1). Therefore, M06-L+D3 was chosen for the DFT calculations. Further computational details can be found in Supplemental Material (SM) [24].
_Design iodinenes with the tiling theory._ -- Firstly, a monolayer of iodine is exfoliated from bulk iodine, and we call it herringbone iodine since its profile resembles a herringbone pattern. After relaxation, the intramolecular bond length and XB length are 2.75 and 3.60 Å, respectively, with only minor deviations from their bulk counterparts. Additionally, the electrostatic potential (ESP) map of the herringbone iodine is similar to that of the monolayer iodine sheet in the bulk [see Fig. 1(b)]. There is no imaginary frequency in the phonon spectra (Fig. 1(c)) of the freestanding monolayer, revealing its dynamic stability. The XB network plays a crucial role in enabling individual iodine molecules to link with each other to form a planar structure, whereas herringbone iodine would be dynamically unstable if the XB were not included (see Fig. S5).
Considering the crucial role of XB in forming stable bulk iodine and exfoliated monolayer, we now provide a general and effective bottom-up construction of monolayer iodinenes based on XB, with the usage of ingenious non-edge-to-edge tiling theory, to find all the structures as exhaustively as possible. Geometrically, it is evident that herringbone iodine can be achieved by placing iodine atoms at each vertex of a hexagonal herringbone tiling, with the long edges representing the XBs. Therefore, we aim to construct iodinenes by bonding diatomic iodine molecules with XBs and utilizing the tiling theory [25; 26; 27].
The molecular orbital picture of the halogen bonding between two iodine molecules provides a valuable reference for constructing the halogen bonding network under investigation. As shown in Fig. 1(d), each molecule has one \(\sigma_{p_{x}}\) bond, two \(p_{y}\) orbitals (lone pairs) and one \(\sigma^{*}_{p_{x}}\) antibond orbital [28]. The \(s\) orbital forms a core pair and the \(p_{z}\) orbital lies out of the plane, so they are ignored in the formation of the XB. Electrons on the top diiodine can be transferred from a \(p_{y}\) lone pair to the \(\sigma^{*}_{p_{x}}\) antibond orbital on the bottom diiodine, which is the orbital interaction picture of halogen bonding [29]. For the case of herringbone iodine, every molecule acts as both an electron donor and an acceptor, constituting an XB net (red dashed line in Fig. 1(b)). Hence, from the XB interaction picture in herringbone iodine, several essential points can be derived to guide the construction of iodinenes.
Figure 1: (a) 3D crystal structure (the upper panel) of bulk iodine and its monolayer (the lower panel). Black bold and red dashed lines indicate intramolecular covalent bonds and intermolecular halogen bonds (XBs), respectively. (b) Electrostatic potential (ESP) map on the 0.015 a.u. charge density isosurface of monolayer herringbone iodine. The \(\sigma\)-hole occurs at the blue tip on the isosurface with positive electrostatic potential. (c) Phonon spectra of monolayer herringbone iodine. (d) Schematic diagram of the XB between two iodine molecules. The iodine atoms, \(p_{y}\) orbitals (lone pairs), \(\sigma_{p_{x}}\) bond and \(\sigma^{*}_{p_{x}}\) antibond orbital (\(\sigma\)-hole) are depicted as purple balls, green spindles, yellow ellipsoids and blue ellipsoids, respectively. The \(p_{z}\) and \(s\) orbitals are not shown here because \(p_{z}\) sits out of the plane and \(s\) is a core orbital. The red dotted line signifies the presence of the XB, involving the transfer of electrons from a lone pair to a \(\sigma\)-hole through a donor-acceptor interaction. (e) Schematic diagram of the XB angle.
(1) Each diiodine molecule exhibits two \(\sigma\)-holes and two lone pairs within the same plane, enabling each atom to form two connections with other diiodines. Constructing a monolayer iodinene based on the XB network is equivalent to establishing a one-to-one correspondence between \(\sigma\)-holes and lone pairs. (2) The iodinene is expected to be planar, as all \(\sigma\)-holes and lone pairs involved in the XBs are confined to the same plane. (3) Every iodine atom should in principle be considered equivalent to the others; there is no justification for discriminating among them, as the XBs occur uniformly between diiodine molecules.
The tiling theory can help us construct iodinenes systematically. If any two polygons are either disjoint or share one vertex or one entire edge in common, tiling by convex polygons is called edge-to-edge tiling [30; 31]. In contrast, if adjacent tiles share only part of their edge, this is so-called non-edge-to-edge tiling. Some works have predicted structures of 2D covalent materials utilizing the tiling theory [25; 26; 27], more specifically, within the edge-to-edge classification. Regarding iodinenes, there are refined concepts that involve non-edge-to-edge tilings. Firstly, each atom is treated as a vertex, and intramolecular bonds and XBs are represented as edges in the tiling. Secondly, each vertex is connected to three edges, two of which have equal length, which corresponds to the case that each iodine atom connects a covalent bond and two equidistant XBs. Additionally, adjacent edges with equal lengths cannot be collinear, because \(\sigma\)-hole and \(p_{y}\) lone pair on the same atom cannot be in alignment. Lastly, all vertices should be equivalent, meaning they are related by the symmetry of the tiling, known as vertex-transitive or uniform tiling [30; 31]. Based on this analysis, we can identify the required tilings from the existing non-edge-to-edge patterns [32]. The results are presented in Fig. 2.
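To make these mapping rules concrete, the sketch below checks them on a finite patch of candidate atoms and bonds. It is only an illustration: the function name, the tolerance, and the neglect of periodic images and of vertex transitivity are our own assumptions rather than part of the original workflow, and boundary atoms of the patch should be excluded by the caller.

```python
import numpy as np

def obeys_local_bonding_rules(coords, edges, tol=1e-2):
    """Check the local rules for mapping a tiling onto an iodinene candidate:
    every (interior) vertex has three incident edges, some two of them (the
    equidistant XBs) are equal in length, and those equal-length edges are not
    collinear. Vertex transitivity and periodicity are not verified here."""
    coords = np.asarray(coords, dtype=float)
    neighbors = {i: [] for i in range(len(coords))}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    for v, nbrs in neighbors.items():
        if len(nbrs) != 3:
            return False
        vecs = [coords[n] - coords[v] for n in nbrs]
        norms = [np.linalg.norm(u) for u in vecs]
        equal_pairs = [(a, b) for a, b in [(0, 1), (0, 2), (1, 2)]
                       if abs(norms[a] - norms[b]) < tol]
        if not equal_pairs:
            return False
        a, b = equal_pairs[0]
        cos_angle = float(np.dot(vecs[a], vecs[b]) / (norms[a] * norms[b]))
        if np.isclose(cos_angle, -1.0, atol=tol):  # equal-length edges are collinear
            return False
    return True
```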
Five non-edge-to-edge tilings (see Fig. 2(a1)\(\sim\)(a5)) are selected from the existing 91 types of uniform tiling [32] to map the tiling patterns onto iodinene structures. Hence, those iodinenes can be labeled by their tiling patterns. The initial lengths of the short and long edges are set to 2.75 and 3.60 Å, respectively, according to the intramolecular and XB lengths in monolayer herringbone iodinene, and all XB angles (see the sketch map in Fig. 1(e)) are set to \(180^{\circ}\). After structural relaxation, all the initial structures are slightly distorted. Bonding features can be seen from the charge density and electrostatic potential (ESP) maps (Figs. S6 & S7). The herringbone pattern (Fig. 2(b1)) is the same as the foregoing herringbone iodinene. The Pythagorean tiling (Fig. 2(a2)) is tessellated by squares of two different sizes, whose name originates from the _Pythagorean_ theorem, so we call the corresponding structure Pythagorean iodinene (Fig. 2(b2)). The gyrated truncated hexagonal tiling (GTH, see Fig. 2(a3)) is composed of regular hexagons and regular trigons, resembling the diatomic-kagome lattice topologically. As for the gyrated hexagonal tiling (GH, see Fig. 2(a4)), the XB forms the entire edge of the hexagon and part of the edge of the trigon, which is reversed from the case of GTH. The fifth pattern is the truncated distorted trihexagonal tiling (TDT, see Fig. 2(a5)), which is composed of small regular trigons and large distorted truncated trigons. More details such as the symmetry and Wyckoff positions of these iodinenes are shown in Table S2. The phonon spectra are calculated (see Fig. S10), revealing that herringbone, Pythagorean, and GTH/diatomic-kagome iodinenes exhibit no imaginary frequency. GH displays a small imaginary frequency near the \(\Gamma\) point, but it can be eliminated with a mere 4% tensile strain.
To comprehensively explore the potential structures of monolayer iodinenes, we employed additional conventional methods to generate more 2D configurations. (a) We assume some simple lattices, including square, triangle, honeycomb, Lieb, kagome, etc. (see Fig. S19). These iodinenes behave as
Figure 2: (a1-a5) Five non-edge-to-edge tilings for mapping to desired monolayer iodinenes and (b1-b5) corresponding relaxed crystal structures. The nomenclature of those tilings is adopted in a visualized way. herringbone: herringbone tiling; GTH: gyrated truncated hexagonal tiling, which is topologically equivalent to diatomic-kagome structure; GH: gyrated hexagonal tiling; TDT: truncated distorted trihexagonal tiling. In (b1-b5), the covalent bond and halogen bond are depicted by black and red lines respectively. Grey dot lines indicate the unit cell.
conventional covalent 2D crystals in which iodine serves as a polyvalent element; they are highly energetic and dynamically unstable (see Fig. S21), and some of them are even higher in energy than gas-phase diiodine and thus unlikely to form stable freestanding crystals (see Table S2). (b) We manually connect diiodine-based building blocks (see Fig. S8) solely through XBs, without employing the knowledge of the tiling theory. Nine configurations (see Fig. S9) are obtained, and two of them ("T4" and "T4+T4m", as detailed in SM [24]) did not appear above. However, both "T4" and "T4+T4m" exhibit dynamic instability (Fig. S10). This result may be attributed to the violation of the requirement for all iodine atoms to be equivalent. (c) Additionally, a structure search using the particle swarm optimization (PSO) method [33] is conducted (see Fig. S19). The PSO search identifies the emergence of herringbone and Pythagorean iodinenes in our design. However, several low-energy configurations, such as GTH and GH iodinenes, are not detected. This contrast highlights the distinct effectiveness of our design.
The formation energies (\(\Delta\)E, meV/molecule) of all derived configurations are listed in Table S2, with the energy of gas-phase diiodine taken as the reference zero; both the M06-L+D3 and PBE functionals are used. Under the M06-L+D3 scheme, herringbone is the lowest-energy iodinene structure (\(\Delta\)E = -509), followed by Pythagorean (\(\Delta\)E = -457), "T4+T4m", TDT, GH, "T4" and GTH. These seven configurations consist of halogen-bonded iodine molecules and are exactly the lowest-energy ones. Conversely, higher-energy configurations are composed of iodine molecules without XB nets, like ITSS (isosceles triangle snub square) and OR (octagon rhombus), or manifest as covalent crystals (refer to Fig. S20). These outcomes collectively demonstrate the genuine inclination of monolayer iodinenes toward arrangements formed through halogen-bonded iodine molecules, thereby substantiating the value and validity of our devised approach.
_Band topology in monolayer iodinenes.-_ We find that herringbone, Pythagorean, GTH/diatomic-kagome and GH iodinenes are higher-order topological insulators. Since both inversion symmetry and time-reversal symmetry are preserved in these four configurations, their higher-order topology can be characterized by the second Stiefel-Whitney number \(w_{2}\)[34]. Here, two methods are taken into consideration to prove their higher-order band topology: the parity criterion and the Wilson loop method. For the parity criterion, \(w_{2}\) can be calculated by the parity eigenvalues of the valence bands,
\[(-1)^{w_{2}}=\prod_{i=1}^{4}(-1)^{[N_{\text{occ}}^{-}(\Gamma_{i})/2]}, \tag{1}\]
where \(\Gamma_{i}\) are the four time-reversal invariant momenta (TRIMs), \(N_{\text{occ}}^{-}\) denotes the number of valence bands with odd parity and \([\cdots]\) is the floor function. Taking herringbone iodinene as an example, we find that \(N_{\text{occ}}^{-}(\Gamma_{i})\) are 6, 8, 7, and 7 for \(\Gamma\), \(M\), \(X\) and \(Y\), respectively, leading to a nontrivial second Stiefel-Whitney number \(w_{2}=1\). This indicates that the herringbone iodinene is a higher-order topological insulator. The nontrivial \(w_{2}\) can also be evidenced by the Wilson loop method. In Fig. 3(b), we plot the Wilson loop of the herringbone iodinene along the \(k_{11}\) direction. One can find that the Wilson loop has only one crossing point at \(\Theta=\pi\). This again proves the higher-order topology of the herringbone iodinene. Therefore, the herringbone iodinene would host protected 0D corner states. This is confirmed by our numerical calculation. The numerical result is plotted in Fig. 3(c), from which two corner states in the band gap can be clearly observed. The charge density distributions of the corner states in real space are presented in Fig. 3(d). The other three kinds of iodinenes are also identified as second Stiefel-Whitney insulators with nontrivial \(w_{2}\) (see Figs. S16-18 and Table S3).
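Substituting the herringbone parity counts quoted above into Eq. (1) makes the nontrivial result explicit:

\[(-1)^{w_{2}}=(-1)^{[6/2]}(-1)^{[8/2]}(-1)^{[7/2]}(-1)^{[7/2]}=(-1)^{3+4+3+3}=(-1)^{13}=-1,\]

so that \(w_{2}=1\) for herringbone iodinene.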
Spin-orbit coupling (SOC) induced band topology is also studied, considering that iodine possesses a non-negligible SOC strength. For the cases of Pythagorean, GTH/diatomic-kagome, and GH iodinenes, band crossings between the highest valence band and the band below it are all opened by SOC, which induces band inversions (see Fig. 4). The same behavior occurs between the lowest conduction band and the band above it. For spatial inversion invariant systems [35], we can calculate the topological Z\({}_{2}\) invariant by counting the parity of the occupied states, and we find that these iodinenes are 2D topological insulators if the Fermi energy is shifted above the lowest unoccupied band or below the highest occupied band. As shown in Fig. 4(b), the gapless helical edge states connect the bulk states of GTH/diatomic-kagome iodinene, corresponding to the Z\({}_{2}\) values labeled in Fig. 4(a), which originates from the bulk-boundary correspondence. Considering that 2D materials are generally fabricated on substrates, the Fermi level could be shifted by choosing an appropriate substrate or by doping. Hence these iodinenes could serve as potential topological insulators.
_Flat bands and optical absorption.-_ As a noncovalent bond, XB could result in flat band structures in iodinenes because
Figure 3: (a) Band structures of monolayer herringbone iodinene. (b) The Wilson loop along \(k_{11}\) direction. Insets represent the enlarged Wilson loop around \(\Theta=\pi\), indicating the nontrivial higher-order topology with the nonzero second Stiefel-Whitney number. (c) Energy spectra in real space, with the energy of the two corner states highlighted in red. (d) The distribution of the two corner states (depicted in red) in real space.
of the relatively weak interactions between iodine molecules, and these flat bands would bring about some novel properties. The band structures of herringbone iodinene are depicted in Fig. S11(a), where a quasi-direct band gap (1.165 eV) emerges between the conduction band minimum (CBM) at \(\Gamma\) and the valence band maximum (VBM) near \(\Gamma\). Notably, the highest occupied band from \(\Gamma\) to the VBM is almost flat with an energy shift of only about 8 meV (31 meV for HSE06), so it is approximately a direct-gap semiconductor. The in-plane absorption coefficient curve is presented in Fig. S12(a), with two distinct peaks (around 2.04 and 2.74 eV) exceeding \(3\times 10^{5}\,cm^{-1}\) in the visible region. This value is considerably higher than that observed in many other 2D materials and organic perovskite materials (\(10^{4}\sim 10^{5}\,cm^{-1}\)) [36]. As a result, herringbone iodinene is promising for the photoelectric conversion of solar energy. As shown in Fig. 4, Pythagorean, GTH, and GH iodinenes are direct-gap semiconductors with gaps of 1.69, 1.42 and 1.67 eV, respectively. Notably, GTH iodinene possesses a diatomic-kagome lattice leading to two flat bands [37; 38] around -0.27 and 2.12 eV without SOC and one flat band around 2.0 eV with SOC. With an appropriate substrate or heterojunction, these flat bands could provide a promising platform for strong-correlation physics. Besides, the computed in-plane optical absorption coefficients for Pythagorean, GTH, and GH iodinenes all exceed \(10^{5}\,cm^{-1}\) within the visible range (with peaks between \(2\sim 3\) eV, see Fig. S12). These flat bands, resulting from XBs and the lattice geometry, contribute to a high density of states and are beneficial to strong optical absorption, indicating potential applications in photoelectronic devices.
_Conclusion and discussion._ -- An innovative approach to the design of monolayer iodinenes has been put forward, involving the mapping of unique non-edge-to-edge tilings onto iodinene structures that consist of halogen-bonded iodine molecules. The effectiveness of this method has been validated through first-principles calculations, including the identification of four dynamically stable structures: herringbone, Pythagorean, GTH/diatomic-kagome, and GH. These iodinenes are confirmed as higher-order topological insulators with distinguishable corner states. Flat bands also emerge, giving rise to exceptional optical absorption within the visible range. Furthermore, these iodinenes exhibit direct or quasi-direct band gaps, holding potential for optoelectronic applications. Moreover, Pythagorean, GTH/diatomic-kagome, and GH iodinenes showcase nontrivial \(\mathrm{Z}_{2}\) band topology with appropriate doping. Notably, traditional approaches, including PSO searching, yield many structures devoid of halogen bonds. Those structures exhibit dynamic instability and higher energies than iodinenes formed by halogen-bonded iodine molecules. This outcome underscores the preference of monolayer iodinenes for structures built from halogen-bonded iodine molecules, further substantiating the rationality of our design approach and bringing new insight into the structural composition of 2D materials.
Different from other Xenes as covalent crystals, iodinene belongs to a new category of 2D crystals where XBs dominate. As XBs have extensive applications in crystal engineering [39; 40], catalysis [41], supramolecular chemistry [42; 43], biomolecular systems [44], self-assembly [45; 46] and drug design [47], etc. already, monolayer iodinenes including herringbone, Pythagorean, GTH/diatomic-kagome and GH patterns would provide a new platform to explore more innovations.
_Note added._ We recently became aware of an independent work [48], which also studies 2D sheets of iodine.
_Acknowledgements._ -- We would like to thank Jin Cao, Chaoxi Cui, Xiaodong Zhou, Xiuxian Yang, and Liping Liu for their helpful discussions. The work is supported by the National Key R&D Program of China (Grant No. 2020YFA0308800) and the National Natural Science Foundation of China (Grant Nos. 12374055, 12204330).
|
2309.16397 | Uncertainty-Aware Decision Transformer for Stochastic Driving
Environments | Offline Reinforcement Learning (RL) enables policy learning without active
interactions, making it especially appealing for self-driving tasks. Recent
successes of Transformers inspire casting offline RL as sequence modeling,
which, however, fails in stochastic environments with incorrect assumptions
that identical actions can consistently achieve the same goal. In this paper,
we introduce an UNcertainty-awaRE deciSion Transformer (UNREST) for planning in
stochastic driving environments without introducing additional transition or
complex generative models. Specifically, UNREST estimates uncertainties by
conditional mutual information between transitions and returns. Discovering
'uncertainty accumulation' and 'temporal locality' properties of driving
environments, we replace the global returns in decision transformers with
truncated returns less affected by environments to learn from actual outcomes
of actions rather than environment transitions. We also dynamically evaluate
uncertainty at inference for cautious planning. Extensive experiments
demonstrate UNREST's superior performance in various driving scenarios and the
power of our uncertainty estimation strategy. | Zenan Li, Fan Nie, Qiao Sun, Fang Da, Hang Zhao | 2023-09-28T12:44:51Z | http://arxiv.org/abs/2309.16397v3 | # Uncertainty-Aware Decision Transformer for Stochastic Driving Environments
###### Abstract
Offline Reinforcement Learning (RL) has emerged as a promising framework for learning policies without active interactions, making it especially appealing for autonomous driving tasks. Recent successes of Transformers inspire casting offline RL as sequence modeling, which performs well in long-horizon tasks. However, these methods are overly optimistic in stochastic environments with incorrect assumptions that the same goal can be consistently achieved by identical actions. In this paper, we introduce an **UN**certainty-awa**RE** deci**S**ion **T**ransformer (UNREST) for planning in stochastic driving environments without introducing additional transition or complex generative models. Specifically, UNREST estimates state uncertainties by the conditional mutual information between transitions and returns, and segments sequences accordingly. Discovering the 'uncertainty accumulation' and 'temporal locality' properties of driving environments, UNREST replaces the global returns in decision transformers with less uncertain truncated returns, to learn from true outcomes of agent actions rather than environment transitions. We also dynamically evaluate environmental uncertainty during inference for cautious planning. Extensive experimental results demonstrate UNREST's superior performance in various driving scenarios and the power of our uncertainty estimation strategy.
## 1 Introduction
Safe and efficient motion planning has been recognized as a crucial component and the bottleneck in autonomous driving systems (Yurtsever et al., 2020). Nowadays, learning-based planning algorithms like imitation learning (IL) (Bansal et al., 2018; Zeng et al., 2019) and reinforcement learning (RL) (Chen et al., 2019; Chen et al., 2020) have gained prominence with the advent of intelligent simulators (Dosovitskiy et al., 2017; Sun et al., 2022) and large-scale datasets (Caesar et al., 2021). Building on these, offline RL (Diehl et al., 2021; Li et al., 2022) becomes a promising framework for safety-critical driving tasks to learn policies from offline data while retaining the ability to leverage and improve over data of various quality (Fujimoto et al., 2019; Kumar et al., 2020).
Nevertheless, the application of offline RL approaches still faces practical challenges. Specifically: (1) The driving task requires conducting _long-horizon planning_ to avoid shortsighted erroneous decisions (Zhang et al., 2022); (2) The _stochasticity_ of environmental objects during driving also demands real-time responses to their actions (Diehl et al., 2021; Villaflor et al., 2022).
The recent success of the Transformer architecture (Vaswani et al., 2017; Brown et al., 2020; Dosovitskiy et al., 2020) has inspired researchers to reformulate offline RL as a sequence modeling problem (Chen et al., 2021), which naturally considers outcomes of multi-step decision-making and has demonstrated efficacy in long-horizon tasks. Typically, they train models to predict an action based on the current state and an outcome such as a desired future return. However, existing works (Brandfonbrener et al., 2022; Paster et al., 2022; Yang et al., 2022) have pointed out that these decision transformers (DTs) tend to be overly optimistic in stochastic environments because they incorrectly assume that actions, which successfully achieve a goal once, can consistently do so in
subsequent attempts. This assumption is easily broken in stochastic environments, as the goal can be achieved by accidental environment transitions. For example, in Fig. 1(a), identical turning actions may yield entirely different outcomes w.r.t. the behavior of the surrounding vehicle.
The key to overcoming the problem is to distinguish between outcomes of agent actions and environment transitions, and train models to pursue goals that are not affected by environmental stochasticity. To the best of our knowledge, only a handful of works (Villaflor et al., 2022; Paster et al., 2022; Yang et al., 2022) have attempted to solve the problem. Generally, they fit a state transition model, either for pessimistic planning (Villaflor et al., 2022) through sampling from VAEs (Zhang et al., 2018), or to disentangle the conditioning goal from environmental stochasticity (Paster et al., 2022; Yang et al., 2022) by adversarial training (Shafahi et al., 2019). While introducing additional complexity, these methods are only applicable when a highly representative environment model can be learned, which is often not the case for driving tasks with complex interactions (Sun et al., 2022). Furthermore, the final driving trajectory encompasses stochastic impact over an excessive number of timesteps. This dilutes the information related to current-timestep decision-making in the outcome return and poses challenges for VAE and adversarial training (Wu et al., 2017; Burgess et al., 2018).
In this paper, we take an initial step to customize DTs for stochastic driving environments without introducing transition or complex generative models. Specifically, our insight comes from Fig. 1(a): During straight driving, the _global return_ (Task I+II) contains too much stochastic influence to provide effective supervision. However, we can utilize the _truncated return_ solely from Task I as a condition, which mitigates the impact of environmental stochasticity (fewer stochastic timesteps, lower return variance) and considers rewards over a sufficient number of timesteps for optimizing current actions. Additional experimental results further validate this point, and we summarize the following properties of driving environments, which may also hold in other stochastic environments.
**Property 1** (Uncertainty Accumulation): _The impact of environmental stochasticity on the return distribution accumulates while considering more timesteps, as validated in Fig. 1(b)._
**Property 2** (Temporal Locality): _Driving can be divided into independent tasks, where we only need to focus on the current task without considering those much later. Hence, optimizing future returns over a sufficiently long horizon approximates global return optimization, as shown in Fig. 1(c)._
Based on these, the remaining problem is how to set the span of the truncated return so that it minimizes the impact of environmental stochasticity. Specifically, our proposed **UN**certainty-awa**RE** deci**S**ion **T**ransformer (UNREST) quantifies the impacts of transition uncertainties by the conditional mutual information between transitions and returns, and segments sequences into _certain and uncertain parts_ accordingly. With the minimal impact of stochasticity in 'certain parts', we set the conditioning goal as the cumulative reward to the segmented position (with the number of timesteps), which can reflect the true outcomes of action selection and be generalized to attain higher rewards. In contrast, in 'uncertain parts' where the environment is highly stochastic, we disregard the erroneous information from returns (with dummy tokens as conditions) and let UNREST imitate expert actions. Dynamic uncertainty evaluation is introduced during inference for cautious planning. **The highlights are:**
* We present UNREST, an uncertainty-aware sequential decision framework to apply offline RL in long-horizon and stochastic driving environments. The code will be public when published.
* Recognizing the properties of driving environments, we propose a novel model-free environmental uncertainty measurement and segment sequences accordingly. Based on these, UNREST bypasses
Figure 1: Motivations of UNREST. (a): An example driving scenario where the return uncertainty increases when accounting for multiple tasks. (b): Calibration results of return distribution over future 1,000 steps are obviously more uncertain than that over future 100 steps. (c): Rollout return and distance curves are close for policies that maximize the return of future 100, 500, and 1,000 steps.
challenging generative training in previous works, by replacing global returns in DTs with less uncertain truncated returns (or dummy tokens) to learn from the true outcomes of agent actions.
* We extensively evaluate UNREST on CARLA (Dosovitskiy et al., 2017), where it consistently outperforms strong baselines (5.2% and 6.5% driving score improvement in seen and unseen scenarios). Additional experiments also prove the efficacy of our uncertainty estimation strategy.
## 2 Related Works
In this section, we review works about sequence modeling algorithms in RL and uncertainty estimation strategies as the foundation of our work and highlight the differences and contributions of UNREST. More related works on vehicle planning can be found in App. A.1.
**Offline RL as Sequence Modeling:** Despite the potential to learn directly from offline data and the prospects of a higher performance ceiling (than IL) (Fujimoto et al., 2019), the long-horizon and stochastic nature of driving environments still undermine the efficacy of current offline RL applications in autonomous driving tasks (Diehl et al., 2021; Shi et al., 2021; Li et al., 2022). Encouraged by the success of Transformer (Vaswani et al., 2017) in sequence modeling tasks (Brown et al., 2020; OpenAI, 2023), a recent line of work (Chen et al., 2021; Janner et al., 2021; Furuta et al., 2022; Lee et al., 2022; Hu et al., 2023) adapts Transformer into RL by viewing offline RL as a sequence modeling problem. Typically, Decision Transformer (DT) (Chen et al., 2021) predicts actions by feeding target returns and history sequences, while Trajectory Transformer (TT) (Janner et al., 2021) further exploits the capacity of Transformer through jointly predicting states, actions, and rewards and planning by beam searching. The theoretical basis of these models is revealed in Generalized DT (Furuta et al., 2022): they are trained conditioned on hindsight (e.g. future returns) to reach desired goals. DTs naturally consider outcomes of multi-step decision-making and have demonstrated efficacy in long-horizon tasks (Chen et al., 2021).
However, recent works (Brandfonbrener et al., 2022; Paster et al., 2022; Yang et al., 2022) have pointed out the fundamental problem with this training mechanism. Specifically, in stochastic environments like autonomous driving, certain outcomes can be achieved by accidental environment transitions and hence cannot provide effective action supervision. Notably, this is in fact a more general problem affecting all goal-conditioned behavior cloning and hindsight relabeling algorithms (Eysenbach et al., 2022; Strupl et al., 2022; Paster et al., 2020), but in this work we specifically focus on solutions for DTs. To tackle this issue, ESPER (Paster et al., 2022) adopts adversarial training (Shafahi et al., 2019) to learn returns disentangled from environmental stochasticity as a condition. Similarly, DoC (Yang et al., 2022) utilizes variational inference (Zhang et al., 2018) to learn a latent representation of the trajectory, which simultaneously minimizes the mutual information with environment transitions (so it is disentangled) to serve as the conditioning target. Besides, SPLT (Villaflor et al., 2022) leverages a conditional VAE (Zhang et al., 2018) to model the stochastic environment transitions. As the driving environment contains various interactions that make it difficult to model (Sun et al., 2022), and the long driving trajectory hinders generative training, our study proposes a novel uncertainty estimation strategy to customize DTs without transition or generative models. Besides, different from EDT (Wu et al., 2023), which dynamically adjusts history length to improve stitching ability, UNREST focuses on segmenting sequences and replacing conditions to address the overly optimistic problem.
**Uncertainty Estimation:** Uncertainty estimation plays a pivotal role in reliable AI. One typical method for uncertainty estimation is probabilistic bayesian approximation, either in the form of dropout (Gal and Ghahramani, 2016) or conditional VAEs (Blundell et al., 2015; Dusenberry et al., 2020), which computes the posterior distribution of model parameters. On the contrary, the deep deterministic methods propose to estimate uncertainty by exploiting the implicit feature density (Franchi et al., 2022). Besides, deep ensembles (Lakshminarayanan et al., 2017) train \(K\) neural networks simultaneously, with each module being trained on a different subset of the data, and use the variance of the network outputs as a measure of uncertainty. In this work, we utilize the ensemble approach, which has been widely used in the literature for trajectory optimization (Vlastelica et al., 2021), online and offline RL (Wu et al., 2021; An et al., 2021), to jointly train \(K\) variance networks (Kendall and Gal, 2017) for estimating the uncertainty of returns.
## 3 Preliminary
We introduce preliminary knowledge for UNREST in this section. To keep notations concise, we use subscripts \(t\) or numbers for variables at specific timesteps, Greek letter subscripts for parameterized variables, and bold symbols to denote variables spanning multiple timesteps.
### Online and Offline Reinforcement Learning
We consider learning in a Markov decision process (MDP) (Puterman, 2014) denoted by the tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},P,r,\gamma)\), where \(\mathcal{S}\) and \(\mathcal{A}\) are the state space and the action space, respectively. Given states \(s,s^{\prime}\in\mathcal{S}\) and action \(a\in\mathcal{A}\), \(P(s^{\prime}|s,a):\mathcal{S}\times\mathcal{A}\times\mathcal{S}\to[0,1]\) is the transition probability distribution and \(r(s,a):\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) defines the reward function. Besides, \(\gamma\in(0,1]\) is the discount factor. The agent takes action \(a\) at state \(s\) according to its policy \(\pi(a|s):\mathcal{S}\times\mathcal{A}\to[0,1]\). At timestep \(t\in[1,T]\), the accumulative discounted reward in the future, named reward-to-go (i.e. return), is \(R_{t}=\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}r_{t^{\prime}}\). The goal of _online RL_ is to find a policy \(\pi\) that maximizes the total expected return: \(J=\mathbb{E}_{a_{t}\sim\pi(\cdot|s_{t}),s_{t+1}\sim P(\cdot|s_{t},a_{t})}\big{[}\sum_{t=1}^{T}\gamma^{t-1}r(s_{t},a_{t})\big{]}\) by learning from the transitions \((s,a,r,s^{\prime})\) through interacting with the real environment. In contrast, _offline RL_ makes use of a static dataset with \(N\) trajectories \(\mathcal{D}=\{\mathbf{\tau}_{i}\}_{i=1}^{N}\) collected by a certain behavior policy \(\pi_{b}\) to learn a policy \(\pi\) that maximizes \(J\), thereby avoiding safety issues during online interaction. Here \(\mathbf{\tau}=\big{\{}(s_{t},a_{t},r_{t},s^{\prime}_{t})\big{\}}_{t=1}^{T}\) is a collected interaction trajectory composed of transitions with horizon \(T\).
### Offline Reinforcement Learning as Sequence Modeling
Following DTs (Chen et al., 2021), we pose offline RL as a sequence modeling problem where we model the probability of the sequence token \(x_{t}\) conditioned on all tokens prior to it: \(p_{\theta}(x_{t}|\mathbf{x}_{<t})\), where \(\mathbf{x}_{<t}\) denotes tokens from step 1 to \((t-1)\), like the GPT (Brown et al., 2020) architecture. DTs consider a return-conditioned policy learning setting where the agent at step \(t\) receives an environment state \(s_{t}\), and chooses an action \(a_{t}\) conditioned on the future return \(R_{t}=\sum_{t^{\prime}=t}^{T}r_{t^{\prime}}\). This leads to the following trajectory representation:
\[\mathbf{\tau}=(R_{1},s_{1},a_{1},R_{2},s_{2},a_{2},...,R_{T},s_{T},a_{T}), \tag{1}\]
with the objective to minimize the action prediction loss, i.e. maximize the action log-likelihood:
\[\mathcal{L}_{\text{DT}}(\theta)=\mathbb{E}_{\mathbf{\tau}\sim\mathcal{D}}\big{[}- \sum_{t=1}^{T}\log p_{\theta}(a_{t}|\mathbf{\tau}_{<t},R_{t},s_{t})\big{]}. \tag{2}\]
This supervised learning objective is the cause for DTs' limitations in stochastic environments, as it over-optimistically assumes actions in the sequence can reliably achieve the corresponding returns. At inference time, given a prescribed high target return, DTs generate actions autoregressively while receiving new states and rewards to update the history trajectory.
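For concreteness, the minimal PyTorch-style sketch below computes the returns-to-go of Eq. (1) and the conditional action negative log-likelihood of Eq. (2) for a single trajectory; the `policy` interface and helper names are illustrative placeholders rather than any specific published implementation.

```python
import torch

def returns_to_go(rewards, gamma=1.0):
    """R_t = sum_{t' >= t} gamma^{t'-t} r_{t'}; Eq. (1) uses the undiscounted case."""
    rtg = torch.zeros_like(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = float(rewards[t]) + gamma * running
        rtg[t] = running
    return rtg

def dt_loss(policy, states, actions, rewards):
    """Eq. (2): negative log-likelihood of each action given (R_t, s_t) and the past."""
    rtg = returns_to_go(rewards)
    # `policy` is assumed to return a torch.distributions object over actions
    # for every timestep, given the interleaved (R, s, a) context
    action_dist = policy(rtg, states, actions)
    return -action_dist.log_prob(actions).sum(dim=-1).mean()
```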
## 4 Approach: UNREST
In this section, we begin with an overview of UNREST's insight and architecture, followed by detailed explanations of the composition modules used in the approach.
### Model Overview
An overview of the proposed approach UNREST is illustrated in Fig. 2. Specifically, to address the overly optimistic issue, our key idea is to quantify the impact of environmental uncertainty, and learn to perform aggressively or cautiously at states with different levels of environmental impacts.
To achieve this, we first train _return transformers_ with different trajectory inputs to identify the impact of environmental uncertainty. The expert sequences are then segmented into certain and uncertain parts w.r.t. estimated uncertainties, each with relabeled conditioning goals to facilitate the _decision transformer_ to learn from outcomes of agent actions rather than environment transitions. At test time, an _uncertainty predictor_ is introduced to guide decision-making at different states. In the following, we introduce each module with more details.
### Return Transformers for Uncertainty Estimation
Instead of conventional uncertainties that reflect variances of distributions (Wu et al., 2021; Kendall and Gal, 2017), in this paper we estimate the impact of environmental stochasticity, what we really care about for policy learning, as an indirect measure of environmental uncertainty. Specifically, we propose to model the impact of the transition \((s_{t-1},a_{t-1}\to s_{t})\) on return \(R_{t}\) through their conditional mutual information (Seitzer et al., 2021; Duong and Nguyen, 2022), i.e. the divergence between distributions \(P(R_{t}|\mathbf{\tau}_{<t}^{\text{ret}})\) and \(P(R_{t}|\mathbf{\tau}_{<t}^{\text{ret}},s_{t})\), where \(\mathbf{\tau}^{\text{ret}}=\{(s_{t},a_{t})\}_{t=1}^{T}\).
Specifically, we train two 'return transformers' to approximate future return distributions, in which states and actions are first embedded by a linear layer \(f_{\psi}(\cdot)\), then fed into the transformer:
\[\begin{split} x_{s_{t}}=f_{\psi}^{s}(s_{t}),\quad x_{a_{t}}=f_{\psi}^{a}(a_{t}),\\...,\tilde{x}_{s_{t-1}},\tilde{x}_{a_{t-1}},\tilde{x}_{s_{t}},\tilde{x}_{a_{t}}=&\text{Transformer}(...,x_{s_{t-1}},x_{a_{t-1}},x_{s_{t}},x_{a_{t}}).\end{split} \tag{3}\]
Afterward, two models feed their respective outputs into variance networks (Kendall & Gal, 2017) to predict Gaussian distributions \(\mathcal{N}(\cdot)\) of returns, which are optimized by maximizing log-likelihoods:
\[\begin{split} p_{\varphi_{a}}(R_{t}|\tau_{<t}^{\text{ret}})=\mathcal{N}\big{(}\mu_{\varphi_{a}}(\tilde{x}_{a_{t-1}}),\sigma_{\varphi_{a}}(\tilde{x}_{a_{t-1}})\big{)},\,\mathcal{L}_{\text{return}}(\varphi_{a})=\mathbb{E}_{\tau\sim\mathcal{D}}\big{[}-\sum_{t=1}^{T}\log p_{\varphi_{a}}(R_{t}|\tau_{<t}^{\text{ret}})\big{]},\\ p_{\varphi_{s}}(R_{t}|\tau_{<t}^{\text{ret}},s_{t})=\mathcal{N}\big{(}\mu_{\varphi_{s}}(\tilde{x}_{s_{t}}),\sigma_{\varphi_{s}}(\tilde{x}_{s_{t}})\big{)},\,\,\mathcal{L}_{\text{return}}(\varphi_{s})=\mathbb{E}_{\tau\sim\mathcal{D}}\big{[}-\sum_{t=1}^{T}\log p_{\varphi_{s}}(R_{t}|\tau_{<t}^{\text{ret}},s_{t})\big{]}.\end{split} \tag{4}\]
Practically, the networks are implemented as ensembles, which together form Gaussian Mixture Models (GMM) (Mai et al., 2022) to better capture the return distribution as in App. A.2. Finally, the impact of the stochastic environmental transition (i.e. the conditional mutual information) is calculated through the Kullback-Leibler (KL) divergence (Joyce, 2011) between these two distributions, as a means to measure the environmental uncertainty at timestep \(t\):
\[u_{t}=D_{\text{KL}}\big{(}p_{\varphi_{a}}(R_{t}|\tau_{<t}^{\text{ret}}),\,p_{\varphi_{s}}(R_{t}|\tau_{<t}^{\text{ret}},s_{t})\big{)}, \tag{5}\]
where a larger divergence implies more influence on the return from the transition \((s_{t-1},a_{t-1}\to s_{t})\).
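For intuition, the sketch below evaluates this uncertainty in the simplest single-Gaussian case (one ensemble member per return transformer), using the closed-form KL divergence between two univariate Gaussians; with the GMM ensembles used in practice the divergence would instead be approximated, and the model interfaces shown here are assumptions.

```python
import torch

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    """KL( N(mu_p, sigma_p^2) || N(mu_q, sigma_q^2) ), computed elementwise."""
    return (torch.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * sigma_q ** 2)
            - 0.5)

def transition_uncertainty(ret_model_a, ret_model_s, history, state):
    """u_t of Eq. (5): divergence between the return distributions predicted
    without and with the newly observed state s_t."""
    mu_a, sigma_a = ret_model_a(history)         # parameters of p(R_t | tau_<t)
    mu_s, sigma_s = ret_model_s(history, state)  # parameters of p(R_t | tau_<t, s_t)
    return gaussian_kl(mu_a, sigma_a, mu_s, sigma_s)
```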
### Decision Transformer Trained on Segmented Sequences
In this section, we turn to adapting DT training to stochastic driving environments. As discussed in Sec. 4.1, we expect UNREST to segment sequences into certain and uncertain parts according to uncertainty estimations and learn to perform aggressively or cautiously within them, respectively. To achieve this, we propose to replace the conditioning global returns with truncated returns in 'certain parts', which are less affected by the environment due to uncertainty accumulation (Prop. 1), thus reliably helping the planner generalize to higher returns after training. Otherwise, in 'uncertain parts', seemingly high-return actions may easily cause safety issues due to environmental uncertainty, thus we simply ignore the stochastic returns, setting conditions as dummy tokens for behavior cloning.
**Segmentation strategy** is therefore a crucial point to reinvent DTs. To distinguish between different levels of environmental stochasticity, we define an uncertainty threshold \(\epsilon\). Next, we estimate uncertainties \(u_{t}\) by Eq. 5 for each timestep and record those larger than \(\epsilon\) as uncertain. The sequence is then divided into certain and uncertain parts according to these marked uncertain timesteps.
Figure 2: Overview of UNREST. **Lower:** Two return prediction transformers are trained for uncertainty estimation. The sequence is then segmented into certain (no background) and uncertain (orange background) parts w.r.t. estimated uncertainties, with ‘certain parts’ conditioned on returns to the next segmentation positions, and dummy tokens in ‘uncertain parts’. **Upper:** The same architecture as DTs is used for action prediction, except that we add a return-span embedding on the truncated return embedding, and concatenate the discretized global return embedding on the transformer output.
An intuitive example of our segmentation strategy is illustrated in the lower part of Fig. 2. Specifically, we define the 'certain part' to begin with a timestep with uncertainty lower than \(\epsilon\), and continue until being segmented at an uncertain timestep. Afterward, the 'uncertain part' starts with the newly encountered uncertain timestep and continues to include subsequent timesteps until the last \(c-1\) timesteps are all identified as certain. Since uncertain timesteps often occur intensively over a particular duration during driving (e.g. at an intersection), the hyperparameter \(c\) is introduced to ensure a minimum length of segmented sequence and avoid frequent switching between two parts of sequences. Finally, we represent the segmented sequence as:
\[\mathbf{\tau}^{\text{seg}}=(h_{1},R_{1}^{h},s_{1},a_{1},h_{2},R_{2}^{h},s_{2},a_{2 },...,h_{T},R_{T}^{h},s_{T},a_{T}), \tag{6}\]
where the conditioning return is modified as \(R_{t}^{h}=\sum_{k=0}^{h_{t}-1}r_{t+k}\), which only considers rewards in next \(h_{t}\) steps (called the return-span). In 'uncertain parts', \(h\) is set as empty for meaningless conditions (i.e. the dummy tokens), leading UNREST to ignore the conditioned targets and cautiously imitate expert actions, instead of being misguided by the uncertain return information. In 'certain parts', the return-span \(h\) is set to the number of timesteps to the next segmentation step, so that the conditioning return \(R^{h}\) takes into account the maximum duration that does not include any uncertain timesteps.
**Proposition 1** (UNREST Alignment Bound): _Assuming that the rewards obtained are determined by transitions \((s,a\to s^{\prime})\) at each timestep and UNREST is perfectly trained to fit the expert demonstrations, then the discrepancy between target truncated returns and UNREST's rollout returns is bounded by a factor of environment determinism and data coverage._
The above proposition reveals that under natural assumptions (transition-reward correspondence) in driving environments, UNREST can generalize to achieve (bounded) high returns at states with minimal environmental stochasticity if expert demonstrations cover corresponding actions. This supports our segmentation and condition design. The proof of the proposition is left to App. B.
Notably, the accounted timestep number is variable for truncated returns at different states, which necessitates the return-span as a condition to provide information about the number of timesteps. This enables UNREST to learn to achieve return \(R_{t}^{h}\) over future \(h_{t}\) timesteps. Otherwise, the model may be confused by the substantial differences in the magnitude of return conditions (with varying timestep lengths) at similar states.
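The following pure-Python sketch illustrates the segmentation and relabeling of a single trajectory described above; the exact boundary handling and variable names are our own assumptions for exposition, with `None` standing in for the dummy condition token.

```python
def segment_and_relabel(rewards, uncertainties, eps, c):
    """Split one trajectory into certain/uncertain parts and relabel conditions:
    inside certain parts, the return-span h_t counts the steps to the next
    segmentation position and R_t^h sums the rewards over that span (cf. Eq. 6)."""
    T = len(rewards)

    # 1) label every timestep
    labels, mode, certain_streak = [], 'certain', 0
    for t in range(T):
        if mode == 'certain' and uncertainties[t] > eps:
            mode, certain_streak = 'uncertain', 0
        labels.append(mode)
        if mode == 'uncertain':
            certain_streak = certain_streak + 1 if uncertainties[t] <= eps else 0
            if certain_streak >= c - 1:          # last c-1 steps were all certain
                mode, certain_streak = 'certain', 0

    # 2) return-span and truncated return inside certain parts, dummies elsewhere
    spans, trunc_returns = [], []
    for t in range(T):
        if labels[t] == 'uncertain':
            spans.append(None)
            trunc_returns.append(None)
        else:
            end = t + 1
            while end < T and labels[end] == 'certain':
                end += 1
            spans.append(end - t)
            trunc_returns.append(sum(rewards[t:end]))
    return labels, spans, trunc_returns
```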
**Policy formulation:** Unlike return transformers, DT takes the segmented sequence \(\mathbf{\tau}^{\text{seg}}\) as input:
\[\begin{split}& x_{R_{t}^{h}}=f_{\theta}^{R^{h}}(R_{t}^{h})+f_{\theta}^{h}(h_{t}),\quad x_{s_{t}}=f_{\theta}^{s}(s_{t}),\quad x_{a_{t}}=f_{\theta}^{a}(a_{t}),\\ &...,\tilde{x}_{R_{t-1}^{h}},\tilde{x}_{s_{t-1}},\tilde{x}_{a_{t-1}},\tilde{x}_{R_{t}^{h}},\tilde{x}_{s_{t}},\tilde{x}_{a_{t}}=\text{Transformer}(...,x_{R_{t-1}^{h}},x_{s_{t-1}},x_{a_{t-1}},x_{R_{t}^{h}},x_{s_{t}},x_{a_{t}}),\end{split} \tag{7}\]
where a _return-span embedding_ \(f_{\theta}^{h}(h_{t})\) is added to the return embedding to provide information about return-spans, like the drop-span embedding in (Hu et al., 2023). Besides, to get the action distribution, the non-truncated _global-return embedding_ \(f^{R}(R_{t})\) is optionally concatenated with the output \(\tilde{x}_{s_{t}}\) of the Transformer to provide additional longer-horizon guidance for planning. Using \([\cdot||\cdot]\) to denote the concatenation of two vectors along the last dimension, the final predicted action distribution is:
\[\pi_{\theta}(a_{t}|\mathbf{\tau}^{\text{seg}}_{<t},h_{t},R_{t},R_{t}^{h},s_{t})= \mathcal{N}\big{(}\mu_{\theta}([\tilde{x}_{s_{t}}\;||\;f^{R}(R_{t})]),\sigma_ {\theta}([\tilde{x}_{s_{t}}\;||\;f^{R}(R_{t})])\big{)}. \tag{8}\]
The learning objective can be directly modified from Eq. 2 in the original DT, and UNREST's training process is summarized in App. C:
\[\mathcal{L}_{\text{UNREST}}(\theta)=\mathbb{E}_{\mathbf{\tau}^{\text{seg}}\sim \mathcal{D}}\big{[}-\sum_{t=1}^{T}\log\pi_{\theta}(a_{t}|\mathbf{\tau}^{\text{seg }}_{<t},h_{t},R_{t},R_{t}^{h},s_{t})\big{]}. \tag{9}\]
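To show how Eqs. (7)-(9) differ from a plain DT head, the sketch below adds a return-span embedding to the truncated-return embedding and concatenates a discretized global-return embedding with the transformer output before the Gaussian action head. All dimensions, the binning of \(R_{t}\), and the module names are placeholders rather than the exact architecture used here.

```python
import torch
import torch.nn as nn

class UnrestActionHead(nn.Module):
    """Illustrative UNREST-style conditioning and action head (cf. Eqs. 7-9)."""
    def __init__(self, d_model, act_dim, max_span, n_return_bins):
        super().__init__()
        self.trunc_return_emb = nn.Linear(1, d_model)                  # f^{R^h}
        self.span_emb = nn.Embedding(max_span + 1, d_model)            # f^h, index 0 = dummy token
        self.global_return_emb = nn.Embedding(n_return_bins, d_model)  # f^R (discretized R_t)
        self.mu = nn.Linear(2 * d_model, act_dim)
        self.log_sigma = nn.Linear(2 * d_model, act_dim)

    def return_token(self, trunc_return, span):
        # x_{R_t^h}: truncated-return embedding plus return-span embedding (Eq. 7)
        return self.trunc_return_emb(trunc_return.unsqueeze(-1)) + self.span_emb(span)

    def action_distribution(self, x_state, global_return_bin):
        # concatenate the transformer output at s_t with the global-return embedding (Eq. 8)
        z = torch.cat([x_state, self.global_return_emb(global_return_bin)], dim=-1)
        return torch.distributions.Normal(self.mu(z), self.log_sigma(z).exp())

# Eq. (9): loss = -head.action_distribution(x_state, r_bin).log_prob(a).sum(-1).mean()
```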
### Uncertainty-guided Planning
To account for environmental stochasticity at inference time, we introduce a lightweight uncertainty prediction model \(u_{\theta}(\cdot)\) to predict environmental stochasticity in real-time. For instance, it can be implemented as a neural network or just a heuristic value like that in Tab. 8 and Tab. 9. Practically, we choose the KD-Tree (Redmond & Heneghan, 2007) for its high computational efficiency and favorable estimation performance, with states as tree nodes and uncertainties (i.e. impacts of environmental stochasticity) estimated by return transformers as node values. At each timestep we will query the predictor and if the current state transition is highly uncertain (e.g., encountering a traffic light), we set
the conditioning target to a dummy token to conduct cautious planning, consistent with the training process. Otherwise at states with certain transitions, we act aggressively to attain high rewards.
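One possible instantiation of this predictor, shown below, stores the training states in a KD-tree and returns the average estimated uncertainty of the nearest stored states; the choice of \(k\) and the averaging rule are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

class KDTreeUncertainty:
    """Nearest-neighbor uncertainty predictor: training states are the tree
    nodes and their estimated uncertainties u_t are the node values."""
    def __init__(self, train_states, train_uncertainties, k=5):
        self.tree = cKDTree(np.asarray(train_states, dtype=float))
        self.values = np.asarray(train_uncertainties, dtype=float)
        self.k = k

    def __call__(self, state):
        _, idx = self.tree.query(np.asarray(state, dtype=float), k=self.k)
        return float(self.values[idx].mean())
```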
Different from the conventional planning procedure of DTs, as for UNREST, we need to specify not only the target global return \(R_{1}\), but also the truncated return \(R_{1}^{h}\) and the initial return-span \(h_{1}=H\). After segmentation, the effective planning horizon of the trained sequences is reduced to the return-span \(h_{t}\). Once \(h_{t}\) reaches 1, we need to reset the target return and the return-span. Practically, we simply reset \(h_{t}\) to a fixed return horizon \(H\). Besides, we train a return prediction model \(R_{\theta}^{h}(\cdot)\) similar to that defined in Eq. 4 and take the upper percentile \(\eta\) of the predicted distribution as the new target return to attain. The hyperparameter \(\eta\) can be tuned for a high target return, and we do not need to consider targets at 'uncertain states' since they are just set as dummy tokens. The complete planning process is summarized in Alg. 1.
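The rollout logic can then be sketched as below (cf. Alg. 1). The environment and policy interfaces, the reuse of a threshold on the predicted uncertainty, and the convention of decrementing the remaining target return by each observed reward are all assumptions made for illustration; `target_return_fn` stands for the \(\eta\)-upper-percentile return predictor described above.

```python
def unrest_rollout(env, policy, uncertainty_fn, target_return_fn, H, eps, max_steps):
    """Uncertainty-guided planning loop: dummy conditions at uncertain states,
    truncated-return conditions elsewhere, with periodic span/target resets."""
    state = env.reset()
    history = []                                  # (span, return, state, action) tuples
    span = H
    target = target_return_fn(history, state)     # eta-upper-percentile target return
    for _ in range(max_steps):
        if uncertainty_fn(state) > eps:           # uncertain transition: act cautiously
            cond_return, cond_span = None, None
        else:                                     # certain transition: act aggressively
            cond_return, cond_span = target, span
        action = policy.act(history, state, cond_return, cond_span)
        next_state, reward, done, _ = env.step(action)
        history.append((cond_span, cond_return, state, action))
        target -= reward                          # common DT-style target update
        span -= 1
        if span <= 1:                             # reset the return-span and the target
            span = H
            target = target_return_fn(history, next_state)
        state = next_state
        if done:
            break
    return history
```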
## 5 Experiments
In this section, we conduct extensive experiments to answer the following questions. Q1: How does UNREST perform in different driving scenarios? Q2: How do components of UNREST influence its overall performance? Q3: Does our uncertainty estimation possess interpretability?
### Experiment Setup
In this section, we briefly describe the setup components of our experiments. Please find more details of model implementation, training, and evaluation processes in App. E.
**Datasets:** The offline dataset \(\mathcal{D}\) is collected from the CARLA simulator (Dosovitskiy et al., 2017) with its built-in Autopilot. Specifically, we collect 30 hours of data from 4 towns (Town01, Town03, Town04, Town06) under 4 weather conditions (ClearNoon, WetNoon, HardRainNoon, ClearSunset), saving tuple \((s_{t},a_{t},r_{t})\) at each timestep. More details about the state, action compositions, and our reward definitions are left to App. D.
**Metrics:** We evaluate models at training and new driving scenarios and report metrics from the CARLA challenge (Dosovitskiy et al., 2017; team, 2020) to measure planners' driving performance: infraction score, route completion, success rate, and driving score. Besides, as done in (Hu et al., 2022), we also report the normalized reward (the ratio of total return to the number of timesteps) to reflect driving performance at timestep level. Among them, the driving score is the most significant metric that accounts for various indicators like driving efficiency, safety, and comfort.
**Baselines:** First, we choose two IL baselines: vanilla Behavior Cloning (BC) and Monotonic Advantage Re-Weighted Imitation Learning (MARWIL) (Wang et al., 2018). Apart from the IL baselines, we also include state-of-the-art offline RL baselines: Conservative Q-Learning (CQL) (Kumar et al., 2020) and Implicit Q-Learning (IQL) (Kostrikov et al., 2021). Constraints-Penalized Q-Learning (CPQ) (Xu et al., 2022) is chosen as a safe (cautious) offline RL baseline. Besides, we select two classic Transformer-based offline RL algorithms as baselines: Decision Transformer (DT) (Chen et al., 2021) and Trajectory Transformer (TT) (Janner et al., 2021). Finally, we adopt three algorithms as rigorous baselines: Separated Latent Trajectory Transformer (SPLT) (Villaflor et al., 2022), Environment-Stochasticity-Independent Representations (ESPER) (Paster et al., 2022), and Dichotomy of Control (DoC) (Yang et al., 2022). These algorithms fit state transition models and employ generative training to mitigate DTs' limitations in stochastic environments.
### Driving Performance
We first implement all the models and evaluate their performance in training (Town03) and new (Town05) scenarios (Q1); the results are summarized in Tab. 1.
Analyzing the results, we first notice that DT (Chen et al., 2021) performs worst among all sequence models, with a significant gap to the simple offline RL baseline IQL (Kostrikov et al., 2021). In new scenarios, it even performs similarly to BC. We attribute this to the uncertainty of the global return it conditions on: when the conditioning target does not reflect the true outcomes of the agent's actions, DT learns to ignore the target return and makes decisions solely based on the current state, like BC. Furthermore, ESPER (Paster et al., 2022) and DoC (Yang et al., 2022) also perform poorly in both scenarios, which may result from ineffective adversarial and VAE training on long-horizon and complex driving demonstrations.
To tackle environmental stochasticity, SPLT learns to predict the worst-case state transitions and achieves the best infraction score. However, its overly cautious planning leads it to stand still in many scenarios, as in Fig. 3(c), resulting in an extremely poor route completion rate and normalized reward. TT instead learns a transition model that ignores environmental stochasticity and behaves aggressively. It attains the highest normalized reward in training scenarios since the metric prioritizes planners that rapidly move forward (but with low cumulative reward because of its short trajectory length caused by frequent infractions). In unseen driving scenarios, TT often misjudges the speed of preceding vehicles, resulting in collisions and lower normalized rewards (than ours), as in Fig. 3(a).
Notably, UNREST achieves the highest driving score, route completion rate, and success rate in both seen and unseen scenarios, and the highest normalized reward in new scenarios, without needing to learn transition or complex generative models. For the driving score, it surpasses the strongest baselines by over 5% in training scenarios (TT (Janner et al., 2021)) and by over 6% in new scenarios (SPLT (Villaflor et al., 2022)) in absolute value, achieving a reasonable balance between aggressive and cautious behavior. Across all metrics, the results demonstrate that UNREST obtains significant improvements in safety, comfort, and efficiency, and effectively increases the success rate of driving tasks. This demonstrates that the truncated returns successfully mitigate the impact of stochasticity and provide effective action supervision. In App. F.2, we further find that UNREST occupies only slightly more resources than DT, and runs significantly faster while consuming less memory than TT and SPLT, which additionally fit state transition models.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & Driving Score & Success Rate & Route Completion & Infraction Score & Normalized Reward \\ \hline \multicolumn{6}{l}{(Per-method results for BC, MARWIL, CQL, IQL, CPQ, DT, TT, SPLT, ESPER, DoC, UNREST, and the expert Autopilot.)} \\ \hline \hline \end{tabular}
\end{table}
Table 1: Driving performance on train (new) town and train (new) weather conditions in CARLA. Mean and standard deviation are computed over 3 seeds. All metrics are recorded in percentages (%) except the normalized reward. The best results are in bold and our method is colored in gray.
Figure 3: Typical failing cases of DT and SPLT, where UNREST performs reasonably.
### Ablation Study
The key components of UNREST include global return embedding, return-span embedding, ensemble-based uncertainty estimation, and the uncertainty-guided planning process. In this section, we conduct ablation experiments by separately removing these components to explore their impacts on the overall performance of UNREST (Q2). Results are shown in Tab. 2 and Tab. 3.
The ablation results show pronounced differences: removing any of these components leads to a decline in UNREST's driving score in new scenarios. Among them, the global return embedding has the slightest impact, which suggests that the highly uncertain global return may not provide effective guidance, or that the truncated return is already sufficient for making reasonable decisions (temporal locality, Prop. 2). When the return-span embedding is removed, the absolute driving score drops by about 6%. This implies that the return-span embedding provides necessary information about the number of timesteps available to achieve the target return. Removing the ensemble of return transformers (i.e., the GMM) induces a significant performance drop in both scenarios, showing that a single Gaussian distribution cannot adequately express the return distribution and leads to poor uncertainty calibration (consistent with the results in Fig. 6). Finally, when uncertainty estimation is disabled at test time, the driving and infraction scores of UNREST drop dramatically, which confirms the importance of cautious planning.
### Uncertainty Visualization
Finally, we verify the interpretability of UNREST's uncertainty estimation through visualizations (Q3). In Fig. 4(a), we observe an initial increase in uncertainty as the ego-vehicle enters the lane to follow another vehicle, owing to the lack of knowledge about the other vehicle's behavior. The uncertainty gradually decreases back below the threshold once the vehicle stabilizes in the following state. Fig. 4(b) shows a green-light crossing scenario. While approaching the light, uncertainty about its state causes the estimate to rise quickly. After the vehicle has moved away from the traffic light, the uncertainty immediately drops back below the threshold.
## 6 Conclusion: Summary and Limitations
The paper presents UNREST, an uncertainty-aware decision transformer to apply offline RL in long-horizon and stochastic driving environments. Specifically, we propose a novel uncertainty measurement by computing the divergence of return prediction models. Then based on properties we discover in driving tasks, we segment the sequence w.r.t. estimated uncertainty and adopt truncated returns as conditioning goals. This new condition helps UNREST to learn policies that are less
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline & Driving Score & Success Rate & Route Completion & Infraction Score & Normalized Reward \\ \hline \multicolumn{6}{l}{(Results for UNREST without the global return embedding, return-span embedding, return-model ensemble, and uncertainty-guided planning, compared with the full model.)} \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study results for UNREST on train town and train weather conditions.
Figure 4: Visualizations of UNREST’s uncertainty estimation results.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline & Driving Score & Success Rate & Route Completion & Infraction Score & Normalized Reward \\ \hline \multicolumn{6}{l}{(Results for UNREST without the global return embedding, return-span embedding, return-model ensemble, and uncertainty-guided planning, compared with the full model.)} \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study results for UNREST on new town and new weather conditions.
affected by stochasticity. Dynamic uncertainty estimation is also integrated during inference for cautious planning. Empirical results demonstrate UNREST's superior performance in various driving scenarios, its lower resource occupation, and the effectiveness of our uncertainty estimation strategy.
One limitation of this work is that UNREST's inference process is somewhat complex, requiring auxiliary return and uncertainty estimation models as well as additional hyperparameters. One possible direction for improvement is to integrate return and uncertainty prediction into the model architecture, which we leave for future work. Although this work is evaluated in the CARLA simulator, we believe the proposed framework can surmount the sim-to-real gap and benefit practical autonomous driving. |
2310.00496 | The Sparsity Roofline: Understanding the Hardware Limits of Sparse
Neural Networks | We introduce the Sparsity Roofline, a visual performance model for evaluating
sparsity in neural networks. The Sparsity Roofline jointly models network
accuracy, sparsity, and theoretical inference speedup. Our approach does not
require implementing and benchmarking optimized kernels, and the theoretical
speedup becomes equal to the actual speedup when the corresponding dense and
sparse kernels are well-optimized. We achieve this through a novel analytical
model for predicting sparse network performance, and validate the predicted
speedup using several real-world computer vision architectures pruned across a
range of sparsity patterns and degrees. We demonstrate the utility and
ease-of-use of our model through two case studies: (1) we show how machine
learning researchers can predict the performance of unimplemented or
unoptimized block-structured sparsity patterns, and (2) we show how hardware
designers can predict the performance implications of new sparsity patterns and
sparse data formats in hardware. In both scenarios, the Sparsity Roofline helps
performance experts identify sparsity regimes with the highest performance
potential. | Cameron Shinn, Collin McCarthy, Saurav Muralidharan, Muhammad Osama, John D. Owens | 2023-09-30T21:29:31Z | http://arxiv.org/abs/2310.00496v2 | # The Sparsity Roofline: Understanding the Hardware Limits of Sparse Neural Networks
###### Abstract
We introduce the Sparsity Roofline, a visual performance model for evaluating sparsity in neural networks. The Sparsity Roofline jointly models network accuracy, sparsity, and theoretical inference speedup. Our approach does not require implementing and benchmarking optimized kernels, and the theoretical speedup becomes equal to the actual speedup when the corresponding dense and sparse kernels are well-optimized. We achieve this through a novel analytical model for predicting sparse network performance, and validate the predicted speedup using several real-world computer vision architectures pruned across a range of sparsity patterns and degrees. We demonstrate the utility and ease-of-use of our model through two case studies: (1) we show how machine learning researchers can predict the performance of unimplemented or unoptimized block-structured sparsity patterns, and (2) we show how hardware designers can predict the performance implications of new sparsity patterns and sparse data formats in hardware. In both scenarios, the Sparsity Roofline helps performance experts identify sparsity regimes with the highest performance potential.
## 1 Introduction
Deep neural networks are often over-parameterized (Howard et al., 2019; Tan and Le, 2019) and their weights or parameters can be eliminated (_pruned_) to improve inference latency and/or decrease network size (LeCun et al., 1989; Han et al., 2015; Molchanov et al., 2017; Zhu and Gupta, 2018) without affecting accuracy. Depending on the _pattern_ and _degree_ of sparsity, which together constitute a _sparsity configuration_, networks exhibit widely different accuracy and runtime behavior. This presents major problems for machine learning practitioners who wish to find the best sparsity pattern and degree that balances accuracy loss and performance constraints for their specific application. Obtaining the accuracy corresponding to a sparsity pattern and degree typically requires some form of network fine-tuning (Frankle and Carbin, 2019), making it highly inefficient to estimate the impact of different sparsity configurations by trying hundreds of combinations of hyperparameters.
Thus we hope to predict which sparsity combinations might be most fruitful without fine-tuning them all. But accurately estimating the effects that a specific sparsity configuration has on inference runtime poses a different set of challenges: (1) which metric should we use to estimate runtime performance, and (2) how do we obtain the runtime performance of sparsity patterns that are either unimplemented or have unoptimized implementations? To illustrate the challenge of identifying the right metric, consider the total floating point operations (FLOPs) performed during sparse matrix operations such as matrix multiplication (a common operation in neural networks (Chetlur et al., 2014)). FLOPs are frequently used to evaluate the performance of pruned models (Han et al., 2015; Molchanov et al., 2017; Zhu and Gupta, 2018; Frankle and Carbin, 2019; Lee et al., 2019; Hoefler et al., 2021; Blalock et al., 2020). Table 1 illustrates the limitations of this metric. Here, we show two weight matrices that provide a counterexample to the notion that FLOPs are positively correlated with measured runtime. The structured weight matrix shown on the left side of the table has 1.57\(\times\) more FLOPs than the unstructured matrix on the right, but runs nearly 6\(\times\) faster.
Addressing the challenge of _estimating_ optimized runtime performance is even harder. While performance experts have implemented computation kernels specifically targeting sparse neural networks (Gale et al., 2020; Sarkar et al., 2020; Chen et al., 2021; Vooturi and Kothapalli, 2019), there are significant gaps. For example, NVIDIA's cuSparse library provides optimized GPU kernels for block-sparse matrices, but they are primarily optimized for larger block sizes such as 16\(\times\)16 and 32\(\times\)32 (Yamaguchi and Busato, 2021).
As discussed in Section 4.1, using smaller block sizes often leads to higher accuracies; however, in the absence of computation kernels optimized for these sizes, it is impossible
to estimate their effect on runtime via benchmarking.
To help practitioners better understand the complex relationship between sparsity configuration, accuracy, and inference performance (both current and potential), we introduce a novel visual model named the _Sparsity Roofline_. Our work builds upon the well-known Roofline model (Williams et al., 2009), which provides a visual representation of the performance of a given computation kernel.
In the Roofline model, users compute the _arithmetic intensity_ of the given kernel, and plot it against one or more hardware-specific upper limits (the Rooflines) defined by the peak memory bandwidth and peak floating-point throughput of that hardware architecture. In a similar vein, the Sparsity Roofline plots network accuracy against the theoretical speedup of sparse over dense models, with additional sparsity information. This clearly shows the two most important aspects of weight pruning to a machine learning practitioner--accuracy and performance--and can be analyzed across any model architecture, sparsity hyperparameters, or hardware accelerator. Plotting the Sparsity Roofline requires sampling the accuracy values corresponding to the sparsity configurations being analyzed, which can be easily done with masking-based approaches and existing software libraries (Paszke et al., 2019; Joseph et al., 2020). The only other metrics needed are the arithmetic intensity, which can be either profiled or computed by hand, and the hardware-specific peak computational throughput (in FLOPs/s) and memory bandwidth (in bytes/s).
We validate and demonstrate the usefulness of the Sparsity Roofline by analyzing several real-world computer vision models, including convolutional neural networks (CNNs), vision transformers (ViT), and multi-layer perceptron (MLP)-based networks. We investigate which sparsity characteristics have the greatest impact on accuracy and GPU performance, and point out promising areas to focus on for kernel optimization. Finally, we present two case studies: (1) analyzing tradeoffs associated with block-structured sparsity for deep learning practitioners, and (2) efficient sparsity patterns for future hardware architectures.
This paper makes the following contributions:
1. It introduces the Sparsity Roofline visual model for understanding accuracy vs. latency trade-offs for currently unoptimized and unimplemented kernel designs.
2. It uses the Sparsity Roofline to benchmark and analyze several real-world computer vision architectures pruned to a range of sparsity patterns and levels.
3. It demonstrates the use of the Sparsity Roofline in two distinct use cases: to analyze block-sparsity structures for DL practitioners, and to help inform future sparse hardware implementations.
## 2 Background
In this Section, we provide a brief overview of neural network pruning, followed by a description of the traditional Roofline model.
### Neural Network Pruning
Weight pruning involves setting a subset of neural network weights to zero, followed by a training or fine-tuning stage that attempts to recover any lost accuracy (Hoefler et al., 2021). Pruning can be unstructured (fine-grained), where individual non-zero values are eliminated, or structured (coarse-grained), where groups of non-zero values are removed instead, each resulting in a different _sparsity pattern_. The _sparsity level_ refers to the fraction of zero weights to total weights and is expressed as a percentage in this paper. Structured pruning has been demonstrated to achieve better runtime performance, typically at the cost of decreased accuracy (Narang et al., 2017; Vooturi et al., 2018; Li et al., 2022). A number of algorithms have been proposed in the literature for accuracy recovery of pruned models (Deng et al., 2021; Renda et al., 2020; Hoefler et al., 2021). In this paper, we use the learning rate rewinding approach proposed by Renda et al. (2020).
### The Roofline Model
The Roofline model (Williams et al., 2009) is a visual performance model that shows how well a computational kernel
\begin{table}
\begin{tabular}{l c c} \hline \hline & Block (32\(\times\)32) & Unstructured \\ \hline Matrix Heatmap & & \\ \hline
**Runtime (ms)** & **0.613** & **3.526** \\
**GFLOPs** & **24.4** & **15.5** \\ TFLOPs/s & 39.9 & 4.4 \\ Number of Nonzeros & 1.95M & 1.23M \\ \(m\times k\)-dimensions (sparse operand) & 3072\(\times\)768 & 3072\(\times\)768 \\ \(n\)-dimension (dense operand) & 6272 & 6272 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Runtime vs. GFLOPs: SpMM performance on (32\(\times\)32) block sparsity vs. unstructured with a similar amount of nonzeros. White indicates zero-valued weights, blue non-zero. The block sparse matrix has more FLOPs but has a nearly 6\(\times\) better runtime latency vs. unstructured.**
utilizes the hardware. The Roofline model plots the arithmetic intensity (FLOPs computed / bytes read and written) on the x-axis and the throughput (FLOPs per second) on the y-axis. This enables users to visually observe whether their program is memory-bound or compute-bound, and to what extent. The upper bound (Roofline) of the model is determined by both the hardware's peak compute throughput and its peak memory bandwidth. Although there are variants that consider cache hierarchies (Ilic et al., 2014), the traditional Roofline model that we discuss in this paper assumes that perfect caching (including user-managed caching such as shared memory and local registers) is necessary to achieve peak memory bandwidth utilization; we thus use DRAM memory bandwidth. The hardware throughput component can be increased with additional hardware acceleration for a specific application (e.g., Tensor Cores for deep learning (Jia et al., 2018)). The utility of the Roofline model comes from its ability to succinctly show the potential improvement for a given program with respect to the hardware speed-of-light.
### Evaluating Sparse Neural Networks
Figure 1 plots the Roofline for individual SpMM matrices across all benchmarked computer vision models. The line in each plot is the "Roofline", which slopes upwards during the memory-bound region where the arithmetic intensity (AI) is too low to saturate the compute resources, and flattens out once the AI reaches a hardware-specific point, called the _knee_. The dashed line is for Tensor Cores and the solid line for CUDA cores, where the Tensor Core knee has almost 10x the AI of CUDA cores.
The points that are closest to the Roofline are utilizing the GPU the best, with higher sparsities being more memory bound and lower sparsities approaching and becoming compute bound in some situations, such as when the inner-dimension of the matrix product is higher. The Roofline model is a significant improvement over analyzing FLOPs, but it has three major drawbacks in optimizing sparse deep learning models:
1. The Roofline model lacks any concept of accuracy, and GFLOPs/s is challenging to use to compare the relative performance between sparse and dense layers.
2. The Roofline model is only meaningful per-layer instead of per-model. An entire model is almost always a combination of layers, where some are memory-bound and others are likely compute-bound. Therefore calling the entire model "compute bound" or "memory bound" is misleading at best.
3. The Roofline model requires benchmarking to compute GFLOPs/s. Even if optimal kernels exist, such as cuBLAS for dense GEMM operations, the surrounding benchmarking framework is time-consuming to implement, test, and maintain.
Our proposed solution, the Sparsity Roofline, directly addresses these concerns. It is not meant to replace the Roofline model, but instead _complement_ it for the specific use case of designing and optimizing sparse deep-learning kernels.
## 3 The Sparsity Roofline
The Sparsity Roofline is designed to be an easy-to-use tool for deep learning practitioners interested in sparsity, performance experts, and hardware designers. It achieves this goal by addressing the three major issues with the existing Roofline model described in Section 2.3.
The Sparsity Roofline plots accuracy vs. theoretical speedup, as opposed to the traditional Roofline's GFLOPs/s vs. arithmetic intensity. Accuracy is almost always the most important optimization metric in DNNs, and therefore we place it on the \(y\) axis. Similarly, replacing GFLOPs/s with theoretical speedup makes it far easier to understand relative performance differences of a sparse and dense layer or model. Further, the sparsity configuration is encoded into the point and/or line style in order to easily compare different sparsity design decisions, which are crucial for optimal performance.
The Sparsity Roofline converts per-layer peak GFLOPs/s to per-model minimum or _speed-of-light_ (SoL) latency. We first calculate a per-layer SoL latency, then sum the layer-wise latencies for the model SoL latency. This represents
Figure 1: **Roofline, Sparse vs. Dense**: Roofline model measuring throughput of SpMM on unstructured sparse layers and GEMM on dense layers from all trained models, on a single NVIDIA A100. The solid line is the CUDA core peak throughput, the dashed line the Tensor core peak throughput. Unstructured sparsity kernels in cuSPARSE do not use Tensor cores.
the true performance metric that practitioners care about: end-to-end latency of the entire model.
Like the traditional Roofline, the Sparsity Roofline does not require benchmarking. We only need to look up the hardware peak GFLOPs/s and peak GB/s of a hardware architecture, and compute the per-layer GFLOPs and GBs read/written by hand in order to calculate arithmetic intensity.
The Sparsity Roofline for unstructured sparsity is shown in Figure 2, and for ConvNeXt-Tiny and Swin-Tiny in Figures 3 and 4, respectively. We will now describe how these Sparsity Rooflines are constructed.
Because the Sparsity Roofline reports accuracy, each model must be fine-tuned at a given sparsity starting from a pre-trained dense model. Fine-tuning for sparsification is standard practice in deep learning and is the only way to quantify accuracy. We use the learning-rate rewinding technique proposed by Renda et al. (2020) and the Condensa library by Joseph et al. (2020). Our model is most accurate when the sparse kernels are well optimized and thus approach the speed-of-light. The sparse kernel does not need to be compute bound: if it is memory bound, the closer it is to the device's peak memory throughput, the more accurate our model is. This is discussed in detail in Section 3.4.
### Use Cases
The Sparsity Roofline is designed to quantify the performance-accuracy tradeoff for a specific combination of hardware, model architecture and sparsity configuration, such as sparsity pattern, sparsity level or percent, and sparse data format. Thus it can be used by both software and hardware engineers who want to understand how an optimized kernel would perform, but do not want to go through the trouble of implementing and benchmarking sub-optimal scenarios. In Section 4.1, we show how a deep-learning practitioner may use this tool to investigate optimal block-structure sparsity patterns, and in Section 4.2 we show how a hardware engineer can investigate different N:M sparsity patterns and sparse data formats to implement in hardware, e.g., for new sparse Tensor core formats.
In contrast, the Sparsity Roofline is not meant for engineers who already have a specific sparsity-configuration optimization target. In that scenario, a combination of the Roofline model, benchmarking / profiling, and lower-level optimizations are likely the correct tools to understand detailed performance statistics that would inform kernel design, such as load balancing and caching.
### Constructing the Sparsity Roofline
The Sparsity Roofline plots accuracy vs. theoretical speedup from sparsity. We start by deriving the theoretical speedup.
First, we need to define the kernel's GFLOPs and GBs read/written to global memory. Equation 1 shows this for SpMM (\(\text{Sparse}\times\text{Dense}=\text{Dense}\) matrix multiply); the index data depends on the sparse data format. For the compressed sparse row (CSR) format, it is \(\textit{nnz}+m+1\).
\[\begin{split}\text{SpMM FLOPs}&=\textit{nnz}\times n \\ \text{SpMM GB}&=\textit{nnz}+n\times k+m\times n+ \text{index data}\end{split} \tag{1}\]
Next, we define the per-layer speed-of-light latency as the maximum runtime for the kernel's given GFLOPs and GBs read/written to global memory. Using the device's peak GFLOPs and GB/s, this is computed as
\[\text{Per-Layer SoL}=\text{max}\bigg{(}\frac{\text{GFLOP}}{\text{ Peak GFLOP/s}},\frac{\text{GB}}{\text{Peak GB/s}}\bigg{)} \tag{2}\]
Finally, we sum the \(L\) per-layer runtimes for the dense model and the same corresponding sparse model, and take their runtime ratio as the speedup, using the dense computation as the baseline. For example, if the sparse latency is 1 ms and the dense latency is 2 ms, the speedup would be 2x.
Figure 2: **Per-Model Sparsity Roofline**: The Sparsity Roofline for several computer vision models on ImageNet-100 pruned with global magnitude pruning. Speedup is calculated per-layer using the maximum compute or memory bound latency, and then summed per model. The machine learning engineer can choose the architecture that provides the optimal balance of accuracy, speedup, and implementation difficulty.
\[\text{Speedup at SoL}=\frac{\sum_{l=1}^{L}\text{Dense SoL Runtime}_{l}}{\sum_{l=1}^{L}\text{Sparse SoL Runtime}_{l}} \tag{3}\]
These equations make the same assumption as the Roofline model: the maximum achievable FLOPs/s is the hardware's peak compute throughput, and each byte of data may be read from or written to global memory once, at the hardware's peak memory throughput, with perfect caching (including shared memory or local registers) for any intermediate reads.
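Eqs. 1-3 can be evaluated directly from layer shapes without any benchmarking. The Python sketch below illustrates the calculation; the FP16 data size, the A100-like peak throughput and bandwidth, and the example layer list are assumptions chosen for illustration.

```python
# Sketch of Eqs. 1-3: per-layer speed-of-light (SoL) latency and per-model speedup.
PEAK_GFLOPS = 312_000.0   # assumed peak compute throughput (GFLOP/s)
PEAK_GBPS = 1_555.0       # assumed peak DRAM bandwidth (GB/s)
BYTES = 2                 # assumed FP16 operands

def spmm_sol_latency(m, k, n, nnz, index_elems):
    """SoL latency (s) of one SpMM layer: sparse (m x k) weight times dense (k x n)."""
    gflop = nnz * n / 1e9                                    # Eq. 1 (FLOPs)
    gb = (nnz + n * k + m * n + index_elems) * BYTES / 1e9   # Eq. 1 (data moved)
    return max(gflop / PEAK_GFLOPS, gb / PEAK_GBPS)          # Eq. 2

def gemm_sol_latency(m, k, n):
    gflop = m * k * n / 1e9
    gb = (m * k + n * k + m * n) * BYTES / 1e9
    return max(gflop / PEAK_GFLOPS, gb / PEAK_GBPS)

def model_speedup_at_sol(layers, sparsity):
    """Eq. 3: summed dense SoL latency over summed sparse SoL latency."""
    dense = sum(gemm_sol_latency(m, k, n) for m, k, n in layers)
    sparse = 0.0
    for m, k, n in layers:
        nnz = int((1.0 - sparsity) * m * k)
        index_elems = nnz + m + 1          # CSR index data, as in the text
        sparse += spmm_sol_latency(m, k, n, nnz, index_elems)
    return dense / sparse

# Toy example: two layers at 87.5% unstructured sparsity.
layers = [(3072, 768, 6272), (768, 3072, 6272)]
print(f"SoL speedup: {model_speedup_at_sol(layers, 0.875):.2f}x")
```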
### Evaluating Accuracy
To compute accuracy for each model and sparsity configuration, we start by pre-training one baseline model per architecture. We pre-train without sparsity for 300 epochs on ImageNet-100 (Vinyals et al., 2016). This dataset is a subset of the ImageNet-1K dataset (Deng et al., 2009) created by sampling 100 of the 1000 classes in ImageNet-1K, which allows us to train a larger number of models, sparsity patterns, and sparsity levels.
All model definitions are from the _timm_ library (Wightman, 2019) and each is trained with the same set of data augmentations, hyperparameters, and training schedules based on modern architectures such as DeiT (Touvron et al., 2021), Swin (Liu et al., 2021) and ConvNeXt (Liu et al., 2022). This includes data augmentations RandAugment (Cubuk et al., 2020), MixUp (Zhang et al., 2018) and CutMix (Yun et al., 2019), a cosine decay learning rate schedule (Loshchilov and Hutter, 2017), and the AdamW optimizer (Loshchilov and Hutter, 2019) with a base learning rate of \(10^{-3}\) and 20 epochs of warm up. Using these uniform settings across all models ensures a fair comparison with an identical training procedure. We store the checkpoint with the minimum validation loss and use this for fine-tuning.
We apply an incremental fine-tuning algorithm based on learning rate rewinding (Renda et al., 2020) to the baseline model to obtain the accuracy values corresponding to the following sparsity levels: 50%, 75%, 87.5%, 93.75% and 96.875%. This pattern involves halving the number of nonzeros per iteration, which ends up slightly biasing the results towards higher sparsities where sparse kernels are typically more performant.
For a given combination of model and sparsity pattern, e.g., ConvNeXt-Tiny with unstructured sparsity, we prune the weights with global magnitude pruning to the lowest sparsity level of 50%. We rewind the learning rate schedule but with a shorter 60 epoch total decay rather than 300 epochs. After 60 epochs we increase the sparsity level by \((1-\text{Sparsity})/2\), prune the additional weights, and rewind the learning rate again. We repeat this a total of five times within a single run to fine-tune five sparsity levels for our model / sparsity pattern combination in 300 epochs total, which is the same number as during training. We find this process to be simple and efficient, and quantitatively works well for ImageNet-100. For more challenging datasets such as ImageNet-1k or ImageNet-22k, the fine-tuning schedule would likely need to be increased.
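A minimal sketch of this incremental global-magnitude-pruning schedule is shown below. The choice of prunable modules, the hard-masking step, and the `fine_tune_fn` hook are illustrative assumptions; in practice the pruning mask must also be re-applied (or maintained) during fine-tuning so that pruned weights do not regrow.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def apply_global_magnitude_pruning(model, sparsity):
    """Zero the globally smallest-magnitude weights across Linear/Conv2d layers."""
    weights = [m.weight for m in model.modules()
               if isinstance(m, (nn.Linear, nn.Conv2d))]
    magnitudes = torch.cat([w.abs().flatten() for w in weights])
    k = int(sparsity * magnitudes.numel())
    if k == 0:
        return
    threshold = torch.kthvalue(magnitudes, k).values
    for w in weights:
        w.mul_((w.abs() > threshold).to(w.dtype))    # hard mask

def incremental_prune_and_finetune(model, fine_tune_fn, start=0.5, steps=5,
                                   epochs_per_step=60):
    """Halve the nonzeros each step: 50%, 75%, 87.5%, 93.75%, 96.875%."""
    sparsity = start
    for _ in range(steps):
        apply_global_magnitude_pruning(model, sparsity)
        fine_tune_fn(model, epochs=epochs_per_step)  # rewound LR schedule, as in the text
        sparsity += (1.0 - sparsity) / 2.0
```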
### Validation
It is important to understand the cases where speed-of-light (SoL) speed-up equals the actual measured speed-up, without having to implement and optimize a specific sparse kernel. We can easily show that the speedup at SoL is precisely equal to the measured speed-up when the sparse and dense kernels are _equally optimized_. Specifically, at a per-layer level this occurs when the percentage of the per-layer SoL latency for dense and sparse are equal. For example, if a given GEMM kernel is compute bound and obtains 90% of the SoL GFLOPs/s, and the corresponding SpMM kernel is memory bound and also obtains 90% of the SoL GB/s, then the percent of SoL is identical and our model will predict a SoL speedup that is equal to the measured speedup. More formally:
\[\text{Per-Layer Speedup at SoL} \stackrel{{?}}{{=}}\text{Per-Layer Speedup Meas.}\] \[\frac{\text{Dense SoL Runtime}}{\text{Sparse SoL Runtime}} =\frac{\text{Dense Meas. Runtime}}{\text{Sparse Meas. Runtime}}\] \[\frac{\text{Dense SoL Runtime}}{\text{Dense Meas. Runtime}} =\frac{\text{Sparse SoL Runtime}}{\text{Sparse Meas. Runtime}}\] \[\text{Dense Per-Layer \% of SoL} =\text{Sparse Per-Layer \% of SoL}\]
In the last equation, note that the percent of speed-of-light (or fraction of speed-of-light) is defined as the ratio of the SoL latency to the measured latency. The measured latency can be as small as the SoL latency but no smaller, by definition, so this fraction is bounded between 0 and 1 (or 0% and 100%).
The same equation holds for per-model aggregation, but in this case each individual term is a summation of all layers.
\[\text{Per-Model Speedup at SoL} \stackrel{{?}}{{=}}\text{Per-Model Speedup Meas.}\] \[\frac{\sum_{l=1}^{L}\text{Dense SoL Runtime}_{l}}{\sum_{l=1}^{L}\text{Sparse SoL Runtime}_{l}} =\frac{\sum_{l=1}^{L}\text{Dense Meas. Runtime}_{l}}{\sum_{l=1}^{L}\text{Sparse Meas. Runtime}_{l}}\] \[\text{Dense Per-Model \% of SoL}=\text{Sparse Per-Model \% of SoL}\]
At the aggregated per-model level, the SoL speedup is equal to the measured speedup when the sparse and dense models are equally optimized, such that the percentage of the per-model SoL latency for dense and sparse are equal.
## 4 Case Study
### DL Practitioner
Suppose Alice is researching pruning algorithms and wants to find out whether block sparsity can provide effective inference latency improvements on NVIDIA GPUs for ConvNeXt (Liu et al., 2022) and Swin (Liu et al., 2021), two state-of-the-art computer vision models. She would typically start by training, pruning, and then fine-tuning these models for various block sizes, say 2\(\times\)2, 4\(\times\)4, 8\(\times\)8, 16\(\times\)16 and 32\(\times\)32, to capture a sufficiently large sample of the search space.
Alice would like to compare the speedups that her block pruning scheme achieves w.r.t. unstructured global magnitude pruning, but she would prefer to avoid implementing a custom block-sparse GPU kernel until she is sure it's the right approach. She then considers using existing kernels from a vendor-optimized library such as cuSparse (Yamaguchi and Busato, 2021), but decides against it for two reasons: (1) writing a custom operator for a deep learning framework is not trivial, and (2) she notices in the documentation for the vendor-optimized library that it achieves poor performance for smaller block sizes, and may thus not provide a fair comparison across block sizes.
Rather than trying to measure actual latency numbers, Alice now plans to use some simple metrics to estimate potential speedups. She starts by counting the FLOPs of each sparse model. However, since her blocked SpMM and unstructured SpMM kernels would be running on NVIDIA Tensor Cores and CUDA cores, respectively, the former will end up achieving higher throughput than the latter. Additionally, since Tensor Cores necessitate more efficient memory bandwidth utilization, she would also need to account for the reads and writes that her sparse models perform during inference.
To address the above concerns, Alice instead generates the Sparsity Roofline for the block-sparse models she has trained to quickly approximate the speedups she would achieve for various block sizes. Figures 3a and 3b show the Sparsity Roofline models Alice would generate for ConvNext and Swin with a batch size of 1. By observing the accuracy and performance tradeoffs that the Sparsity Roofline depicts, Alice is now able to determine that her models achieve higher speedups using larger block sizes, but they only maintain accuracy with smaller block sizes of 2\(\times\)2 and 4\(\times\)4. _Importantly, Alice was able to arrive at this conclusion without needing to go to the effort of writing her own optimized sparse kernels for a variety of block sizes._ She now realizes that if she invests her time in optimizing for smaller block sizes, she will get reasonable speedups without sacrificing accuracy.
### Hardware Architect
Bob is a hardware architect designing next-generation Tensor Cores for future GPUs and is investigating alternative N:M patterns for future hardware support. He would like to quickly assess the accuracy and performance implications of the new N:M patterns before he puts in any effort into design and simulation. His goal is to find patterns that achieve accuracy numbers similar to the currently supported 2:4 pattern, but are at least 30% faster given the same Tensor Core throughput.
Bob's target workload for these N:M patterns is inference with a batch size of 1 on ConvNeXt and Swin. These two network architectures, in addition to providing state-of-the-art accuracies on their tasks, are also comprised of a variety of layer types, and involve matrix operations of various shapes and sizes, making them fairly representative. The N:M schemes he chooses to investigate are 1:4, 2:8 and 2:16, in addition to the pre-existing 2:4 pattern.
Bob works with a machine learning engineer to get these two networks trained, pruned, and fine-tuned for each of the above sparsity patterns, and then obtains the corresponding accuracy numbers. He now needs to determine how these models would perform if hardware support for the new N:M patterns was available.
Instead of developing RTL code for these new hardware units and simulating the workloads, which would be labor-intensive and time-consuming, Bob would prefer a quicker way of estimating the runtime performance of each of these pruned models on their respective hypothetical hardware units. Bob could simply use FLOPs to estimate speedups for each pattern (e.g., going from 2:4 to 1:4 is a 2x speedup); however, note that Bob would also need to account for the memory system's ability to keep up with the Tensor Core's throughput to get a more accurate performance estimation.
To address these concerns, Bob constructs the Sparsity Roofline for the N:M pruned models to quickly estimate the speedups he would achieve w.r.t. the accuracy. The resulting Sparsity Roofline plots are shown in Figures 4a and 4b. From the Sparsity Roofline, Bob notices that at the same Tensor Core throughput, 2:16 sparsity achieves nearly a 1.8\(\times\) speedup over dense and is over 30% faster than the 2:4 sparsity pattern, meeting his original goal. He also notices that the 1:4 and 2:8 patterns are promising in cases where accuracy preservation is more important than raw speedup. Similar to Alice (see Section 4.1), Bob was able to estimate his performance metrics significantly faster using the Sparsity Roofline.
## 5 Discussion
### Unstructured Sparsity
Global magnitude pruning with re-training has become a widely applicable technique due to its simplicity and effectiveness. Figure 5 shows how this technique can reach almost 90% sparsity with minimal accuracy loss. In the context of small computer vision models, Figure 2 indicates that accuracy can only be preserved to about a \(1.5\times\) speedup over dense. While a 50% speedup would be somewhat substantial, the time cost of fine-tuning may not be worthwhile in every scenario. Additionally, a 50% speedup is far less than what FLOP counts would suggest. At 87.5% sparsity, a network requires only \(1/8\) the FLOPs of the original, yet Figure 2 tells us that an \(8\times\) speedup is infeasible in any case. To make sparsity generally viable from a performance perspective, we need to understand and alleviate the underlying factors that inhibit SpMM from achieving the speedups suggested by the FLOP reduction. Despite the wide range of factors that affect SpMM kernel performance on GPUs, such as load balancing and efficient data reuse (Gale et al., 2020; Bell and Garland, 2009), we only consider the factors that make up the Sparsity Roofline. Thus, in our analysis, we account for FLOPs, bytes read/written, hardware peak throughput, and hardware peak memory bandwidth (the same as the Roofline model).
One of the most glaring downsides of unstructured sparsity is its inability to use the GPU's tensor cores, which dense models exploit effectively.1 The Roofline model
Figure 4: **N:M Sparsity Roofline: The Sparsity Roofline for (a) ConvNext-Tiny and (b) Swin-Tiny on ImageNet-100 pruned with various N:M patterns. Calculations are done using a batch size of 1 and NVIDIA A100 hardware specs.**
Figure 3: **Block-Sparsity Roofline: The Sparsity Roofline for (a) ConvNext-Tiny and (b) Swin-Tiny on ImageNet-100 pruned with various block pruning sizes. Calculations are done using a batch size of 1 and NVIDIA A100 hardware specs.**
in figure 1 shows the elevated peak tensor core throughput above the peak CUDA core throughput. For the A100, the tensor core throughput is 16x faster than the CUDA core throughput (NVIDIA, 2020). To address the hardware discrepancy and put sparse and dense on a level playing field, we opt to investigate sparsity structures that can leverage the tensor cores.
### Block Sparsity
The Sparsity Roofline shows two benefits of block sparsity: (1) the ability to use the high-throughput sparse tensor cores, and (2) the reduced index data from the block sparse format. The reduced index data results from the sparsity pattern's more coarse-grained structure, where a single block index refers to multiple nonzeros. The index data is reduced by a factor of the block size.
Despite the reduction in reads and writes from block sparsity, Figure 6 shows that the vast majority of block-pruned weights are still memory bound. Because of this, the Sparsity Rooflines for different block sizes in Figures 3(a) and 3(b) see only a small improvement compared to unstructured sparsity. The accuracy-speedup tradeoff is slightly better than unstructured sparsity at best, and only just as good in the worst case.
While the heatmap in Table 1 suggests that block sparsity should perform much better than unstructured, we observe that the accuracy loss from large block sizes (16\(\times\)16 and 32\(\times\)32) is too significant to be viable. When we therefore restrict our analysis to smaller block sizes, we see that we can't achieve the full throughput from the tensor cores due to the memory bottleneck seen in Figure 6. The smaller block sizes are completely memory-bound, whilst the larger block sizes are less so, and can thus get more throughput from the tensor cores.
### N:M Sparsity
NVIDIA's sparse tensor cores provide an interesting alternative to block sparsity, allowing adopters to leverage the throughput of the tensor cores whilst being able to prune weights in a fine-grained manner. While the coarse-grained structure of block sparsity restricts the freedom of pruning algorithms' weight selection and hurts accuracy, the fine-grained structured sparsity for the N:M patterns should theoretically hurt accuracy less.
In addition to the accuracy benefits of a fine-grained structure, the N:M formats can reduce the memory overhead for indexing data. With dedicated hardware support, N:M formats only need \(\log_{2}(M)\) bits to store the index of each nonzero inside the \(M\)-wide blocks; for 2:4, that's only 2 bits per nonzero.
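As a worked illustration of these index-data overheads, the sketch below compares the approximate index bytes of CSR, block-CSR, and N:M layouts for a single weight matrix. The 4-byte index assumption and the simplified block-CSR accounting are ours, not the paper's.

```python
import math

def index_overhead_bytes(m, nnz, fmt, block=1, M=4, idx_bytes=4):
    """Rough index-data size (bytes) of an m x k sparse weight in a given format."""
    if fmt == "csr":          # one column index per nonzero plus m + 1 row pointers
        return (nnz + m + 1) * idx_bytes
    if fmt == "block_csr":    # one column index per nonzero block plus block-row pointers
        return (nnz / (block * block) + m / block + 1) * idx_bytes
    if fmt == "n_to_m":       # log2(M) bits per kept nonzero
        return nnz * math.log2(M) / 8
    raise ValueError(fmt)

# Example: a 3072 x 768 weight at 50% sparsity.
m, k = 3072, 768
nnz = (m * k) // 2
for fmt, kwargs in [("csr", {}), ("block_csr", {"block": 32}), ("n_to_m", {"M": 4})]:
    size_mb = index_overhead_bytes(m, nnz, fmt, **kwargs) / 1e6
    print(f"{fmt:>10}: {size_mb:.3f} MB of index data")
```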
Figures 4(a) and 4(b) show the Sparsity Roofline for N:M formats. We see that the various N:M patterns achieve a better performance-accuracy tradeoff over unstructured than what
Figure 5: **Accuracy vs. Sparsity and FLOPs: A common but misleading means of evaluating sparse models. Plotting accuracy (here ImageNet-100 top-1 accuracy) vs. sparsity (top) and FLOPs (bottom) for various models implies higher sparsity means higher GPU performance, which does not take memory bandwidth into account.**
Figure 6: **Roofline, Block Sparse vs. Dense: Roofline model measuring throughput of SpMM on all block sparse layers and GEMM on dense layers from all trained models, on a single NVIDIA A100.**
block sparsity was able to achieve. N:M is an improvement over block sparsity in our pruned networks due to the reduced accuracy degradation and minimal index data overhead.
### Feature Overhead
Finally, we have not yet mentioned the read and write overhead of the input and output features of each layer. Equation 1 shows the data for the input and output features as \(n\times k\) and \(m\times n\) (respectively). Akin to Amdahl's law, we can only expect to reduce the number of memory accesses for pruned matrices. Therefore, regardless of our pruning strategy, the input and output features will always incur a fixed number of reads and writes as overhead. Figure 7 shows the severity of this problem. For a batch size of 1, the feature memory accesses, which cannot be reduced via pruning, account for half of all accesses. For a batch size of 32, the feature memory accesses heavily dominate the overall number of accesses, making it difficult to decrease the memory bottleneck of our sparse models.
The \(n\) dimension in Equation 1 is shared by the input and output feature matrices and is not one of the weight matrix dimensions. The size of \(n\) relative to \(m\) and \(k\) determines the appearance of the graphs in Figure 7. The \(n\) dimension scales linearly with both the batch size and the number of spatial locations in the feature data (for both convolution and transformer FFN layers). This suggests that we will see larger speedups from pruning when the model size (\(m\) and \(k\)) is large relative to the batch size and feature sizes (\(n\)).
## 6 Related Work
**Automated Model Compression.** Recent work has explored various approaches for automatically inferring optimal sparsity levels using approaches such as Bayesian Optimization (Joseph et al., 2020) and reinforcement learning (He et al., 2018). Our work differs in two ways: we focus on providing (1) a _visual_ representation of the accuracy and performance landscape for different sparsity patterns and levels, and (2) meaningful estimates of potential inference runtimes to aid deep learning practitioners, performance experts and hardware designers.
**Deep Learning Roofline Models.** The Roofline model has been applied to the deep learning problem space in the past (Yang et al., 2020; Wang et al., 2020; Czaja et al., 2020). However, this work primarily focuses on dense neural networks. Specifically, Wang et al. (2020) extend the Roofline model to deep learning by using latency and compute/bandwidth complexity. Yang et al. (2020) provide a toolkit extension for deep learning to support new precisions, tensor cores, and a tool for measuring performance metrics. Czaja et al. (2020) perform a Roofline analysis of DNNs accounting for non-uniform memory access (NUMA) systems.
|
2309.08181 | Large Language Models for Failure Mode Classification: An Investigation | In this paper we present the first investigation into the effectiveness of
Large Language Models (LLMs) for Failure Mode Classification (FMC). FMC, the
task of automatically labelling an observation with a corresponding failure
mode code, is a critical task in the maintenance domain as it reduces the need
for reliability engineers to spend their time manually analysing work orders.
We detail our approach to prompt engineering to enable an LLM to predict the
failure mode of a given observation using a restricted code list. We
demonstrate that the performance of a GPT-3.5 model (F1=0.80) fine-tuned on
annotated data is a significant improvement over a currently available text
classification model (F1=0.60) trained on the same annotated data set. The
fine-tuned model also outperforms the out-of-the box GPT-3.5 (F1=0.46). This
investigation reinforces the need for high quality fine-tuning data sets for
domain-specific tasks using LLMs. | Michael Stewart, Melinda Hodkiewicz, Sirui Li | 2023-09-15T06:13:01Z | http://arxiv.org/abs/2309.08181v1 | # Large Language Models for Failure Mode Classification: an Investigation
###### Abstract
In this paper we present the first investigation into the effectiveness of Large Language Models (LLMs) for Failure Mode Classification (FMC). FMC, the task of automatically labelling an observation with a corresponding failure mode code, is a critical task in the maintenance domain as it reduces the need for reliability engineers to spend their time manually analysing work orders. We detail our approach to prompt engineering to enable an LLM to predict the failure mode of a given observation using a restricted code list. We demonstrate that the performance of a GPT-3.5 model (F1=0.80) fine-tuned on annotated data is a significant improvement over a currently available text classification model (F1=0.60) trained on the same annotated data set. The fine-tuned model also outperforms the out-of-the box GPT-3.5 (F1=0.46). This investigation reinforces the need for high quality fine-tuning data sets for domain-specific tasks using LLMs.
Technical Language Processing \(\cdot\) Failure Mode \(\cdot\) Large Language Models \(\cdot\) Maintenance
## 1 Introduction
The maintenance of assets plays a critical role in the safety and costs of industrial organisations. One of the key tasks within maintenance is failure mode identification. This task is done by reliability engineers to capture and code failures and other undesirable events. These failure mode codes, together with data such as the cost/production/service impact and the safety and environmental consequences of the event, are used to prioritise improvement work and update maintenance strategy, and can assist product/plant engineers to improve future designs by updating their failure modes and effects analysis. Consistent and reproducible failure mode code assignment is difficult because the observations of each event are captured by field technicians in natural language. For example, consider the following maintenance work order texts:
* pump runs for a while and trip
* engin does not work
* pmp spraying out slurry
* seal leaking
* leak in seal
Each of these work orders contains an observation made by the field technician, such as "does not work", "leaking", and so on. In any maintenance management system there are thousands of these observations, and each needs a failure mode classification (FMC), such as "leaking" or "breakdown", according to an agreed list. The challenge is that each person doing the coding, whether it be the technician generating the work order or the reliability engineer reviewing it, comes with their own mental model of the asset and its behaviour [Sexton et al., 2019]. Further, attention to the task of coding accurately is influenced by factors such as training, managerial support, technological input control and motivation [Murphy, 2009, Unsworth et al., 2011, Molina et al., 2013]. Given the volume, it is too expensive to have university-trained reliability engineers review each of these codes manually. The opportunity for AI to assist in failure mode classification is therefore an active research area [Sexton et al., 2018, Akhbardeh et al., 2020, Sala et al., 2022, Stewart et al., 2022, Usuga-Cadavid et al., 2022].
There has recently been a surge of interest in Large Language Models (LLMs), predominately as the result of the popularity of chatbot interfaces such as ChatGPT1. LLMs such as OpenAI's GPT-3.52 have been trained on massive corpora and thus encapsulate knowledge from a wide variety of domains. It has also been shown that LLMs require little to no fine-tuning, meaning they exhibit excellent performance with barely any annotated training data [Brown et al., 2020]. Rather than focusing on developing manually-annotated datasets to train models (like with more "traditional" text classification models such as Flair [Akbik et al., 2018]), users of LLMs typically employ _prompt engineering_ in order to craft their input prompt to elicit a particular response from the model. As a result of their excellent performance on a wide range of natural language processing tasks, LLMs have already been applied to a variety of domains. Examples include medicine [Singhal et al., 2022, Thirunavukarasu et al., 2023], education [Kasneci et al., 2023], and vehicle accident records [Mumtarin et al., 2023].
Footnote 1: [https://chat.openai.com/](https://chat.openai.com/)
Footnote 2: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models)
However, to the best of our knowledge, no research has yet investigated the use of LLMs within the maintenance domain, let alone specifically for FMC. In light of this research gap, and the potential for automated FMC to enable significant time and cost benefits to industry, we present an investigation into the effectiveness of using Large Language Models for Failure Mode Classification. Our contributions are as follows:
* We investigate the most effective prompt format for performing FMC using an LLM without any fine-tuning.
* We determine whether it is necessary to fine-tune an LLM on a set of annotated data to achieve good FMC performance.
* We provide a comparison between the performance of fine-tuned LLMs and text classification models for FMC.
This paper is structured as follows. We begin by providing an outline of our models, methods and experiments, and detail the dataset that we use for fine-tuning and evaluation. We then present our results, which directly tie in to our contributions above. Finally, we present our conclusion and an outlook to future work.
The source code of this paper is open source and is available on GitHub.
## 2 Methods
The aim of this paper is to evaluate the applicability of Large Language Models (LLMs) to Failure Mode Classification (FMC). In this section we provide an overview of the dataset we are using for our evaluation, as well as the models that we evaluate in Section 3.
### Dataset
The dataset on which we evaluate each model is an extract from the annotated maintenance work order dataset introduced by [Stewart et al., 2022] and available on PapersWithCode3. The data set consists of 502 (observation, label) pairs for training, 62 for validation, and 62 for testing. The observations, which are written in natural language, were extracted from a set of maintenance work orders using Named Entity Recognition (NER). The labels are taken from a set of 22 failure mode codes from ISO 14224 4. Each observation was labelled by a domain expert. Some examples from this dataset are as follows:
Footnote 4: [https://www.iso.org/standard/64076.html](https://www.iso.org/standard/64076.html)
* broken, Breakdown
* leaking fluid, Leaking
* too hot, Overheating
* triping, Electrical
* not starting, Failure to start on demand
This open data set and the model presented in [Stewart et al., 2022] represent the state-of-the-art for FMC in the literature at this point in time and hence are used for comparative purposes.
### Models
We evaluate the following models:
1. **Flair**: A Flair-based [Akbik et al., 2018] text classification model, trained on the annotated dataset.
2. **GPT-3.5**: The off-the-shelf GPT-3.5-Turbo model from OpenAI.
3. **GPT-3.5 (Fine-tuned)**: The GPT-3.5-Turbo model, fine-tuned on the annotated dataset.
The Flair model is a Bidirectional Long Short-Term Memory-based [Hochreiter and Schmidhuber, 1997] text classification model that takes a sequence of text as input, and predicts a single label. This is the same model as used in [Stewart et al., 2022], and further implementation details are available in the respective paper.
The first layer of the model, the embedding layer, was pre-trained by the Flair developers on corpora of web, Wikipedia and subtitle data, and thus the model has little innate knowledge of maintenance. The model was trained by [Stewart et al., 2022] on the dataset of 502 (observation, label) pairs and validated on the 62-pair validation set. In contrast to the GPT-based models, the computational requirements of training and using this model are low enough that it can be trained on most desktop computers. This also means it can be used offline, and it is thus appropriate for handling sensitive data.
The LLM-based models are based on OpenAI's GPT-3.5 [Brown et al., 2020]5, the model behind ChatGPT6. The GPT-3.5 model is "off-the-shelf" in that we are using the model without any form of fine-tuning. We are relying on the model's knowledge of maintenance that it has gleaned from its massive training corpora in order to task it to perform failure mode classification. The GPT-3.5 (Fine-tuned) model, on the other hand, is fine-tuned on the annotated dataset of 502 (observation, label) pairs, and validated on the 62-pair validation set.
Footnote 5: GPT-4.0 was not available for fine-tuning as of the time of writing, hence the decision to use GPT-3.5.
### Data preparation
[{ "role": "system", "content": "Determine the failure mode of the observation provided by the user." }, { "role": "user", "content": "too hot" }, { "role": "assistant", "content": "Overheating" }]
Listing 1: An example prompt that is fed into the GPT-3.5 and GPT-3.5 (Fine-tuned) models. The role of the assistant is only used during fine-tuning.
The default behaviour of the GPT-based models is to act as a chatbot, and thus they will not respond with a failure mode code for a given observation unless the instruction to do so is included as part of the prompt. Structuring an input prompt to elicit a particular response from a large language model is known as _prompt engineering_.
The latest versions of the GPT-based models require a three-part prompt. The system-level prompt dictates the desired response format of the model. For example, one can use this prompt to ask the model to reply in a sarcastic tone, or to reply with a one-word answer, and so on. The user-level prompt is the input from the user. Finally, the
assistant-level prompt is the desired output from the LLM (this is used when fine-tuning to inform the model of the expected output).
To create the prompts, we wrote Python code to iterate through the annotated CSV-based dataset and convert each (observation, label) pair into a prompt as shown in Listing 1. The same system-level prompt is used for each input to the model, and describes the task to perform (failure mode classification). We use the user-level prompt to provide the model with the observation that we want it to label. During the fine-tuning of GPT-3.5 (Fine-tuned), we include an assistant-level prompt that informs the model of the desired output for each observation (i.e. the failure mode). The design of these prompts was based on the best practices listed in the OpenAI Documentation7.
Footnote 7: [https://platform.openai.com/docs/guides/fine-tuning/fine-tuning-examples](https://platform.openai.com/docs/guides/fine-tuning/fine-tuning-examples)
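To make this conversion concrete, the following is a minimal sketch (not our actual script) of how each (observation, label) pair can be turned into the three-part chat format of Listing 1 and written out as a JSONL file for fine-tuning. The CSV file name and its column headers are assumptions made for illustration.

```python
import csv
import json

SYSTEM_PROMPT = "Determine the failure mode of the observation provided by the user."

def make_example(observation: str, label: str) -> dict:
    # One training example in the three-part chat format of Listing 1:
    # system instruction, user observation, assistant failure mode.
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": observation},
            {"role": "assistant", "content": label},
        ]
    }

def csv_to_jsonl(csv_path: str, jsonl_path: str) -> None:
    # Convert (observation, label) rows into one JSON object per line, the
    # usual format for chat-model fine-tuning.
    with open(csv_path, newline="") as f_in, open(jsonl_path, "w") as f_out:
        for row in csv.DictReader(f_in):
            f_out.write(json.dumps(make_example(row["observation"], row["label"])) + "\n")

csv_to_jsonl("fmc_train.csv", "fmc_train.jsonl")
```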
In our experiments we also investigate the necessity to add the following two texts to the system-level prompt:
* In Section 3.1, we include the sentence "Your answer should contain only the failure mode and nothing else." to instruct the language model to avoid outputting unnecessary text (e.g. "The failure mode is...", etc.).
* In Section 3.2 we include "Valid failure modes are: " followed by a newline-separated list of valid labels from the dataset. This is an attempt to ensure that the model does not come up with its own failure modes, but instead outputs a failure mode code from the prescribed list (a sketch of this constrained prompt is given after this list).
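As a rough illustration of how such a constrained prompt can be sent to the model, the sketch below assembles the system-level prompt and queries the chat completions endpoint with the temperature set to 0 (run-to-run variation is discussed later in the paper). The abbreviated label list, the model name string and the client interface shown (OpenAI's v1-style Python SDK) are assumptions for illustration rather than a record of our exact setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Abbreviated, illustrative subset of the ISO 14224 failure mode codes.
VALID_LABELS = ["Breakdown", "Leaking", "Overheating", "Electrical",
                "Failure to start on demand", "Other"]

SYSTEM_PROMPT = (
    "Determine the failure mode of the observation provided by the user. "
    "Your answer should contain only the failure mode and nothing else. "
    "Valid failure modes are:\n" + "\n".join(VALID_LABELS)
)

def classify(observation: str, model: str = "gpt-3.5-turbo") -> str:
    # temperature=0 reduces, but does not eliminate, run-to-run variation.
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": observation},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify("pmp spraying out slurry"))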
### Evaluation metrics
In the same manner as [Stewart et al., 2022], we evaluate each model using Micro F1 and Macro F1 score. Micro F1 calculates an F1-Score by adding the true positives (TPs), false positives (FPs) and false negatives (FNs) from all class labels together and then calculating F1-Score:
\[MicroF1=F1_{(class_{1}+class_{2}+\ldots+class_{n})} \tag{1}\]
Macro F1, on the other hand, simply averages the F1-Score of each class. Given that \(N\) is the number of class labels, it is calculated as follows:
\[MacroF1=\frac{\sum_{n\in N}F1_{class_{n}}}{N} \tag{2}\]
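A minimal sketch of how both scores can be computed with scikit-learn; the label lists shown are placeholders rather than the actual 62-record test set.

```python
from sklearn.metrics import f1_score

y_true = ["Leaking", "Breakdown", "Overheating", "Leaking"]   # gold labels
y_pred = ["Leaking", "Breakdown", "Electrical", "Leaking"]    # model predictions

print("Micro-F1:", f1_score(y_true, y_pred, average="micro"))
print("Macro-F1:", f1_score(y_true, y_pred, average="macro"))
```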
## 3 Results
This section aims to answer the following questions:
1. How best to use an off-the-shelf Large Language Model (LLM) to perform Failure Mode Classification (FMC)?
2. Is it necessary to fine-tune the LLM to perform FMC?
3. Are LLMs more effective at FMC than text classification models?
4. What are some barriers one may face when using LLMs for FMC?
### How best to use an off-the-shelf LLM to perform Failure Mode Classification?
To address the first research question we begin by investigating the use of a simple system-level prompt of "Determine the failure mode of the observation provided by the user.". Upon feeding this prompt into the model, along with
| **Observation** | **LLM output** |
|---|---|
| runs for a while and trip | The failure mode of the observation "runs for a while and trips" suggests an electrical failure. This could be due to an overload, short circuit, or other electrical issue that causes the equipment to shut down or trip a circuit breaker. |
| does not work | The failure mode of the observation "does not work" is a breakdown. |
| spilling | The failure mode of the observation "spillage" is a leakage. |
| spraying out slurry | The failure mode of the observation "spraying out slurry" is leaking. |

Table 1: Some examples of predictions made by the off-the-shelf GPT-3.5-Turbo on a sample of the test data. The system-level prompt is "Determine the failure mode of the observation provided by the user."
the user-level prompt (the observation, e.g. "runs for a while and trip"), the LLM produces outputs as shown in Table 1. These outputs, which are conversational in nature, are not machine-readable and are therefore not applicable to downstream analysis. A more specific prompt is needed to perform FMC.
In light of this, we next add the phrase "Your answer should contain only the failure mode and nothing else." to the system-level prompt. Adding this sentence to the prompt results in the model predicting a single failure mode for each observation, as shown in Table 2. However, there are several notable issues with the outputs of the model after adding this phrase. Firstly, despite the addition of the phrase in the prompt, the model still occasionally adds additional text to its response. One such example is its response to the phrase "failed electrical", to which it also adds "Failure mode: " prior to the actual classification. It also occasionally disregards the instruction when it is not capable of recognising a particular failure mode, for example in its classification of "high earth reading".
While the LLM is capable of predicting failure modes using this prompt, they are not aligned with any particular failure mode ontology. Downstream analysis using these failure modes is thus not possible, due to the sheer number of possible failure modes and inconsistency between them. For example, the model predicts both "Leakage" and "Leaking", which are the same failure mode written two different ways. One can liken the LLM's predicted failure modes to that which might be produced by a layperson, i.e. not a domain expert.
The non fine-tuned model also has difficulties producing consistent failure mode labels when dealing with uncertainty. When the model is unable to classify the observation, it responds in a variety of different ways, for example "Insufficient information", "N/A", "None", "No failure mode detected.", "No failure mode provided.", and so on. Attempting to resolve all possible variations of these phrases into a single classification (such as "Unknown" or "Other") is a non-trivial task, and thus the outputs of this model are not readily applicable to downstream tasks.
In an attempt to solve this issue we add a final phrase to the prompt: "Valid failure modes include: " followed by a newline-separated list of the failure mode labels appearing across the entire dataset. We found that this addition generally causes the model to behave as expected. However, it occasionally hallucinates labels: for example, it predicts the label "Fail to open" for "sticking shu", and "Fail to adjust" for "cant be adjusted". It also has issues with label consistency - for example, it predicts both "Fail to function" and "Failure to function". Similarly to the previous attempt without constraining the label space, this attempt at using the LLM directly without fine-tuning is not directly applicable to failure mode analysis as a result of these issues.
In summary we have demonstrated that it is possible to engineer the prompt to enable the LLM to predict failure mode codes without any fine-tuning. However, these outputs are not grounded in any particular ontology and are inconsistent.
### Is it necessary to fine-tune the LLM to perform Failure Mode Classification?
We now aim to determine whether fine-tuning the LLM on a purpose-built dataset is necessary, or whether similar performance can be achieved without fine-tuning. We focus our attention on a comparison between the GPT-3.5 model and GPT-3.5 (Fine-tuned). The former model has been fed with the prompt discussed at the end of Section 3.1, i.e. it constrains the model to predict only the failure mode and nothing else, and also provides it with a list of the valid failure modes from the dataset. The latter model has been fine-tuned on the 502 (observation, label) pairs in the training dataset, and its prompt does not contain the aforementioned constraints (as they are not necessary after fine-tuning).
Table 3 shows the results of each model on the test dataset. It is clear that fine-tuning has a significant impact on performance, as the Micro-F1 score jumps from 0.46 to 0.81 between the non fine-tuned and fine-tuned models respectively. The results of the non fine-tuned model indicate that it does possess knowledge of maintenance, though, as it was capable of getting nearly half of all predictions correct without any form of fine-tuning.
| **Observation** | **LLM output** |
|---|---|
| runs for a while and trip | Overheating |
| very stiff to operate | Stiff operation |
| requires rebuild | Noisy operation |
| has no equipment earth | N/A |
| high earth reading | No failure mode can be determined from the given observation. |
| failed electrical | Failure mode: Electrical failure |

Table 2: Some examples of predictions made by the off-the-shelf GPT-3.5-Turbo on a sample of the test data. The system-level prompt is "Determine the failure mode of the observation provided by the user. Your answer should contain only the failure mode and nothing else."
We also tested the effectiveness of "few-shot learning", i.e. providing a list of example (observation: failure mode) pairs to the model as part of the system-level prompt as opposed to a list of only the valid failure modes. We found that the results were near identical to the non fine-tuned model, and thus did not include these results in the table for brevity. Overall, the results show that fine-tuning is necessary to achieve strong performance. This demonstrates the importance of high quality annotated data when applying LLMs to maintenance work orders.
### Are LLMs more effective at failure mode classification than text classification models?
To answer this final research question we focus our attention to a comparison between the Flair text classification model from (Stewart et al., 2022) and the GPT-3.5 models. As shown in Table 3, the LLM significantly outperforms Flair, but only after fine-tuning. Without fine-tuning, Flair exhibits much stronger performance, indicating the necessity of annotated training data to be able to perform this particular task.
After fine-tuning on the annotated data, the LLM performs significantly better than Flair. It also tends to fare better on the minority classes, such as "Failure to start on demand", "Failure to stop on demand", etc., which we argue can be attributed to the underlying knowledge made available as part of the LLM's lengthy training process on large corpora.
In summary, our results show this LLM is more effective at FMC than the text classification model, but only when the LLM is fine-tuned to perform this task.
### What are some barriers one may face when using LLMs for FMC?
Overall we found the process of using and fine-tuning GPT-3.5 fairly straightforward, though we experienced a couple of issues that are worth noting. Firstly, the non-deterministic nature of LLMs means that they can produce different outputs given the same input. There is a built-in temperature parameter which can be set to 0 to reduce the likelihood of this occurring, but in our experience we were still receiving slightly different results each time we ran our experiments. This effect is most noticeable in the non fine-tuned model with no prompt engineering (i.e. from Section 3.1), and has less of an effect when the model is informed of the list of valid labels.
We also noticed that during inference, the OpenAI API would occasionally refuse our requests due to being overloaded, causing us to have to start the inference process again. This was not a significant problem for our small 62-record test set, but it would be more problematic when running inference over a large dataset.
| **Failure mode** | **Support** | **Flair** | **GPT-3.5** | **GPT-3.5 (FT)** |
|---|---|---|---|---|
| Abnormal instrument reading | 1 | 1.00 | 1.00 | 0.00 |
| Breakdown | 7 | 0.37 | 0.44 | **1.00** |
| Contamination | 1 | 1.00 | 1.00 | 1.00 |
| Electrical | 6 | 0.67 | 0.50 | 0.67 |
| Erratic output | 1 | 0.00 | 0.00 | 0.00 |
| Fail to function | 3 | **0.50** | 0.00 | 0.00 |
| Failure to start on demand | 1 | 0.40 | 0.33 | **1.00** |
| Failure to stop on demand | 1 | 0.00 | 1.00 | **1.00** |
| High output | 1 | 0.00 | 1.00 | 1.00 |
| Leaking | 3 | 0.67 | 0.86 | 1.00 |
| Low output | 2 | 0.00 | 0.00 | 0.00 |
| Minor in-service problems | 17 | 0.73 | 0.11 | **1.00** |
| Other | 2 | **0.67** | 0.40 | 0.00 |
| Overheating | 4 | 1.00 | 1.00 | 1.00 |
| Plugged / choked | 6 | 0.67 | 0.25 | **1.00** |
| Spurious stop | 1 | 0.00 | 0.00 | 0.00 |
| Structural deficiency | 3 | 0.60 | 0.57 | **1.00** |
| Vibration | 2 | 0.67 | 1.00 | 1.00 |
| **Micro-F1** | | 0.60 | 0.46 | **0.81** |
| **Macro-F1** | | 0.46 | 0.53 | **0.62** |

Table 3: A comparison of the Flair model (Stewart et al., 2022) and the GPT-3.5 LLMs (non-fine-tuned and fine-tuned) on the test dataset. Support is the number of times the label appears in the test dataset. The results of the top-performing model (when there are no ties) are in **bold**.
Finally, we note that the overall fine-tuning and inference process was fairly inexpensive, costing approximately $1 USD for each of our experiments. This shows that cost is not a barrier for achieving an acceptable level of performance on failure mode classification using LLMs.
## 4 Conclusion
In this paper we have demonstrated the use of Large Language Models (LLMs) to perform Failure Mode Classification (FMC). We have investigated the use of prompt engineering to determine the best prompt to feed into an LLM, such as GPT-3.5, in order to perform FMC without any fine-tuning. However, we have also found that fine-tuning an LLM is necessary to obtain significantly better performance on FMC than is achieved by text classification models such as Flair. The fine-tuning is performed using a relatively small, high-quality annotated data set.
The annotated data set we used for fine-tuning is publicly available. It maps observations to failure modes based on ISO 14224 classes. For the benefit of industry users wishing to apply a model fine-tuned on this data set to their own data, we note that they will need to preprocess their maintenance work orders to extract observations. An example of a code pipeline to do this is given in (Stewart et al., 2022).
One of the key drawbacks of OpenAI's LLMs is that to be able to fine-tune the models, one must upload potentially sensitive data to OpenAI's servers. This is a non-issue for companies with the capability to run and fine tune LLMs in their own secure environments, but presents complications for others. In light of this, in the future we aim to investigate the performance of offline large language models, such as LLaMA (Touvron et al., 2023), on failure mode classification. We also plan to explore how well the Flair-based model performs on this task when it is fed with GPT-based embeddings. Finally, we also plan to release a larger annotated dataset than the one proposed by (Stewart et al., 2022), which will enable further fine-tuning and improved evaluation quality.
#### Acknowledgments
This research is supported by the Australian Research Council through the Centre for Transforming Maintenance through Data Science (grant number IC180100030), funded by the Australian Government.
|
2307.16613 | Semiclassical approximation of the Wigner function for the canonical
ensemble | The Weyl-Wigner representation of quantum mechanics allows one to map the
density operator in a function in phase space - the Wigner function - which
acts like a probability distribution. In the context of statistical mechanics,
this mapping makes the transition from the classical to the quantum regimes
very clear, because the thermal Wigner function tends to the Boltzmann
distribution in the high temperature limit. We approximate this quantum phase
space representation of the canonical density operator for general temperatures
in terms of classical trajectories, which are obtained through a Wick rotation
of the semiclassical approximation for the Weyl propagator. A numerical scheme
which allows us to apply the approximation for a broad class of systems is also
developed. The approximation is assessed by testing it against systems with one
and two degrees of freedom, which shows that, for a considerable range of
parameters, the thermodynamic averages are well reproduced. | Marcos Gil de Oliveira, Alfredo Miguel Ozorio de Almeida | 2023-07-31T12:44:23Z | http://arxiv.org/abs/2307.16613v2 | # Semiclassical approximation of the Wigner function for the canonical ensemble
###### Abstract
The Weyl-Wigner representation of quantum mechanics allows one to map the density operator to a function in phase space -- the Wigner function -- which acts like a probability distribution. In the context of statistical mechanics, this mapping makes the transition from the classical to the quantum regimes very clear, because the thermal Wigner function tends to the Boltzmann distribution in the high temperature limit. We approximate this quantum phase space representation of the canonical density operator for general temperatures in terms of classical trajectories, which are obtained through a Wick rotation of the semiclassical approximation for the Weyl propagator. A numerical scheme which allows us to apply the approximation for a broad class of systems is also developed. The approximation is assessed by testing it against systems with one and two degrees of freedom, which shows that, for a considerable range of parameters, the thermodynamic averages are well reproduced.
**Keywords:** Weyl-Wigner representation, canonical ensemble, semiclassical approximations, Kerr system, Morse potential, Nelson potential
## 1 Introduction
Quantum and classical statistical mechanics differ both in their formulation and in their results. It is not by chance that the first evidence for quantum mechanics is the black-body spectrum derived by Planck [1]. The canonical ensemble, which describes a system in equilibrium with a thermal bath of temperature \(T\), is characterized classically through a probability distribution over phase space, the Boltzmann distribution
\[P_{\beta}(\mathbf{x})=\frac{1}{Z_{c}}e^{-\beta H_{c}(\mathbf{x})}, \tag{1}\]
where \(\mathbf{x}=(p_{1},\ldots,p_{d},q_{1},\ldots,q_{d})\) is a point in the phase space spanned by the coordinates \(q_{j}\) and the momenta \(p_{j}\), \(H_{c}\) is the classical Hamiltonian of the system, \(Z_{c}\) is the classical partition function and \(\beta=1/kT\), \(k\) being Boltzmann's constant. The _quantum_ canonical ensemble, on the other hand, is described by the thermal density operator
\[\hat{\rho}_{\beta}=\frac{1}{Z}e^{-\beta\hat{H}} \tag{2}\]
where \(\hat{H}\) is the Hamiltonian operator and \(Z\) is the quantum partition function. Both (1) and (2) allow one to calculate thermodynamic averages, and although the results agree for high temperatures, there is a considerable discrepancy for low ones. With the introduction, by Wigner, of his eponymous function [2], the differences between these two formulations diminished, as it allows one to map the thermal density operator to a function over phase space that works as if it were a probability distribution, though it strongly deviates from (1) in the low temperature regime. This proposal further evolved to give a complete formulation of quantum mechanics in phase space [3, 4], which we call the Weyl-Wigner representation. The high temperature limit of the resulting semiclassical approximation of the thermal Wigner function coincides with the classical distribution (1).
A further advantage of the Weyl-Wigner formalism is that common observables with classical correspondence are directly represented by the classical phase space function, or a function that is semiclassically close to it. Thus, there is no limitation to Hamiltonians with a quadratic momentum dependence: any real phase space function will do. Moreover, the expectation of the observable is evaluated by a phase space integral, identical to its classical counterpart, except that the Liouville distribution is replaced by the Wigner function. In contrast, the phase space reflection operator,
\[\hat{R}_{\mathbf{x}}=\int\mathrm{d}^{N}\mathbf{\xi}_{\mathbf{q}}\ |\mathbf{q}+\mathbf{\xi}_{\mathbf{q}}\rangle\langle\mathbf{q}-\mathbf{\xi}_{\mathbf{q}}|\ \exp\left[-\frac{2i}{\hbar}\mathbf{p}\cdot\mathbf{\xi}_{\mathbf{q}}\right]. \tag{3}\]
which corresponds classically to the canonical reflection through a phase space point, is also a quantum observable. Indeed, this displaced parity operator has real eigenvalues \(\pm 1\), which makes it as quantum an observable as a spin. The essential role that this operator plays in the Wigner-Weyl representation, uncovered by Grossmann [5] and Royer [6] identifies its expectation with the Wigner function itself:
\[W(\mathbf{x})\equiv\frac{1}{(2\pi\hbar)^{N}}\ \mathrm{tr}\ \hat{\rho}\ \hat{R}_{\mathbf{x}}. \tag{4}\]
In short, the value of the Wigner function at every point in phase space supplies the expectation of the reflection operator for that point, which is exactly how it has been verified experimentally [7], by counting even and odd outcomes of phase space reflections on identically prepared states.
In this paper, we will explore the fact that, by evaluating a propagator \(\hat{U}_{t}=e^{-it\hat{H}/\hbar}\) at an imaginary time \(-i\theta\), where \(\theta=\beta\hbar\) is the _thermal time_, we obtain the operator \(\hat{U}_{-i\theta}=e^{-\beta\hat{H}}\), which is proportional to the thermal density operator (2). This is the so called Wick rotation [8, 9]. We will employ this relation, together with a semiclassical approximation for the propagator, which expresses it in terms of _classical_
trajectories, to obtain a semiclassical approximation for the canonical ensemble. In principle, it provides a powerful method for evaluating the thermal density operator at lower temperatures, even for many degrees of freedom, because classical trajectories are computed in parallel. The present initial exploration is limited to two degrees of freedom.
The complexification of the Hamiltonian to adapt it to a thermal, rather than a real evolution is already well established in semiclassical calculations, mainly within the chemical literature [10, 11, 12, 13, 14, 15, 16] and [17, 18]. Even though the various alternative propagators are also supported by trajectories in phase space, the end result is the position density matrix. Then a comparison with the classical distribution depends on Wigner's symmetrized Fourier transform over phase space. Furthermore, the complexification is confined to the momentum, which restricts the Hamiltonian to be the sum of a quadratic kinetic term with a potential energy, which excludes even a simple magnetic field. In contrast, the complexification employed here is necessarily much less simple (even in the case of the quadratic momentum dependence favoured by the position representation), so as to accommodate arbitrary Hamiltonians, for which the real time evolution is inaccessible to the differential Schrödinger equation. Thus, together with computational tests for standard Hamiltonians, we test the thermal averages of Birkhoff normal forms [19, 20], which include the quartic Kerr Hamiltonian [21], in its turn the unit cell of the many-body Bose-Hubbard Hamiltonian [22].
This paper is a follow-up to [23], where the core results of our current approach were first proposed. Here, we bridge the gaps that remained, which then allows us to devise a computational scheme that opens the possibility of applying our approximation to a vast number of cases. These new developments were achieved during a master's degree, and first appeared in the thesis [24].
The presentation is then structured as follows: in section 2 we discuss elements of the Weyl-Wigner representation and introduce a semiclassical approximation for
the propagator. In section 3 we particularize this discussion for the canonical ensemble, and show how the approximation for the propagator generates an approximation for the thermal density operator through the Wick rotation. In section 4, we apply our approximations for normal forms, which are a class of systems for which one has explicit expressions for the required quantities. In section 5 we reformulate the calculation of the trajectories in terms of a duplicated phase space, which is more amenable to a computational treatment, and develop a complete numerical method that allows us, in principle, to apply our approximation for systems with an arbitrary hamiltonian. In sections 6 and 7, we use this numerical scheme to apply the approximation to the Morse system, which has one degree of freedom, and to the Nelson system, which has two.
## 2 The Weyl-Wigner representation and semiclassical approximations
The Weyl-Wigner representation of quantum mechanics is based on the reflection operators
\[\hat{R}_{\mathbf{x}}=\int\frac{d\mathbf{\xi}}{(4\pi\hbar)^{d}}\exp\left[\frac{i}{ \hbar}\mathbf{\xi}\wedge(\hat{\mathbf{x}}-\mathbf{x})\right], \tag{5}\]
which corresponds classically to the transformation \(R_{\mathbf{x}}:\mathbf{x}_{-}\mapsto 2\mathbf{x}-\mathbf{x}_{-}\). Here, \(\hat{\mathbf{x}}=(\hat{p}_{1},\ldots,\hat{p}_{d},\hat{q}_{1},\ldots,\hat{q}_{d})\) is a vector formed by the position and momentum operators and \(\wedge\) denotes the _wedge product_, defined by \(\mathbf{\xi}\wedge\mathbf{x}=(\mathbf{J}\mathbf{\xi})\cdot\mathbf{x}\), with
\[\mathbf{J}=\left(\begin{array}{c|c}\mathbf{0}&\mathbf{-I}_{d}\\ \mathbf{I}_{d}&\mathbf{0}\end{array}\right), \tag{6}\]
where \(\mathbf{I}_{d}\) denotes the \(d\times d\) identity matrix.
The Wigner symbol \(O(\mathbf{x})\) of an operator \(\hat{O}\) is then given by
\[O(\mathbf{x})=2^{d}\text{Tr}\ \left(\hat{O}\hat{R}_{\mathbf{x}}\right). \tag{7}\]
Furthermore, the Wigner function is a quantity proportional to the Wigner symbol of the density operator
\[W(\mathbf{x})=\frac{\rho(\mathbf{x})}{(2\pi\hbar)^{d}}=\frac{1}{(\pi\hbar)^{d}}\text{ Tr}\ \left(\hat{\rho}\hat{R}_{\mathbf{x}}\right) \tag{8}\]
and can be used to calculate quantum averages
\[\left\langle\hat{O}\right\rangle=\text{Tr}\ \left(\hat{\rho}\ \hat{O}\right)=\int d \mathbf{x}W\left(\mathbf{x}\right)O\left(\mathbf{x}\right) \tag{9}\]
as if it were a probability distribution.
As an example, we observe that the Wigner functions for the eigenstates of the harmonic oscillator, defined by a hamiltonian \(\hat{H}=\omega\left(\hat{p}^{2}+\hat{q}^{2}\right)/2\), are given by
\[W_{n}(\mathbf{x})=\frac{(-1)^{n}}{\pi\hbar}e^{-\mathbf{x}^{2}/\hbar}L_{n}\left(\frac{ 2\mathbf{x}^{2}}{\hbar}\right), \tag{10}\]
where \(L_{n}\) is the \(n\)th Laguerre polynomial [3].
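As a minimal numerical illustration of (9) and (10), assuming \(\hbar=\omega=1\), the sketch below evaluates \(W_{n}\) on a phase-space grid and checks that it is normalized and that the average of the Wigner symbol \(H(\mathbf{x})=\omega\mathbf{x}^{2}/2\) reproduces the eigenenergies \(\hbar\omega(n+1/2)\).

```python
import numpy as np
from scipy.special import eval_laguerre

hbar, omega = 1.0, 1.0

def wigner_n(p, q, n):
    # Wigner function of the n-th harmonic oscillator eigenstate, eq. (10).
    x2 = p**2 + q**2
    return (-1)**n / (np.pi * hbar) * np.exp(-x2 / hbar) * eval_laguerre(n, 2 * x2 / hbar)

grid = np.linspace(-6, 6, 400)
p, q = np.meshgrid(grid, grid, indexing="ij")
dA = (grid[1] - grid[0])**2

H = 0.5 * omega * (p**2 + q**2)              # Wigner symbol of the Hamiltonian
for n in range(4):
    W = wigner_n(p, q, n)
    print(n,
          round(W.sum() * dA, 4),            # normalization, should be 1
          round((W * H).sum() * dA, 4))      # eq. (9), should be hbar*omega*(n + 1/2)
```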
A striking feature of the Wigner representation is the fact that the Wigner symbol of operators of the form \(f\left(\hat{\mathbf{p}}\right)+g\left(\hat{\mathbf{q}}\right)\) is simply \(f\left(\mathbf{p}\right)+g\left(\mathbf{q}\right)\), which is exactly the corresponding classical variable. This is not a general result, as there can be corrections in the form of power series in \(\hbar\). A useful formula for calculating more complicated Wigner
symbols is the Groenewold rule [25]
\[\begin{split} O_{2}\cdot O_{1}\left(\mathbf{x}\right)&=O_{2} \left(\mathbf{x}+\frac{i\hbar}{2}\mathbf{J}\frac{\partial}{\partial\mathbf{x}}\right)O_{1} \left(\mathbf{x}\right)\\ &=O_{1}\left(\mathbf{x}-\frac{i\hbar}{2}\mathbf{J}\frac{\partial}{ \partial\mathbf{x}}\right)O_{2}\left(\mathbf{x}\right).\end{split} \tag{11}\]
The Wigner symbol \(U_{t}(\mathbf{x})\) of the propagator
\[\hat{U}_{t}=e^{-it\hat{H}/\hbar} \tag{12}\]
is called the Weyl propagator, and, for short enough times, has the semiclassical approximation [25]
\[U_{t}(\mathbf{x})_{SC}=\left|\det\left(\mathbf{I}_{2d}\pm\mathbf{JB}_{t}\right)\right|^{1/2 }\exp\left[\frac{i}{\hbar}S_{t}(\mathbf{x})\right]. \tag{13}\]
Here, \(S_{t}(\mathbf{x})=S(\mathbf{x},t)\) is the so called (centre) action, and
\[\mathbf{B}_{t}=\frac{1}{2}\frac{\partial^{2}S_{t}}{\partial\mathbf{x}^{2}} \tag{14}\]
is proportional to its hessian. The action has the role of a generating function for the classical hamiltonian flow. It indirectly specifies the transformation \(\mathbf{x}_{-}\mapsto\mathbf{x}_{+}\) by giving the chord
\[\mathbf{\xi}=\mathbf{x}_{+}-\mathbf{x}_{-} \tag{15}\]
in terms of the centre
\[\mathbf{x}=\frac{\mathbf{x}_{-}+\mathbf{x}_{+}}{2} \tag{16}\]
through the relation
\[\mathbf{\xi}(\mathbf{x},t)=-\mathbf{J}\frac{\partial S_{t}}{\partial\mathbf{x}}. \tag{17}\]
As going back in time simply reverses the hamiltonian flow, we must have \(\boldsymbol{\xi}(\boldsymbol{x},t)=-\boldsymbol{\xi}(\boldsymbol{x},-t),\) from which we conclude that the centre action must be an odd function of \(t.\) Furthermore, Hamilton's equations
\[\dot{\boldsymbol{x}}=\boldsymbol{J}\frac{\partial H}{\partial\boldsymbol{x}} \tag{18}\]
imply that, for short times \(t,\) we have
\[\boldsymbol{\xi}\approx t\boldsymbol{J}\frac{\partial H}{\partial\boldsymbol{x }}, \tag{19}\]
from which we get
\[S(\boldsymbol{x},t)=-tH(\boldsymbol{x})+\mathcal{O}\left(t^{3}\right). \tag{20}\]
In general, the action is given by
\[S(\boldsymbol{x},t)=\Delta(\boldsymbol{x},t)-Et \tag{21}\]
where \(\Delta\) is the symplectic area of the region between the trajectory and the chord and \(E\) is the energy of the trajectory.
As an example for quadratic hamiltonians
\[H(\boldsymbol{x})=\frac{1}{2}\boldsymbol{x}\cdot\boldsymbol{\mathcal{H}}_{0} \boldsymbol{x} \tag{22}\]
the action is also quadratic and given by
\[S_{t}(\boldsymbol{x})=\boldsymbol{x}\cdot\boldsymbol{B}_{t}\boldsymbol{x}; \quad\boldsymbol{B}_{t}=\boldsymbol{J}\tanh\left(\frac{t}{2}\boldsymbol{J} \boldsymbol{\mathcal{H}}_{0}\right). \tag{23}\]
It is important to observe that, for this class of systems, the semiclassical approximations are actually exact [25]. For a harmonic oscillator with frequency \(\omega\), we have \(\mathbf{\mathcal{H}}_{0}=\omega\mathbf{I}_{2d}\), and we obtain an exact Weyl propagator
\[U_{t}(\mathbf{x})=\sec\left(\omega t/2\right)\exp\left[-\frac{i}{\hbar}\tan\left( \omega t/2\right)\mathbf{x}^{2}\right]. \tag{24}\]
We see that, when \(\omega t\rightarrow(2n+1)\pi\), we have
\[\left|\det\left(\mathbf{I}_{2d}\pm\mathbf{JB}_{t}\right)\right|^{1/2}=\sec\left( \omega t/2\right)\rightarrow\infty. \tag{25}\]
The set of points where this divergence occurs is called a caustic, and it signals the breakdown of the description of the canonical transformation by the centre generating function. For the harmonic oscillator, the hamiltonian flow at these instants is simply a reflection \(\mathbf{x}_{-}\mapsto\mathbf{x}_{+}=-\mathbf{x}_{-}\), and therefore, for every pair \((\mathbf{x}_{-},\mathbf{x}_{+})\) we get the same centre \(\mathbf{x}=\mathbf{0}\). In this case, then, the caustics are the entire phase space, but, for non-quadratic hamiltonians, these divergences may be restricted to a lower dimensional sub-manifold. In general, after crossing a caustic, there may be more than one chord for each centre, and the approximation for the propagator becomes a sum of terms like (13), where one must include an extra _Maslov phase_ in the exponents [26, 27].
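A small numerical check of (23)-(25), assuming \(\hbar=1\) and a single degree of freedom, and using scipy's matrix hyperbolic tangent for \(\mathbf{B}_{t}\):

```python
import numpy as np
from scipy.linalg import tanhm

omega, t = 1.0, 0.7
J = np.array([[0.0, -1.0], [1.0, 0.0]])
H0 = omega * np.eye(2)                        # harmonic oscillator, eq. (22)

B = J @ np.real(tanhm(0.5 * t * J @ H0))      # eq. (23)

# For H0 = omega*I one gets B_t = -tan(omega t / 2) I, so that
# S_t(x) = x.B_t x = -tan(omega t / 2) x^2, in agreement with eq. (24).
print(np.allclose(B, -np.tan(omega * t / 2) * np.eye(2)))

# The amplitude |det(I + J B_t)|^{1/2} equals sec(omega t / 2) and blows up
# as omega*t approaches pi, the first caustic signalled by eq. (25).
amp = abs(np.linalg.det(np.eye(2) + J @ B)) ** 0.5
print(np.isclose(amp, 1.0 / np.cos(omega * t / 2)))
```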
It is possible to show [25] that the jacobian
\[\mathbf{M}_{t}=\frac{\partial\mathbf{x}_{+}}{\partial\mathbf{x}_{-}} \tag{26}\]
of the hamiltonian flow, which is a symplectic matrix [28], is related to \(\mathbf{B}_{t}\) through the Cayley parametrization [19]
\[\mathbf{M}_{t}=\frac{\mathbf{I}_{2d}-\mathbf{JB}_{t}}{\mathbf{I}_{2d}+\mathbf{JB}_{t}}, \tag{27}\]
allowing us to rewrite (13) as
\[U_{t}(\mathbf{x})_{SC}=\frac{2^{d}}{\left|\det\left(\mathbf{I}_{2d}+\mathbf{M}_{t}\right) \right|^{1/2}}\exp\left[\frac{i}{\hbar}S_{t}(\mathbf{x})\right], \tag{28}\]
which will be the most convenient form of the semiclassical propagator to work with.
## 3 The canonical ensemble in phase space
As mentioned in the introduction, this work is based on the evaluation of the semiclassical approximation for the propagator (28), at the imaginary time \(t=-i\theta\), \(\theta=\hbar\beta\) being the thermal time. We then obtain a semiclassical approximation for \(e^{-\beta\hat{H}}(\mathbf{x})\):
\[e^{-\beta\hat{H}}(\mathbf{x})_{SC}=\frac{2^{d}}{\left|\det\left(\mathbf{I}_{2d}+\mathbf{M }_{-i\theta}\right)\right|^{1/2}}\exp\left[\frac{1}{\hbar}S_{\theta}^{E}(\bm {x})\right], \tag{29}\]
where we have defined the euclidean action \(S_{\theta}^{E}=iS_{-i\theta}\), which is necessarily real, as \(S\) is an odd function of \(t\). For the harmonic oscillator, we get, using (24),
\[e^{-\beta\hat{H}}(\mathbf{x})=\mbox{sech}\left(\omega\theta/2\right)\exp\left[- \frac{1}{\hbar}\mbox{tanh}\left(\omega\theta/2\right)\mathbf{x}^{2}\right]. \tag{30}\]
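As a quick sanity check, assuming \(\hbar=\omega=1\), the phase-space integral of (30) divided by \(2\pi\hbar\) reproduces the exact harmonic-oscillator partition function at any temperature, as it must, since the semiclassical expressions are exact for quadratic hamiltonians. A minimal sketch:

```python
import numpy as np

hbar, omega = 1.0, 1.0

for theta in (0.2, 1.0, 5.0):
    # Gaussian phase-space integral of eq. (30):
    # Z = (2*pi*hbar)^{-1} * sech(omega*theta/2) * pi*hbar / tanh(omega*theta/2).
    Z_sc = (1.0 / np.cosh(omega * theta / 2)) / (2.0 * np.tanh(omega * theta / 2))
    # Exact sum over the oscillator spectrum hbar*omega*(n + 1/2).
    n = np.arange(2000)
    Z_q = np.sum(np.exp(-theta * omega * (n + 0.5)))
    print(theta, Z_sc, Z_q)
```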
It is interesting to note that, by using the short time approximation (20) and setting \(\mathbf{M}_{t}\approx\mathbf{I}_{2d}\), we get
\[e^{-\beta\hat{H}}\left(\mathbf{x}\right)_{SC}\approx\exp\left[-\beta H\left(\mathbf{ x}\right)\right]\approx\exp\left[-\beta H_{c}\left(\mathbf{x}\right)\right], \tag{31}\]
that is, for high temperatures, we recover the _classical_ canonical ensemble.
In this framework, the thermodynamic expectation values
\[\left\langle\hat{O}\right\rangle=\mbox{Tr}\ \left(\hat{\rho}_{\beta}\hat{O} \right)=\frac{\mbox{Tr}\ \left(e^{-\beta\hat{H}}\hat{O}\right)}{\mbox{Tr}\ e^{-\beta\hat{H}}}, \tag{32}\]
are completely determined if one is able to calculate expressions of the form \(\mbox{Tr}\ \left(U_{t}\hat{O}\right)\) for imaginary \(t\), which has an approximation
\[\mbox{Tr}\ \left(\hat{U}_{t}\hat{O}\right)_{SC}=\frac{1}{(\pi\hbar)^{d}}\int d\mathbf{x}\frac{e^{iS_{t}(\mathbf{x})/\hbar}O(\mathbf{x})}{\left|\det\left[\mathbf{I}+\mathbf{M}_{t}\right]\right|^{1/2}}. \tag{33}\]
One of the first problems that appears when dealing with semiclassical approximations, and can already be seen in (33), is the fact that the relevant trajectories are specified by boundary conditions -- in the case of the Wigner representation, we specify the centre \(\mathbf{x}\) defined by the endpoints of the trajectory -- that can be satisfied by more than one orbit, and are much more difficult to solve than an initial value problem. This is the so called _root search problem_.
There are a few methods that can be used to circumvent this question, including the Initial and Final Value Representations [29]. Here, we briefly discuss a method that is specially adapted for the calculation of (33), which we call the _midpoint representation_, and consists of a mere change of variables, which we explain in what follows. We start with a point \(\mathbf{X}\) in phase space -- the midpoint -- from which we propagate a trajectory \(\mathbf{x}_{+}(t)\), that evolves forward in time, and a trajectory \(\mathbf{x}_{-}(t)\), which evolves backwards, as illustrated in figure 1.
In other words, \(\mathbf{x}_{\pm}(t)\) satisfy the pair of initial value problems
\[\dot{\mathbf{x}}_{\pm}(t)=\pm\mathbf{J}\frac{\partial H}{\partial\mathbf{x}};\quad\mathbf{x}_{\pm}(0)=\mathbf{X}. \tag{34}\]
From this trajectory, we construct a centre
\[\mathbf{x}(t)=\frac{\mathbf{x}_{+}(t/2)+\mathbf{x}_ {-}(t/2)}{2} \tag{35}\]
and a chord
\[\boldsymbol{\xi}(t)=\boldsymbol{x}_{+}(t/2)-\boldsymbol{x}_{-}(t/2). \tag{36}\]
Then, the transformation \(\boldsymbol{x}\mapsto\boldsymbol{X}\) has a jacobian determinant
\[\begin{split}\det\frac{\partial\boldsymbol{x}}{\partial \boldsymbol{X}}&=\frac{1}{2^{2d}}\det\left(\boldsymbol{M}_{t/2}+ \boldsymbol{M}_{-t/2}\right)\\ &=\frac{1}{2^{2d}}\det\left(\boldsymbol{I}_{2d}+\boldsymbol{M}_ {t}\right),\end{split} \tag{37}\]
where we have used the fact that symplectic matrices, such as \(\boldsymbol{M}\), have unit determinant, and that the composition law \(\boldsymbol{M}_{t_{2}}\boldsymbol{M}_{t_{1}}=\boldsymbol{M}_{t_{1}+t_{2}}\) holds. Performing this change of variables in (33), we arrive at
\[\text{Tr}\;\left(\hat{U}_{t}\hat{O}\right)_{SC}=\frac{1}{(2\pi\hbar)^{d}}\int d \boldsymbol{X}\left|\frac{\partial\boldsymbol{x}}{\partial\boldsymbol{X}} \right|^{1/2}e^{iS_{t}(\boldsymbol{x})/\hbar}O(\boldsymbol{x}). \tag{38}\]
Furthermore, in this framework, the area \(\Delta\) can be explicitly calculated as [23]
\[\Delta\left[\boldsymbol{x}\left(\boldsymbol{X}\right),t\right]=\int_{0}^{t/2} \boldsymbol{\xi}\left(\boldsymbol{X},t^{\prime}\right)\wedge\dot{\boldsymbol {x}}\left(\boldsymbol{X},t^{\prime}\right)dt^{\prime}, \tag{39}\]
Figure 1: Midpoint representation.
from which we obtain the action
\[S_{t}\left[\boldsymbol{x}\left(\boldsymbol{X}\right)\right]=\int_{0}^{t/2} \boldsymbol{\xi}\left(\boldsymbol{X},t^{\prime}\right)\wedge\dot{\boldsymbol{x }}\left(\boldsymbol{X},t^{\prime}\right)dt^{\prime}-tH\left(\boldsymbol{X} \right). \tag{40}\]
Beyond the fact that, now, all quantities can be determined by the initial value problem (34), we see that another advantage of this representation is the property that, at caustics, the integrand in (38) is now zero, instead of infinite, as was the case in expression (33).
## 4 Normal Forms
Here, we discuss a class of systems for which we are able to obtain explicit analytical results for the trajectories (34), which allow a direct evaluation of (38) at imaginary times.
A one dimensional _classical_ hamiltonian written as
\[\begin{split} H(\boldsymbol{x})&=\omega\left( \frac{p^{2}+q^{2}}{2}\right)+H_{2}\left(\frac{p^{2}+q^{2}}{2}\right)^{2}\\ &+H_{3}\left(\frac{p^{2}+q^{2}}{2}\right)^{3}+\cdots=F\left( \frac{\boldsymbol{x}^{2}}{2}\right)\end{split} \tag{41}\]
is said to be in Birkhoff normal form [19, 20]. Its orbits are circles in phase space, and, therefore, by a simple geometric argument, one may find explicit expressions for all the ingredients of the semiclassical approximation, as done in [23]. Defining the action variable
\[J=\frac{\boldsymbol{X}^{2}}{2} \tag{42}\]
and
\[\omega(J)=F^{\prime}(J), \tag{43}\]
we find the euclidean action
\[S_{\theta}^{E}\left[\boldsymbol{x}\left(\boldsymbol{X}\right)\right]=\left[\omega \theta-\sinh\left(\omega\theta\right)\right]J-\theta F\left(J\right), \tag{44}\]
the centre
\[\boldsymbol{x}\left(\boldsymbol{X},-i\theta/2\right)=\cosh\left(\frac{\omega \theta}{2}\right)\boldsymbol{X}, \tag{45}\]
and the jacobian determinant
\[\begin{split}&\det\frac{\partial\boldsymbol{x}}{\partial \boldsymbol{X}}(\boldsymbol{X},-i\theta/2)\\ &=\cosh^{2}\left(\frac{\omega\theta}{2}\right)\left[1+J\omega^{ \prime}\theta\tanh\left(\frac{\omega\theta}{2}\right)\right].\end{split} \tag{46}\]
_Quantum_ hamiltonians that can be written as
\[\hat{H}=G\left(\frac{\hat{p}^{2}+\hat{q}^{2}}{2}\right) \tag{47}\]
have a Wigner symbol that is a Birkhoff's normal form, but, except in the case of the harmonic oscillator, the function \(F\) in (41) _does not_ coincide with \(G\). Formulas for the calculation of the symbols are presented in A.
Some of the quantum properties of this class of systems are also readily obtained -- they share their eigenstates with the harmonic oscillator, and for a quantum normal form described by a hamiltonian \(G\left[\left(\hat{p}^{2}+\hat{q}^{2}\right)/2\right]\), the eigenenergies are simply \(G\left[\hbar\left(n+1/2\right)\right]\). This simplicity makes the comparison of our semiclassical approximation with the quantum result straightforward.
It is instructive to analyze the behaviour of our approximation in the low temperature limit. By squaring (45), we obtain
\[\frac{\boldsymbol{x}^{2}}{2}=J\cosh^{2}\left[\frac{\theta\omega\left(J\right)} {2}\right]. \tag{48}\]
Interpreting this equation as an implicit definition of \(J\left(\mathbf{x},\theta\right)\), we see that, for a fixed \(\mathbf{x}\), in order for the right side of the equality to remain finite, we must have
\[\lim_{\theta\rightarrow\infty}J\left(\mathbf{x},\theta\right)=0\text{ or }\lim_{\theta \rightarrow\infty}\omega\left[J\left(\mathbf{x},\theta\right)\right]=0. \tag{49}\]
Therefore, for systems that satisfy \(\omega\left(J\right)\neq 0\)\(\forall\)\(J\), we see that, according to (48), \(J\) has the asymptotic behaviour
\[J\approx\frac{\mathbf{x}^{2}}{2}\text{sech}^{2}\left[\frac{\theta\omega\left(0 \right)}{2}\right],\quad\theta\omega\left(0\right)\gg 1 \tag{50}\]
Substituting this expression in (44) and taking the limit \(\theta\rightarrow\infty\), we obtain, except for a constant term in \(\mathbf{x}\),
\[S_{\infty}^{E}\left(\mathbf{x}\right)=-\mathbf{x}^{2} \tag{51}\]
and the semiclassical approximation converges to the quantum result, that is, the semiclassical approximation for the thermal Wigner function tends to the Wigner function of the ground state of the harmonic oscillator, given in (10). Therefore, we see that, at least for normal forms with \(\omega\neq 0\), the semiclassical approximation is well anchored in both the high temperature limit, as it coincides with the classical result, and in the low temperature one, as it correctly predicts the ground state. It then remains to analyze its behaviour for intermediate temperatures.
### Kerr system
The simplest case, beyond the harmonic oscillator, of a system governed by a normal form is probably the Kerr system, whose Hamiltonian is
\[\hat{H}=\hbar\omega_{0}\left[\left(\frac{\hat{p}^{2}+\hat{q}^{2}}{2\hbar} \right)+\chi\left(\frac{\hat{p}^{2}+\hat{q}^{2}}{2\hbar}\right)^{2}\right], \tag{52}\]
where \(\chi>0\) is a dimensionless parameter and \(\omega_{0}>0\) is a frequency. This Hamiltonian models the propagation of light through a medium with cubic electric susceptibility [21]. The time evolution of coherent states under its action is known [30, 31], and the corresponding Wigner function has been experimentally measured [32]. This evolution has also been successfully simulated, in the case with \(\chi\rightarrow\infty\), utilizing semiclassical techniques [33]. A further point of interest is that the Hamiltonian for the Bose-Hubbard chain [22] in many-body physics can be considered as a coupling of Kerr oscillators, which highlights the importance of exploring semiclassical methods for Hamiltonians with non-quadratic momenta.
We note that, in this case, the Wigner symbol of the Hamiltonian only coincides with its classical counterpart within a constant term, as, with the aid of (11), one finds
\[H(p,q)=\hbar\omega_{0}\left[\left(\frac{p^{2}+q^{2}}{2\hbar}\right)+\chi \left(\frac{p^{2}+q^{2}}{2\hbar}\right)^{2}-\frac{\chi}{4}\right]. \tag{53}\]
Identifying
\[F(J)=\hbar\omega_{0}\left[\frac{J}{\hbar}+\chi\left(\frac{J}{\hbar}\right)^{ 2}-\frac{\chi}{4}\right], \tag{54}\]
we see that
\[\omega(J)=F^{\prime}(J)=\omega_{0}\left(1+\chi\frac{J}{\hbar}\right)\geq \omega_{0}>0 \tag{55}\]
and, therefore, we should expect a good result for low temperatures. Furthermore, when \(\chi\ll 1\), we also expect a good result, as, when \(\chi\to 0\), we recover the harmonic oscillator, for which the semiclassical approximation is exact. We also note that, because
\[\omega^{\prime}(J)=\chi\omega_{0}/\hbar>0, \tag{56}\]
one sees that \(\det\partial\mathbf{x}/\partial\mathbf{X}>0\), that is, there are no caustics for imaginary time.
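For concreteness, the following is a minimal sketch of how the explicit normal-form expressions (44)-(48), inserted into the trace formula (38) at \(t=-i\theta\), yield thermal averages of the kind plotted in figure 2. It assumes \(\hbar=\omega_{0}=1\), a single value \(\chi=0.1\), and a finite integration cutoff in the action \(J\) (the angle integral contributes a factor \(2\pi\)); the quantum reference uses the eigenenergies \(G\left[\hbar(n+1/2)\right]\).

```python
import numpy as np
from scipy.integrate import quad

hbar, omega0, chi = 1.0, 1.0, 0.1            # illustrative parameters

def F(J):       # Wigner symbol of the Kerr Hamiltonian, eqs. (53)-(54)
    return hbar * omega0 * (J / hbar + chi * (J / hbar)**2 - chi / 4)

def omega(J):   # omega(J) = F'(J), eq. (55)
    return omega0 * (1 + chi * J / hbar)

def sc_trace(theta, obs):
    # Semiclassical Tr[exp(-beta H) O] from eq. (38) at t = -i*theta, d = 1;
    # in action-angle variables the integrand depends only on J.
    def integrand(J):
        w = omega(J)
        SE = (w * theta - np.sinh(w * theta)) * J - theta * F(J)              # eq. (44)
        det = np.cosh(w * theta / 2)**2 * (
            1 + J * (chi * omega0 / hbar) * theta * np.tanh(w * theta / 2))   # eq. (46)
        Jc = J * np.cosh(w * theta / 2)**2                                    # eq. (48)
        return np.sqrt(det) * np.exp(SE / hbar) * obs(Jc)
    return quad(integrand, 0.0, 60.0 * hbar)[0] / hbar

def sc_energy(theta):
    return sc_trace(theta, F) / sc_trace(theta, lambda J: 1.0)

def quantum_energy(theta, nmax=400):
    n = np.arange(nmax) + 0.5
    E = hbar * omega0 * (n + chi * n**2)
    w = np.exp(-theta * (E - E[0]) / hbar)    # subtract E_0 for numerical stability
    return np.sum(E * w) / np.sum(w)

for theta in (0.5, 2.0, 5.0):
    print(theta, sc_energy(theta), quantum_energy(theta))
```

The specific heat (57) requires, in addition, the Wigner symbol of \(\hat{H}^{2}\), which differs from \(\left[H\left(\mathbf{x}\right)\right]^{2}\) by \(\hbar\)-dependent corrections, so it is not obtained simply by replacing the observable above with the square of \(F\).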
In figure 2, we show the expectation value of the energy \(E\) as a function of thermal time \(\theta\) for different values of the parameter \(\chi\).
In the canonical ensemble, the specific heat \(c\) can be calculated in terms of the variance in the energy:
\[c=k\beta^{2}\left(\left\langle\hat{H}^{2}\right\rangle-\left\langle\hat{H} \right\rangle^{2}\right). \tag{57}\]
In this case, the Wigner symbol of the _square_ of the hamiltonian \(H^{2}\left(\mathbf{x}\right)\) is significantly different from \(\left[H\left(\mathbf{x}\right)\right]^{2}\). With this in mind, we show, in figure 3, the specific heat as a function of thermal time for different values of \(\chi\).
As one sees, the quality of our semiclassical approximation is heavily dependent on the parameter \(\chi\). As \(\chi\) increases, we see a deviation from the quantum results for intermediate values of \(\theta\), although, as foreseen, we have a good agreement at both high and low temperatures.
It may seem somewhat disturbing that the semiclassical approximation does not respect the positivity of the heat capacity: the figure cuts off the negative region. Yet it must be recalled that errors in \(\left\langle\hat{H}^{2}\right\rangle\) and \(\left\langle\hat{H}\right\rangle^{2}\) may be added, whereas the averages are subtracted. Indeed, Fig. 4 shows that the variance in (57) is considerably
Figure 2: Average energy for the Kerr system as a function of thermal time. Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result.
smaller than \(\left\langle\hat{H}^{2}\right\rangle\). In general one may then expect the heat capacity to be a much more stringent test of approximations than the energy average, as will occur further in our examples. One should note that a semiclassical version of the heat capacity as a second derivative of the partition function is not viable, as discussed in [23].
## 5 Double Phase Space
If we wish to apply our approximation for a broader class of systems, we must resort to numerical techniques. In this section, we restate the calculation of the terms in the integrand (38) in a way that is well adapted for a numerical solution, and, in the following sections, we apply this method for a few more systems.
Instead of obtaining the centre \(\mathbf{x}\) in equation (16) by propagating two trajectories, one forward and one backwards in time, it will be easier to calculate a single forward trajectory in a double phase space. In this new space, the centre \(\mathbf{x}\) will play the role of
Figure 3: Specific heat for the Kerr system as a function of thermal time. Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result.
position, while the conjugate momentum is given by \(\mathbf{y}=\mathbf{J}\mathbf{\xi}.\) The double hamiltonian [34, 35, 36]
\[\begin{split}\mathbb{H}(\mathbf{x},\mathbf{y})&=H\left(\mathbf{x} -\frac{1}{2}\mathbf{J}\mathbf{y}\right)+H\left(\mathbf{x}+\frac{1}{2}\mathbf{J}\mathbf{y}\right)\\ &=H(\mathbf{x}_{+})+H(\mathbf{x}_{-}).\end{split} \tag{58}\]
will then give the correct equations of motion, as one may check:
\[\begin{split}\frac{\partial\mathbb{H}}{\partial\mathbf{x}}& =\nabla H(\mathbf{x}_{+})+\nabla H(\mathbf{x}_{-})\\ &=-\mathbf{J}\left(\dot{\mathbf{x}}_{+}-\dot{\mathbf{x}}_{-}\right)=-\mathbf{J} \dot{\mathbf{\xi}}=-\dot{\mathbf{y}};\end{split} \tag{59a}\] \[\begin{split}\frac{\partial\mathbb{H}}{\partial\mathbf{y}}& =\frac{\mathbf{J}}{2}\left[\nabla H(\mathbf{x}_{+})-\nabla H(\mathbf{x}_{-})\right]\\ &=\frac{1}{2}\left(\dot{\mathbf{x}}_{+}+\dot{\mathbf{x}}_{-}\right)=\dot {\mathbf{x}}.\end{split} \tag{59b}\]
For a given midpoint \(\mathbf{X}\), these equations must then be solved under the initial conditions
\[\mathbf{x}\left(\mathbf{X},0\right)=\mathbf{X},\quad\mathbf{y}\left(\mathbf{X},0\right)=\mathbf{0}. \tag{60}\]
In order to be able to define the trajectories for complex times, we simply promote the derivatives with respect to \(t\) in (59) to derivatives with respect to a complex number \(z\):
\[\frac{d\mathbf{y}}{dz} =-\frac{\partial\mathbb{H}}{\partial\mathbf{x}} \tag{61}\] \[\frac{d\mathbf{x}}{dz} =\frac{\partial\mathbb{H}}{\partial\mathbf{y}}\]
If the functions \(\mathbf{x}\) and \(\mathbf{y}\) defined by these equations turn out to be analytic, then the line integral
\[\Delta(z)=\int_{\gamma}\mathbf{y}\cdot\frac{d\mathbf{x}}{dz^{\prime}}\,dz^{\prime}, \tag{62}\]
which would be the extension of the area \(\Delta\), given in (39), for the complex plane, is only dependent on the endpoints of the path \(\gamma\), which are \(0\) and \(z\).
Assuming this is the case, we choose \(\gamma\) to be the easiest path that joins \(0\) and \(z\) -- the line segment -- and parameterise it by the arc-length \(s\). The explicit expression for the parametrization \(\gamma(s)\) is then
\[\gamma\colon[0,|z|] \to\mathbb{C}\] \[s \mapsto sw,\]
where \(w=z/|z|\). This path is illustrated in figure 5.
Now, we compose \(\mathbf{x}\) and \(\mathbf{y}\) with \(\gamma\), that is, we define the functions \(\tilde{\mathbf{x}}:[0,|z|]\to\mathbb{C},\ \tilde{\mathbf{x}}(s)=\mathbf{x}\circ\gamma(s)=\mathbf{x}(sw)\) and \(\tilde{\mathbf{y}}:[0,|z|]\to\mathbb{C},\ \tilde{\mathbf{y}}(s)=\mathbf{y}\circ\gamma(s)=\mathbf{y}(sw)\). By the
chain rule, we deduce that these functions satisfy
\[\begin{split}\frac{d\tilde{\mathbf{y}}}{ds}&=-w\frac{ \partial\mathbb{H}}{\partial\mathbf{x}},\\ \frac{d\tilde{\mathbf{x}}}{ds}&=w\frac{\partial\mathbb{H }}{\partial\mathbf{y}}\end{split} \tag{63}\]
which are _almost_ Hamilton's equations with real time \(s\). If we define \(\tilde{\mathbf{y}}_{w}=w^{*}\tilde{\mathbf{y}}\), where \({}^{*}\) denotes complex conjugation, and the modified hamiltonian
\[\begin{split}\mathbb{H}_{w}\left(\mathbf{x},\mathbf{y}\right)& =\mathbb{H}\left(\mathbf{x},w\mathbf{y}\right)\\ &=H\left(\mathbf{x}-\frac{w}{2}\mathbf{J}\mathbf{y}\right)+H\left(\mathbf{x}+ \frac{w}{2}\mathbf{J}\mathbf{y}\right)\end{split} \tag{64}\]
we, in fact, recover _proper_ Hamilton's equations
\[\begin{split}\frac{d\tilde{\mathbf{y}}_{w}}{ds}&=-\frac {\partial\mathbb{H}_{w}}{\partial\mathbf{x}}\\ \frac{d\tilde{\mathbf{x}}}{ds}&=\frac{\partial\mathbb{H }_{w}}{\partial\mathbf{y}}\end{split} \tag{65}\]
Figure 5: Line segment joining the origin \(0\) to \(z\).
although with a hamiltonian that is generally complex, but that reduces to a real function if \(w\in\{1,-1,i,-i\}\), which is the case for our interests (\(w=-i\)).
In terms of these quantities, the area is given by
\[\begin{split}\Delta(z)&=w\int_{0}^{|z|/2}ds\ \tilde{\mathbf{y}}_{w}(s)\cdot\frac{d\tilde{\mathbf{x}}(s)}{ds}\\ &=w\int_{0}^{|z|/2}ds\ \tilde{\mathbf{y}}_{w}(s)\cdot\left.\frac{ \partial\mathbb{H}_{w}}{\partial\mathbf{y}}\right|_{\tilde{\mathbf{x}}(s),\tilde{\mathbf{ y}}_{w}(s)}\end{split}, \tag{66}\]
which, alternatively, may be cast in the form of an initial value problem
\[\begin{split}\frac{d\Delta}{ds}&=w\tilde{\mathbf{y}}_{w} (s)\cdot\left.\frac{\partial\mathbb{H}_{w}}{\partial\mathbf{y}}\right|_{\tilde{\bm {x}}(s),\tilde{\mathbf{y}}_{w}(s)}\\ \Delta(0)&=0\end{split} \tag{67}\]
The last missing ingredient is a way to calculate the jacobian \(\partial\mathbf{x}/\partial\mathbf{X}\). It can be obtained by differentiating (65) with respect to \(\mathbf{X}\), which gives us the equations
\[\frac{d}{ds}\frac{\partial\tilde{\mathbf{y}}_{w}}{\partial\mathbf{X}}=-\frac{\partial ^{2}\mathbb{H}_{w}}{\partial\mathbf{x}\partial\mathbf{y}}\frac{\partial\tilde{\mathbf{y}} _{w}}{\partial\mathbf{X}}-\frac{\partial^{2}\mathbb{H}_{w}}{\partial\mathbf{x}^{2}} \frac{\partial\tilde{\mathbf{x}}}{\partial\mathbf{X}}; \tag{68a}\] \[\frac{d}{ds}\frac{\partial\tilde{\mathbf{x}}}{\partial\mathbf{X}}=\frac{\partial^{2} \mathbb{H}_{w}}{\partial\mathbf{y}^{2}}\frac{\partial\tilde{\mathbf{y}}_{w}}{\partial \mathbf{X}}+\frac{\partial^{2}\mathbb{H}_{w}}{\partial\mathbf{x}\partial\mathbf{y}}\frac{ \partial\tilde{\mathbf{x}}}{\partial\mathbf{X}}, \tag{68b}\]
solved under the initial conditions
\[\left.\frac{\partial\tilde{\mathbf{y}}_{w}}{\partial\mathbf{X}}\right|_{s=0}=\mathbf{0};\quad\left.\frac{\partial\tilde{\mathbf{x}}}{\partial\mathbf{X}}\right|_{s=0}=\mathbf{I}. \tag{69}\]
In order to calculate (38) with \(t=-i\theta\), one must then solve the initial value problems specified by (65), (67) and (68) with \(w=-i\), and obtain the solution at \(s=\theta/2\). If we have \(d\) degrees of freedom, this will be a system of coupled differential
equations with \(8d^{2}+4d+1\)_real_ variables. For the remainder of this work, we will test the procedure described here for the cases \(d=1,2\).
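As an illustration of the bookkeeping involved, a minimal Python sketch for \(d=1\) is given below. It integrates (65) and (67) up to \(s=\theta/2\) with a fixed-step Runge-Kutta scheme, taking the gradients of \(\mathbb{H}_{w}\) by central finite differences; the stand-in hamiltonian, the ordering \(\mathbf{x}=(p,q)\) with the usual symplectic matrix \(\mathbf{J}\), the zero initial chord and all names are our own choices, the propagation of the jacobian (68) is omitted, and this is not the code of [49].

```python
import numpy as np

def H(xi):
    # Stand-in one-degree-of-freedom hamiltonian, real-analytic so that it
    # extends to complex arguments; e.g. the Morse hamiltonian (74) of the
    # next section could be substituted here.
    p, q = xi[0], xi[1]
    return 0.5 * p**2 + 0.5 * q**2 + 0.1 * q**4

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # assumed convention for the symplectic matrix

def H_w(x, y, w):
    # Double hamiltonian of Eq. (64): H(x - (w/2) J y) + H(x + (w/2) J y).
    shift = 0.5 * w * (J @ y).astype(complex)
    return H(x - shift) + H(x + shift)

def grad_Hw(x, y, w, h=1e-6):
    # Central finite differences for dH_w/dx and dH_w/dy.
    dHdx = np.zeros(2, complex)
    dHdy = np.zeros(2, complex)
    for k in range(2):
        e = np.zeros(2)
        e[k] = h
        dHdx[k] = (H_w(x + e, y, w) - H_w(x - e, y, w)) / (2 * h)
        dHdy[k] = (H_w(x, y + e, w) - H_w(x, y - e, w)) / (2 * h)
    return dHdx, dHdy

def rhs(state, w):
    # state = (x1, x2, yw1, yw2, Delta); Eqs. (65) and (67).
    x, yw = state[:2].real, state[2:4].real
    dHdx, dHdy = grad_Hw(x, yw, w)
    return np.concatenate([dHdy, -dHdx, [w * np.dot(yw, dHdy)]])

def evolve(X, theta, w=-1j, nsteps=400):
    # Start at the midpoint X with zero chord and Delta(0) = 0 (cf. (67), (69)),
    # then integrate up to s = theta/2 with a fixed-step RK4.
    state = np.array([X[0], X[1], 0.0, 0.0, 0.0], complex)
    ds = 0.5 * theta / nsteps
    for _ in range(nsteps):
        k1 = rhs(state, w)
        k2 = rhs(state + 0.5 * ds * k1, w)
        k3 = rhs(state + 0.5 * ds * k2, w)
        k4 = rhs(state + ds * k3, w)
        state = state + ds * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return state[:2], state[4]   # endpoint x(theta/2) and accumulated area Delta

x_end, Delta = evolve(np.array([0.0, 0.5]), theta=1.0)
print(x_end, Delta)
```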
It should be noted that the definition of a real double Hamiltonian in place of a complex Hamiltonian is not unique. Our construction, especially suited to the Wigner-Weyl representation, coincides neither with the double Hamiltonians designed for propagating coherent states by de Aguiar et al. [37], nor with the double Hamiltonian for the Boltzmann operator of Yan and Shao [10]. Indeed, the double trajectories in this reference decouple into pairs of simple trajectories, due to the assumption of a quadratic momentum dependence.
For each choice of decomplexification one obtains different trajectories in their own phase space. Unlike the simple equivalence of the various variants related directly by Fourier transforms, the SC equivalence for decomplexified Hamiltonians is, so far, a question which relies on numerical investigation. A mixed position-momentum representation is employed in [10] to derive their semiclassical approximation of the density matrix, but it requires a further Fourier transform (beyond the Wigner transform) to relate their phase space formulae to the thermal Wigner function. It is interesting that a composition of Herman-Kluk propagators evolving for positive and negative half-times is also proposed, but there it is only optional, in contrast to its essential role within the present more general theory.
## 6 Morse System
The Morse potential [38] is given by the expression
\[V(r)=D\left[1-e^{-a(r-r_{e})}\right]^{2}, \tag{70}\]
where \(r\) is a radial coordinate, \(D\) is the dissociation energy, \(r_{e}\) is the equilibrium distance and \(a\) is a constant with dimensions of inverse distance.
It is a model for the vibration of diatomic molecules that takes into account the possibility of dissociation of the bond.
As for every one dimensional system without an explicit dependence on time, the trajectories of a particle under the action of the Morse potential can be obtained by quadrature. We will explore a few insights that can be given by these expressions, but they are too cumbersome to actually use in the calculation of thermodynamic averages. For that, we will resort to the method described in the previous section.
In order to more easily describe these trajectories, it will be convenient to introduce the dimensionless coordinate \(q=a(r-r_{e})\) and the frequency
\[\omega=\sqrt{\frac{2Da^{2}}{m}}. \tag{71}\]
In this way, the lagrangian for this system is
\[L=\frac{m\dot{r}^{2}}{2}-V(r)=D\left[\left(\frac{\dot{q}}{\omega}\right)^{2}- \left(1-e^{-q}\right)^{2}\right], \tag{72}\]
Figure 6: Morse potential.
from which we obtain a conjugate momentum
\[p=\frac{\partial L}{\partial\dot{q}}=\frac{2D\dot{q}}{\omega^{2}} \tag{73}\]
and a hamiltonian
\[H=p\dot{q}-L=\frac{\omega^{2}p^{2}}{4D}+D\left(1-e^{-q}\right)^{2}. \tag{74}\]
The solutions of Hamilton's equations are then [39, 40]
\[\begin{split} q(t)&=\ln\left[\frac{1-\sqrt{\epsilon }\cos\left(\Omega t+\phi\right)}{1-\epsilon}\right];\\ p(t)&=\frac{2D\sqrt{\epsilon(1-\epsilon)}}{\omega }\frac{\sin\left(\Omega t+\phi\right)}{1-\sqrt{\epsilon}\cos\left(\Omega t+ \phi\right)}.\end{split} \tag{75}\]
Here
\[\epsilon=\frac{E}{D}=\left(\frac{\omega p}{2D}\right)^{2}+\left(1-e^{-q} \right)^{2}<1 \tag{76}\]
is the orbit's normalized energy, \(\Omega=\sqrt{1-\epsilon}\ \omega\) is the orbit's frequency and \(\phi\) is a phase determined by the initial conditions.
We see that, for \(t\in\mathbb{R}\), we have \(\sqrt{\epsilon}\cos\left(\Omega t+\phi\right)<1\), and the orbits are well behaved. Nonetheless, if we allow \(t\in\mathbb{C}\), we do not have this guarantee. Indeed, by taking \(t=-is,\ s\in\mathbb{R}\), and restricting ourselves to initial conditions of the form \(p_{0}=0\) and \(q_{0}<0\), which imply that \(\phi=0\), we obtain
\[\begin{split} q(t)&=\ln\left[\frac{1-\sqrt{\epsilon }\cosh\left(\Omega s\right)}{1-\epsilon}\right];\\ p(t)&=\frac{2D\sqrt{\epsilon(1-\epsilon)}}{\omega }\frac{i\sinh\left(\Omega s\right)}{1-\sqrt{\epsilon}\cosh\left(\Omega s \right)},\end{split} \tag{77}\]
and we see that the trajectories diverge at a finite critical time \(s_{c}\) that satisfies \(1-\sqrt{\epsilon}\cosh\left(\Omega s_{c}\right)=0\), or, choosing the positive solution, one arrives, explicitly, at
\[\omega s_{c}=\frac{1}{\sqrt{1-\epsilon}}\ln\left(\frac{1}{\sqrt{\epsilon}}+ \sqrt{\frac{1}{\epsilon}-1}\right). \tag{78}\]
According to figure 7, one sees that \(s_{c}(\epsilon)\) is a decreasing function that satisfies
\[\lim_{\epsilon\to 0}\omega s_{c}=\infty;\quad\lim_{\epsilon\to 1}\omega s_{c}=1. \tag{79}\]
In this way, we obtain divergent trajectories when \(\omega s>1\), which start at the region with \(\epsilon\to 1\) or, equivalently, with \(q_{0}\rightarrow-\ln 2\), and advance towards \(\epsilon=q_{0}=0\).
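The behaviour of the critical time is easy to check numerically; the short snippet below (function name ours) evaluates (78) and exhibits both limits of (79).

```python
import numpy as np

def critical_time(eps, omega=1.0):
    # Eq. (78): omega*s_c = ln(1/sqrt(eps) + sqrt(1/eps - 1)) / sqrt(1 - eps)
    return np.log(1.0 / np.sqrt(eps) + np.sqrt(1.0 / eps - 1.0)) / (np.sqrt(1.0 - eps) * omega)

# Eq. (79): omega*s_c diverges as eps -> 0 and tends to 1 as eps -> 1.
for eps in (1e-6, 0.1, 0.5, 0.9, 1 - 1e-6):
    print(f"eps = {eps:.2e}   omega*s_c = {critical_time(eps):.4f}")
```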
These divergent trajectories are, at first sight, a disaster for our theory. Note that the integrand of (62) has a pole in this case, and therefore the integral is path dependent, which translates to multiple branches for the area \(\Delta\). It is then not clear which branch to choose. Beyond that, one may see, numerically, that these divergent trajectories are accompanied by the appearance of caustics. Furthermore, if one directly applies the method described in the previous section, it is possible to obtain
Figure 7: Critical time as a function of energy.
good results until a thermal time \(\theta=2\) (remember that the trajectories are evolved until \(s\) reaches \(\theta/2\)), when, suddenly, the approximation fails dramatically. The culprit appears to be the fact that, after the caustic, the euclidean action rapidly grows from very negative to very positive values, which translates into a large growth of the integrand in (38). The fact is that, while there is a good understanding of caustic traversals for real time [26], the same cannot be said for imaginary times. Note that even the change of coordinates \(\mathbf{x}\mapsto\mathbf{X}\) may fail, as the jacobian ceases to be invertible.
Nonetheless, the simple trick of _discarding_ the trajectories that cross caustics, imposing that they should not contribute to the integral (38), seems to completely eliminate our problem. This question certainly deserves further investigation, but, for now, we will stick to this _ad hoc_ trick, as it appears to work very well in practice.
The comparison with the quantum result is also straightforward for the Morse system, as its quantum version is well understood -- there are a finite number of bound eigenstates, whose eigenfunctions and eigenenergies are known [41]. By introducing the dimensionless parameter
\[\chi=\frac{\hbar\omega}{4D}, \tag{80}\]
we may write the eigenenergies corresponding to the bound states as
\[E_{n}=\hbar\omega\left[\left(n+\frac{1}{2}\right)-\chi\left(n+\frac{1}{2} \right)^{2}\right],\quad n=0,1,\ldots,N \tag{81}\]
where
\[N=\left\lfloor\frac{1}{2\chi}-\frac{1}{2}\right\rfloor, \tag{82}\]
and \(\lfloor x\rfloor\) denotes the largest integer less than or equal to \(x\).
In order for us to have an idea of the order of magnitude in physically relevant cases, we exhibit, on table 1, the values of \(\chi\) and \(N\) that best fit experimental results for molecules of hydrogen, oxygen and nitrogen.
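For reference, the bound spectrum (81)-(82) and the corresponding canonical average are straightforward to evaluate; the sketch below works in units \(\hbar\omega=1\), keeps only the bound states, and reproduces the values of \(N\) in Table 1. It should be read as our reconstruction of the quantum reference curve, not as the code actually used for the figures.

```python
import numpy as np

def morse_levels(chi):
    # Eqs. (81)-(82) in units hbar*omega = 1: E_n = (n + 1/2) - chi*(n + 1/2)^2,
    # with n = 0, ..., N and N = floor(1/(2*chi) - 1/2).
    N = int(np.floor(1.0 / (2.0 * chi) - 0.5))
    n = np.arange(N + 1)
    return (n + 0.5) - chi * (n + 0.5) ** 2

def quantum_energy(theta, chi):
    # Canonical average over the bound spectrum, <E> = sum E_n exp(-theta E_n) / Z.
    E = morse_levels(chi)
    w = np.exp(-theta * (E - E.min()))      # shifted for numerical stability
    return np.sum(E * w) / np.sum(w)

# Number of bound states for the molecules of Table 1.
for name, chi in [("H2", 2.76e-2), ("O2", 7.58e-3), ("N2", 6.07e-3)]:
    print(name, len(morse_levels(chi)) - 1)    # prints 17, 65, 81

print(quantum_energy(theta=3.0, chi=0.12))     # chi = 0.12: only four bound states
```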
This system has the peculiarity of presenting both bound and free states. In this scenario, one is often only concerned with the regime in which the temperature is below the excitation threshold of the free states, a regime in which they therefore do not contribute to the thermodynamics of the system. In this case, the classical prescription is to perform the integrations defining the thermodynamic quantities only over the region of phase space corresponding to bound states [43]. We will stick with this classical prescription in the semiclassical case, therefore restricting the integration region in (38).
\begin{table}
\begin{tabular}{c c c} \hline \hline Molecule & \(\chi\) & \(N\) \\ \hline \(H_{2}\) & \(2.76\times 10^{-2}\) & 17 \\ \(O_{2}\) & \(7.58\times 10^{-3}\) & 65 \\ \(N_{2}\) & \(6.07\times 10^{-3}\) & 81 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Values of \(\chi\) and \(N\) for some molecules. Calculated from [42].
Figure 8: Average energy for the Morse system as a function of thermal time. Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result.
We then repeat the analysis done for the Kerr system, by calculating the energy and the heat capacity given by our approximations, and comparing them with the classical and the quantum cases. The results can be seen in figures 8 and 9.
The values given by our approximation are quite remarkable, especially in the case of the energy, where one sees almost no deviation from the quantum result. We note that, for \(\chi=0.12\), we have only four bound states. The values of the heat capacity are less accurate, as also happened in the Kerr system, but are still far superior to the classical case. We also note that the approximation seems to fare better the lower the value of \(\chi\). This may be related to the fact that, when \(\chi\to 0\), we recover the spectrum of the harmonic oscillator, as can be seen from (81).
Because of the change of variables \(\mathbf{x}\mapsto\mathbf{X}\), which results in formula (38), we only have access to a displacement of the thermal Wigner function. Nonetheless, remembering that, according to (8), the Wigner function \(W(\mathbf{x}^{\prime})\) is proportional to \(\left\langle\hat{R}_{\mathbf{x}^{\prime}}\right\rangle\)
Figure 9: Specific heat for the Morse system as a function of thermal time. Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result.
and using the fact that \(\hat{R}_{\mathbf{x}^{\prime}}(\mathbf{x})=\delta\left(\mathbf{x}-\mathbf{x}^{\prime}\right)\), we may write
\[\begin{split} W(\mathbf{x}^{\prime})&\propto\int d\mathbf{X} \left|\frac{\partial\mathbf{x}}{\partial\mathbf{X}}\right|^{1/2}\exp\left[\Delta(\mathbf{X} )/\hbar-\beta H(\mathbf{X})\right]\delta\left(\mathbf{x}-\mathbf{x}^{\prime}\right)\\ &=\left.\left|\frac{\partial\mathbf{x}}{\partial\mathbf{X}}\right|^{1/2} \exp\left[\Delta(\mathbf{X})/\hbar-\beta H(\mathbf{X})\right]\right|_{\mathbf{X}=\mathbf{X}^{ \prime}}\end{split} \tag{83}\]
where \(\mathbf{X}^{\prime}\) is the midpoint which gets mapped to \(\mathbf{x}^{\prime}\) under equations (65). One may also characterize \(\mathbf{X}^{\prime}\) as the zero of the function
\[f(\mathbf{X})=\mathbf{x}\left(\mathbf{X}\right)-\mathbf{x}^{\prime}. \tag{84}\]
Assuming that no caustics have been traversed, this zero is unique, and may be found by a standard Newton-Raphson method, allowing us to calculate \(W(\mathbf{x}^{\prime})_{SC}\) as well as the marginal distributions
\[W(p) =\int dqW(p,q) \tag{85a}\] \[W(q) =\int dpW(p,q) \tag{85b}\]
which are the expectation values of the operators \(\ket{p}\bra{p}\) and \(\ket{q}\bra{q}\), respectively. These results for the Morse system, with a thermal time \(\omega\theta=3\) and \(\chi=0.01\), are shown in figure 10. We also show the quantum projections, as well as the classical one, which is obtained from the classical Boltzmann distribution.
It is possible to see that our semiclassical approximation for \(W(p)\) is essentially exact, while \(W(q)\) appears to be a displaced version of its quantum counterpart.
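A minimal sketch of the midpoint inversion just described is given below: once the map \(\mathbf{X}\mapsto\mathbf{x}(\mathbf{X})\) produced by integrating (65) is available as a callable, the zero of (84) can be located with a standard root finder; the `toy_map` used here is only a stand-in for that trajectory map.

```python
import numpy as np
from scipy.optimize import root

def find_midpoint(x_of_X, x_prime, X0):
    # Solve f(X) = x(X) - x' = 0, Eq. (84), with a Newton/Broyden-type iteration.
    sol = root(lambda X: x_of_X(X) - x_prime, X0, method="hybr")
    return sol.x if sol.success else None

# Illustrative stand-in for the map X -> x(theta/2); in practice this callable
# would run the trajectory integrator of the previous sections.
toy_map = lambda X: X + 0.1 * np.sin(X)
print(find_midpoint(toy_map, x_prime=np.array([0.3, -0.2]), X0=np.zeros(2)))
```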
## 7 Nelson System
The Nelson system [44] is described by a hamiltonian
\[H(\mathbf{x})=\frac{1}{2}\left(p_{x}^{2}+p_{y}^{2}\right)+V(x,y) \tag{86}\]
which represents a particle of unit mass in two dimensions under the action of the Nelson potential
\[V(x,y)=(x^{2}/2-y)^{2}+\mu x^{2} \tag{87}\]
where \(\mu\) is a parameter. The potential is illustrated in figure 11.
The classical dynamics in real time exhibits a generic mixture of stable regions within a chaotic sea as exemplified by its Poincare section, visualized in Figure 12. Bifurcation trees of its periodic orbits have been intensively studied in [45, 46, 47]. The important point to be borne in mind is that the range between regular (integrable) and fully chaotic classical motion pertains to the infinite time limit. For finite time, the solutions of the Hamilton-Jacobi equation, which are the backbone of semiclassical approximations of finite time quantum evolution, make no qualitative distinction between these alternatives. In any case, it is reassuring that our full double hamiltonian formalism is successful even for the thorniest types of generic mixed systems.
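A section like the one in figure 12 can be generated with a few lines of Python; the energy \(E=4.8\) and \(\mu=2\) match the figure, while the initial condition, tolerances and integration time below are arbitrary choices of ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 2.0

def rhs(t, z):
    # Hamilton's equations for H = (px^2 + py^2)/2 + (x^2/2 - y)^2 + mu x^2.
    x, y, px, py = z
    dVdx = 2.0 * x * (0.5 * x**2 - y) + 2.0 * MU * x
    dVdy = -2.0 * (0.5 * x**2 - y)
    return [px, py, -dVdx, -dVdy]

def crossing(t, z):            # section y = 0, crossed with py > 0
    return z[1]
crossing.direction = 1.0

def section(x0, px0, E=4.8, tmax=500.0):
    V0 = (0.5 * x0**2) ** 2 + MU * x0**2
    py0 = np.sqrt(2.0 * (E - V0) - px0**2)          # fix the energy shell
    sol = solve_ivp(rhs, (0.0, tmax), [x0, 0.0, px0, py0],
                    events=crossing, max_step=0.05, rtol=1e-9, atol=1e-9)
    pts = sol.y_events[0]
    return pts[:, 0], pts[:, 2]                     # (x, p_x) at the crossings

x_sec, px_sec = section(x0=0.1, px0=0.0)
print(len(x_sec), x_sec[:3], px_sec[:3])
```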
Figure 10: The Semiclassical Wigner function is shown in the heat map. Different versions of the projections \(W(p)\) and \(W(q)\) are shown: Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result, obtained using the classical Boltzmann distribution.
Here, we simply use the Nelson system as an example of an application of our methods for a nontrivial system in two degrees of freedom. We then consider an ensemble of particles of unit mass under the action of this potential and calculate the thermodynamic averages associated with this system.
In this case, there are no analytical formulas available, and we must resort to numerical techniques in order to calculate the quantum energy spectrum. Because of this limitation, we only have access to a finite number of levels, and only the lower ones are reliable. This, in turn, will only give a reliable approximation for the thermodynamic quantities in the low temperature limit. On the other hand, the classical framework, as it is known, will give good results in the high temperature limit. Our hope is that our semiclassical approximation can join these two extremes in a satisfactory manner.
The semiclassical approximation for the energy, shown in figure 13, seems to bridge very well the high temperature limit, given by the classical result, and the low temperature one, given by the quantum result. Unfortunately, the approximation for the
heat capacity seems to fail for much smaller values of \(\theta\), especially for \(\mu=0.5\), as can be seen in figure 14.
## 8 Discussion
Having reviewed and extended the semiclassical approximation for the thermal Wigner function, we developed it into a numerical method that can be applied to a broad class of systems, including Hamiltonians that are not quadratic in their momenta. Even though further investigation will be required, we obtained good agreement of energy averages with the quantum results for a wide range of different systems and parameters. It is presumed that this method can be useful for systems with
Figure 12: Poincaré section for the Nelson system with \(\mu=2\) and energy \(E=4.8\) with respect to the hyperplane \(y=0\). Each color represents a different trajectory.
many degrees of freedom, as its quadratic scaling law can keep computation tractable even for high dimensions.
The Weyl propagator employed here belongs to the class of propagators related to the original Van Vleck propagator by various Fourier transforms: All of these require the so called 'root search'. Thus, whereas one seeks trajectories with given end positions for the Van Vleck propagator, it is the centre point between the extremities of the trajectory that is prescribed in the Wigner-Weyl representation. So it is only for integrals involving the semiclassical Weyl propagator (or its Wick rotation) that one can switch from the centre to the initial or final value of the trajectory (as in [29]) or to its midpoint, in the present instance.
Each representation illuminates a different aspect of quantum mechanics. Thus, the position representation, so far favoured by the vast majority of computations in this field [10], can easily provide the probability density for positions, that is, the diagonal matrix elements of the density matrix are supplied explicitly. On the other hand, the
Figure 13: Average energy for the Nelson system as a function of thermal time. Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result.
momentum density requires a double Fourier transform over the pair of positions. In contrast, the Wigner function provides either density by a simple projection integral [25], as exemplified by the momentum and position densities for the Morse system, shown in Fig. 10.
A further unique feature of the Wigner-Weyl representation is its symplectic invariance [26], that is, unitary quantum (metaplectic) transformations corresponding to classical linear canonical phase space transformations transport the Wigner function classically. Thus, the Wigner function supplies, through simple projection integrals, not only the momentum probability density and the position probability density, but the probability density along any Lagrangian plane in phase space, that is, any plane where the action for any closed circuit is null. This multiple probability content of the density operator can be used to reconstruct it through multiple measurements in the process of quantum tomography [48], but prior knowledge of the thermal Wigner function is a welcome shortcut.
Figure 14: Specific heat for the Nelson system as a function of thermal time. Solid blue line: quantum result; Orange markers: semiclassical approximation; Green dashed line: Classical result.
## Acknowledgments
We thank Gabriel Lando for his advice on the numerics. Funding provided by Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq) and Instituto Nacional de Ciencia e Tecnologia de Informacao Quantica is gratefully acknowledged.
## Declarations
All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
## Data Availability
The code used in this article can be found at [49].
## Appendix A Wigner symbol of normal forms
In order to calculate the Wigner symbol of an operator of the form (47), it is sufficient to do so for the monomials \(\hat{o}^{n}\), where \(\hat{o}=\hat{p}^{2}+\hat{q}^{2}\). Our strategy will consist in finding a recurrence relation that allows us to calculate \(o^{n+1}(p,q)\) in terms of \(o^{n}(p,q)\). As the initial term \(o^{1}(p,q)=o(p,q)=p^{2}+q^{2}\) is readily obtained, the problem is solved.
For that, we first observe that, as \(\hat{o}^{n}\) is hermitian, \(o^{n}(p,q)\) must be real. Using this fact, writing \(\hat{o}^{n+1}=\hat{o}\hat{o}^{n}\), and applying Groenewold's rule (11), we arrive at the recurrence relation
\[o^{n+1}(p,q)=\left(p^{2}+q^{2}-\frac{\hbar^{2}}{4}\nabla^{2}\right)o^{n}(p,q)\] (A1)
where \(\nabla^{2}=\partial_{p}^{2}+\partial_{q}^{2}\). This relation is further simplified if we introduce the coordinates \(s,\phi\), defined by \(p=\sqrt{s}\cos\phi,\ q=\sqrt{s}\sin\phi\), in terms of which the laplacian takes the
form
\[\nabla^{2}=4\left(s\partial_{s}^{2}+\partial_{s}\right)+\frac{1}{s}\partial_{\phi }^{2},\] (A2)
which allows us to rewrite (A1) as
\[o^{n+1}(s,\phi)=\left[s-\hbar^{2}\left(s\partial_{s}^{2}+\partial_{s}+\frac{1} {4s}\partial_{\phi}^{2}\right)\right]o^{n}(s,\phi)\] (A3)
Since \(\partial_{\phi}o(s,\phi)=0\), and, as deduced from the recurrence relation, \(\partial_{\phi}o^{n}(s,\phi)=0\Rightarrow\partial_{\phi}o^{n+1}(s,\phi)=0\), we prove by induction that \(\partial_{\phi}o^{n}(s,\phi)=0\)\(\forall\)\(n\), which eliminates the derivative with respect to \(\phi\) from (A3). This allows us to easily obtain the first terms in the recurrence relation, which, already expressed in terms of \(p,q\), are given by
\[o^{2}(p,q) =\left(p^{2}+q^{2}\right)^{2}-\hbar^{2}\] (A4) \[o^{3}(p,q) =\left(p^{2}+q^{2}\right)^{3}-5\hbar^{2}\left(p^{2}+q^{2}\right)\] \[o^{4}(p,q) =\left(p^{2}+q^{2}\right)^{4}-14\hbar^{2}\left(p^{2}+q^{2}\right) ^{2}+5\hbar^{4}\]
We see that, in general, \(o^{n}(p,q)\) is a polynomial of order \(n\) in \((p^{2}+q^{2})\), whose dominant term is \((p^{2}+q^{2})^{n}\), while corrections proportional to even powers of \(\hbar\) are also present.
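The recurrence is easy to iterate symbolically; the following snippet (names ours) reproduces (A4).

```python
import sympy as sp

p, q, h = sp.symbols("p q hbar", real=True)
r2 = p**2 + q**2

def next_symbol(o_n):
    # Recurrence (A1): o_{n+1} = (p^2 + q^2 - (hbar^2/4) nabla^2) o_n.
    lap = sp.diff(o_n, p, 2) + sp.diff(o_n, q, 2)
    return sp.expand(r2 * o_n - sp.Rational(1, 4) * h**2 * lap)

o2 = next_symbol(r2)
o3 = next_symbol(o2)
o4 = next_symbol(o3)

# Compare with Eq. (A4); each line prints True.
print(sp.simplify(o2 - (r2**2 - h**2)) == 0)
print(sp.simplify(o3 - (r2**3 - 5 * h**2 * r2)) == 0)
print(sp.simplify(o4 - (r2**4 - 14 * h**2 * r2**2 + 5 * h**4)) == 0)
```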
## Appendix B Numerical Details
The calculations in this article were performed using the Julia language [50]. The package DifferentialEquations.jl [51] was used to solve the necessary differential equations in parallel. The calculations were performed on a 12th Gen Intel Core i5-12600K processor, which has 16 threads.
### Morse System
The integrals related to the Morse system were performed using Gaussian quadrature. The integration region, in units of \(\omega=\hbar=1\), is given by
\[R=\left\{(p,q)\in\mathbb{R}^{2}\;\;\left|\;\chi p^{2}+\frac{1}{4\chi}\left(1-e^{- q}\right)^{2}<\frac{1}{4\chi}\right.\right\}\] (B5)
Introducing the variables \(\tilde{P}=2\chi p\) and \(Q=1-e^{-q}\), we obtain
\[\begin{split} R&=\left\{\left(\tilde{P},Q\right) \in\mathbb{R}^{2}\;\;\left|\;\tilde{P}^{2}+Q^{2}<1\right.\right\}\\ &=\left\{\left(\tilde{P},Q\right)\in\mathbb{R}^{2}\;\;\left|\;Q \in(-1,1)\,;\;\tilde{P}\in\left(-\sqrt{1-Q^{2}},\sqrt{1-Q^{2}}\right)\right. \right\},\end{split}\] (B6)
which can be simplified by defining \(P=\tilde{P}/\sqrt{1-Q^{2}}\). Then, we have
\[R=\left\{\left(P,Q\right)\in\mathbb{R}^{2}\;\;\left|\;Q\in(-1,1)\,;\;P\in(-1,1 )\,\right.\right\}.\] (B7)
The inverse transformation is then
\[\begin{cases}p=\sqrt{1-Q^{2}}\frac{P}{2\chi}\\ q=-\ln\left(1-Q\right)\end{cases},\] (B8)
which has jacobian determinant
\[\det\frac{\partial(p,q)}{\partial(P,Q)}=\frac{1}{2\chi}\sqrt{\frac{1+Q}{1-Q}},\] (B9)
which is proportional to the weight function of a Gauss-Chebyshev quadrature of the third kind. We therefore use this quadrature rule to perform the integration over the \(Q\) coordinate, while a Gauss-Legendre quadrature is used to integrate over \(P\). The
advantage of Gaussian quadrature is that the integration points will be independent of \(\theta\), and then a single set of points can be used to compute the thermodynamic quantities over a range of temperatures. In this work, we used a grid of \(300\times 300\) points to perform the integration, which corresponds to \(9\times 10^{4}\) trajectories.
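To illustrate the change of variables, the snippet below builds the \((P,Q)\) grid of (B7)-(B9) and uses it to evaluate the classical average energy over the bound region, that is, the classical reference curve of figure 8. For simplicity it places Gauss-Legendre nodes in both variables and keeps the jacobian in the integrand instead of using the Gauss-Chebyshev rule described above; to reproduce the semiclassical points, the semiclassical weight of (38) would have to replace the classical Boltzmann factor.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def classical_bound_average(theta, chi, npts=300):
    # Change of variables (B8) in units omega = hbar = 1:
    #   p = sqrt(1 - Q^2) P / (2 chi),  q = -ln(1 - Q),  (P, Q) in (-1, 1)^2,
    # with jacobian (B9): |d(p,q)/d(P,Q)| = sqrt((1 + Q)/(1 - Q)) / (2 chi).
    xP, wP = leggauss(npts)
    xQ, wQ = leggauss(npts)
    P, Q = np.meshgrid(xP, xQ, indexing="ij")
    W = np.outer(wP, wQ)
    p = np.sqrt(1.0 - Q**2) * P / (2.0 * chi)
    q = -np.log(1.0 - Q)
    jac = np.sqrt((1.0 + Q) / (1.0 - Q)) / (2.0 * chi)
    H = chi * p**2 + (1.0 - np.exp(-q))**2 / (4.0 * chi)
    boltz = W * jac * np.exp(-theta * H)
    return np.sum(H * boltz) / np.sum(boltz)     # classical <E> over the bound region

print(classical_bound_average(theta=3.0, chi=0.12))
```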
### Nelson System
In the semiclassical calculations for the Nelson system, different techniques were used for different sets of parameters.
In the case of the energy, as well as the heat capacity with \(\mu=1.5,2\), we first performed the change of variables \((p_{x},p_{y},x,y)\mapsto(P_{x},P_{y},X,Y)\) with
\[\begin{cases}P_{x}=\sqrt{\frac{\theta}{2}}p_{x}\\ P_{y}=\sqrt{\frac{\theta}{2}}p_{y}\\ X=\sqrt{\theta\mu}x\\ Y=\sqrt{\theta}(y-x^{2}/2)\end{cases}.\] (B10)
This transformation has unit jacobian determinant and, in terms of the new variables, we have that the classical Boltzmann's weight is simply
\[e^{-\beta H}=\exp\left[-\left(P_{x}^{2}+P_{y}^{2}+X^{2}+Y^{2}\right)\right].\] (B11)
The integration is then performed by an h-adaptive technique as described in [52, 53]. The Julia implementation can be found in [54]. We bounded the integration algorithm to use roughly \(10^{5}\) integration points. We used the BS3 [55, 56] and Vern6 [56, 57] algorithms to solve the differential equations, and the tolerances varied between \(10^{-2}\) and \(10^{-6}\). For each \(\mu\), the corresponding plot took around 40 seconds to 3 minutes to complete.
We found that the heat capacities with \(\mu=0.5,1\) were much harder to integrate. In this case, we didn't perform a change of variables and resorted to a Monte Carlo integration method, where the \(10^{7}\) integration points were sampled from the _classical_ Boltzmann's distribution \(e^{-\beta H(\mathbf{x})}/Z\) using the Metropolis-Hastings algorithm [58, 59]. In this case, for each \(\mu\), the corresponding plot took around 3 hours to complete.
For the energy spectrum, which is used to calculate the quantum versions of the thermodynamic quantities, we used a grid of \(160\times 160\) points, where \(x\) spanned from \(-4.5\) to \(4.5\), and \(y\) spanned from \(-4\) to \(5\). We then approximated the laplacian of the time-independent Schrodinger equation by a finite-difference matrix over this grid. The discretization of this equation gives rise to an eigenvalue equation, which can be solved through standard linear algebra libraries.
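A down-scaled sketch of this diagonalization is shown below, assuming Dirichlet boundaries, \(\hbar=1\), a coarser grid than the one quoted above, and SciPy's sparse shift-invert eigensolver in place of a dense solver.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import eigsh

def nelson_spectrum(mu, nx=120, ny=120, k=60, hbar=1.0):
    # Discretize H = -(hbar^2/2)(d^2/dx^2 + d^2/dy^2) + (x^2/2 - y)^2 + mu x^2
    # on x in [-4.5, 4.5], y in [-4, 5] with the 3-point finite-difference laplacian.
    x = np.linspace(-4.5, 4.5, nx)
    y = np.linspace(-4.0, 5.0, ny)
    dx, dy = x[1] - x[0], y[1] - y[0]
    d2x = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx)) / dx**2
    d2y = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(ny, ny)) / dy**2
    lap = kron(d2x, identity(ny)) + kron(identity(nx), d2y)
    X, Y = np.meshgrid(x, y, indexing="ij")
    V = (0.5 * X**2 - Y) ** 2 + mu * X**2
    H = (-0.5 * hbar**2) * lap + diags(V.ravel())
    vals = eigsh(H.tocsc(), k=k, sigma=0.0)[0]   # k eigenvalues nearest zero
    return np.sort(vals)

print(nelson_spectrum(mu=2.0, k=10)[:5])         # a few of the lowest levels
```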
|
2309.04963 | Packings in bipartite prisms and hypercubes | The $2$-packing number $\rho_2(G)$ of a graph $G$ is the cardinality of a
largest $2$-packing of $G$ and the open packing number $\rho^{\rm o}(G)$ is the
cardinality of a largest open packing of $G$, where an open packing (resp.
$2$-packing) is a set of vertices in $G$ no two (closed) neighborhoods of which
intersect. It is proved that if $G$ is bipartite, then $\rho^{\rm o}(G\Box K_2)
= 2\rho_2(G)$. For hypercubes, the lower bounds $\rho_2(Q_n) \ge 2^{n - \lfloor
\log n\rfloor -1}$ and $\rho^{\rm o}(Q_n) \ge 2^{n - \lfloor \log (n-1)\rfloor
-1}$ are established. These findings are applied to injective colorings of
hypercubes. In particular, it is demonstrated that $Q_9$ is the smallest
hypercube which is not perfect injectively colorable. It is also proved that
$\gamma_t(Q_{2^k}\times H) = 2^{2^k-k}\gamma_t(H)$, where $H$ is an arbitrary
graph with no isolated vertices. | Boštjan Brešar, Sandi Klavžar, Douglas F. Rall | 2023-09-10T08:40:38Z | http://arxiv.org/abs/2309.04963v1 | # Packings in bipartite prisms and hypercubes
###### Abstract
The 2-packing number \(\rho_{2}(G)\) of a graph \(G\) is the cardinality of a largest 2-packing of \(G\) and the open packing number \(\rho^{\rm o}(G)\) is the cardinality of a largest open packing of \(G\), where an open packing (resp. 2-packing) is a set of vertices in \(G\) no two (closed) neighborhoods of which intersect. It is proved that if \(G\) is bipartite, then \(\rho^{\rm o}(G\,\square\,K_{2})=2\rho_{2}(G)\). For hypercubes, the lower bounds \(\rho_{2}(Q_{n})\geq 2^{n-\lfloor\log n\rfloor-1}\) and \(\rho^{\rm o}(Q_{n})\geq 2^{n-\lfloor\log(n-1)\rfloor-1}\) are established. These findings are applied to injective colorings of hypercubes. In particular, it is demonstrated that \(Q_{9}\) is the smallest hypercube which is not perfect injectively colorable. It is also proved that \(\gamma_{t}(Q_{2^{k}}\times H)=2^{2^{k}-k}\gamma_{t}(H)\), where \(H\) is an arbitrary graph with no isolated vertices.
\({}^{a}\) Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia
\({}^{b}\) Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia
\({}^{c}\) Faculty of Mathematics and Physics, University of Ljubljana, Slovenia
\({}^{d}\) Department of Mathematics, Furman University, Greenville, SC, USA
**Keywords:** 2-packing number, open packing number, bipartite prism, hypercube, injective coloring, (total) domination number
**AMS Subj. Class. (2020)**: 05C69, 05C76
## 1 Introduction
For many reasons, hypercubes are ubiquitous in theoretical computer science and in combinatorics. Understanding their structure is therefore a fundamental problem. Although hypercubes have a seemingly simple structure, we quickly encounter very complex problems. For instance, one of them was the middle levels problem, which was successfully resolved [15]. On the other hand, the problem of determining the
domination number of hypercubes is beyond the reach of existing methods. To date, exact values of \(\gamma(Q_{n})\) are only known for \(n\leq 9\), where the value \(\gamma(Q_{9})=62\) was obtained in [16], and for the following two infinite families.
**Theorem 1.1**.: ([7, 19]) _If \(k\geq 1\), then \(\gamma(Q_{2^{k}-1})=2^{2^{k}-k-1}\) and \(\gamma(Q_{2^{k}})=2^{2^{k}-k}\)._
The values \(\gamma(Q_{2^{k}-1})=2^{2^{k}-k-1}\) can be obtained from the fact that hypercubes \(Q_{2^{k}-1}\) admit 1-perfect codes, in which case the domination number coincides with the cardinality of a 1-perfect code.
The most important variation of the domination number is the total domination number; see a recent monograph [8] surveying domination theory with the two invariants in the central role. Roughly speaking, domination operates with closed neighborhoods while total domination with open neighborhoods, which often causes a different behavior of the invariants. However, as proved in [1] by using hypergraph transversals, \(\gamma_{t}(Q_{n+1})=2\gamma(Q_{n})\) for all \(n\), which makes the domination number and the total domination number in hypercubes tightly connected. More generally, the authors of [1] proved that \(\gamma_{t}(G\,\square\,K_{2})=2\gamma(G)\) as soon as \(G\) is a bipartite graph.
The concepts of packing number and open packing number of a graph are often used in domination theory, since they present natural lower bounds on the domination number and the total domination number, respectively, of the graph. The concept of packing was used back in 1975 by Meir and Moon in their classical theorem stating that in a tree the domination number equals the packing number [11]. On the other hand, open packing was introduced by Henning and Slater [9], and was later used in [18] to prove a canonical formula for the total domination number of the direct product of two graphs, which holds if one of the factors has the total domination number equal to its open packing number. Similarly as total domination is related to domination, open packing can be regarded as a version of packing in which closed neighborhoods are replaced with open neighborhoods. See [12, 13, 14] for some recent studies of (open) packings as well as [5] for their application.
Open packings are also related to the so-called injective colorings of graphs, cf. [17]. More precisely, an injective coloring of a graph is exactly a partition of its vertex set into open packings. In a recent paper [3], graphs that admit injective colorings such that each of the color classes is a maximum open packing were considered. While proving this property for hypercubes of some small dimensions, it was also proved for those whose dimension is a power of 2. Yet, nothing else was known, including whether there exists a hypercube that does not satisfy this property. One of the reasons for the difficulty of this question is that the open packing number (i.e., the cardinality of a maximum open packing) has not been known.
We proceed as follows. In the remainder of this introduction, we provide the definitions and concepts needed in what follows. In Section 2 we prove that the
open packing number of a prism over a bipartite graph \(G\) is twice the \(2\)-packing number of \(G\). This result nicely complements [1, Theorem 1] which states that the total domination number of a prism over a bipartite graph \(G\) is twice the domination number of \(G\). We also demonstrate that in general, the open packing number of a prism over a graph \(G\) can be arbitrarily larger than the \(2\)-packing number of \(G\). In Section 3 we prove lower bounds on the \(2\)-packing number and the open packing number of hypercubes. The bounds are sharp for small dimensions and for two infinite families, but are not sharp in general. In the subsequent section we apply these findings to injective colorings of hypercubes. In particular we demonstrate that \(Q_{9}\) is the smallest hypercube which is not perfect injectively colorable. In the concluding remarks, we give an overview of the known values for the hypercube invariants considered here and also derive the total domination number of the direct product of \(Q_{2^{k}}\) and an arbitrary graph.
### Preliminaries
Let \(G=(V(G),E(G))\) be a graph and \(x\in V(G)\). The _open neighborhood_\(N(x)\) is the set of vertices adjacent to \(x\) and the _closed neighborhood_ is \(N[x]=N(x)\cup\{x\}\). A set \(D\subseteq V(G)\) is a _dominating set_ of \(G\) if each vertex of \(V(G)\setminus D\) has a neighbor in \(D\). The cardinality of a smallest dominating set of \(G\) is the _domination number_\(\gamma(G)\) of \(G\). Similarly, \(D\subseteq V(G)\) is a _total dominating set_ of \(G\) if each vertex of \(V(G)\) has a neighbor in \(D\). The cardinality of a smallest total dominating set of \(G\) is the _total domination number_\(\gamma_{t}(G)\) of \(G\).
Let \(X\subseteq V(G)\). Then \(X\) is a \(2\)_-packing_ of \(G\) if \(N[x]\cap N[y]=\emptyset\) for every pair of distinct vertices \(x,y\in X\). Similarly, if \(N(x)\cap N(y)=\emptyset\) for every pair of distinct vertices \(x,y\in X\), then \(X\) is an _open packing_ of \(G\). The cardinality of a largest \(2\)-packing of \(G\) is the \(2\)_-packing number_\(\rho_{2}(G)\) of \(G\) and the cardinality of a largest open packing of \(G\) is the _open packing number_\(\rho^{\circ}(G)\) of \(G\). By a \(\rho_{2}\)_-set_ of \(G\) we mean a \(2\)-packing of \(G\) of cardinality \(\rho_{2}(G)\). A \(\rho^{\circ}\)_-set_ of \(G\) is defined analogously.
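These definitions translate directly into a small checker; in the sketch below graphs are given as adjacency dictionaries mapping a vertex to its set of neighbors, and the names are ours.

```python
from itertools import combinations

def closed_nbhd(G, v):
    return G[v] | {v}

def is_2_packing(G, X):
    # N[x] and N[y] must be disjoint for every pair of distinct x, y in X.
    return all(not (closed_nbhd(G, x) & closed_nbhd(G, y)) for x, y in combinations(X, 2))

def is_open_packing(G, X):
    # N(x) and N(y) must be disjoint for every pair of distinct x, y in X.
    return all(not (G[x] & G[y]) for x, y in combinations(X, 2))

# Example: the path P4 with vertices 0-1-2-3 as an adjacency dictionary.
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_2_packing(P4, {0, 3}), is_open_packing(P4, {0, 1}))   # True True
```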
If \(X\) is a \(2\)-packing such that \(V(G)=\cup_{x\in X}N[x]\) then we say that \(X\) is a \(1\)_-perfect code_ of \(G\). In domination theory, \(1\)-perfect codes are known as _efficient dominating sets_, see [8, Chapter 9] and [10]. Since \(\gamma(G)\geq\rho_{2}(G)\) for every graph \(G\), if \(X\) is a \(1\)-perfect code of \(G\), then \(X\) is also a dominating set of \(G\). This observation leads to the following well known fact.
**Proposition 1.2**.: _If \(G\) admits a \(1\)-perfect code, then \(\gamma(G)=\rho_{2}(G)\). If in addition \(G\) is \(r\)-regular, then \(\gamma(G)=\rho_{2}(G)=\frac{n(G)}{r+1}\)._
The _Cartesian product_\(G\,\square\,H\) of graphs \(G\) and \(H\) is the graph whose vertex set is \(V(G)\times V(H)\), and two vertices \((g_{1},h_{1})\) and \((g_{2},h_{2})\) are adjacent in \(G\,\square\,H\) if
either \(g_{1}=g_{2}\) and \(h_{1}h_{2}\) is an edge in \(H\) or \(h_{1}=h_{2}\) and \(g_{1}g_{2}\) is an edge in \(G\). For a vertex \(g\) of \(G\), the subgraph of \(G\,\square\,H\) induced by the set \(\{(g,h):\,h\in V(H)\}\) is an \(H\)_-fiber_ and is denoted by \({}^{g}\!H\). Similarly, for \(h\in H\), the \(G\)_-fiber_, \(G^{h}\), is the subgraph induced by \(\{(g,h):\,g\in V(G)\}\). Cartesian product is commutative and associative. The _hypercube_ of dimension \(n\), or the \(n\)_-cube_, is isomorphic to \(K_{2}\,\square\,\cdots\,\square\,K_{2}\), where there are \(n\) factors \(K_{2}\), and is denoted by \(Q_{n}\). The equality \(Q_{n}=Q_{n-1}\,\square\,K_{2}\) will be used (at least implicitly) several times in the paper. Finally, the _direct product_\(G\times H\) of graphs \(G\) and \(H\) has the vertex set \(V(G)\times V(H)\), and two vertices \((g_{1},h_{1})\) and \((g_{2},h_{2})\) are adjacent in \(G\times H\) if \(g_{1}g_{2}\) is an edge in \(G\) and \(h_{1}h_{2}\) is an edge in \(H\).
## 2 Packing vs. open packing in bipartite prisms
In [1] it was proved that if \(G\) is a bipartite graph, then \(\gamma_{t}(G\,\square\,K_{2})=2\gamma(G)\). In this section we prove an analogous result that connects the open packing number and the packing number.
We begin with the following simple lemma, which holds in all graphs.
**Lemma 2.1**.: _If \(G\) is a graph, then \(\rho^{\mathrm{o}}(G\,\square\,K_{2})\geq 2\rho_{2}(G)\)._
Proof.: Let \(G\) be a graph, and let \(P\) be a \(\rho_{2}\)-set of \(G\). Then \(P\times V(K_{2})\) is an open packing of \(G\,\square\,K_{2}\), hence the result.
In general, \(\rho^{\mathrm{o}}(G\,\square\,K_{2})\) can be arbitrarily larger than \(2\rho_{2}(G)\). For an example consider the family of graphs \(G_{k}\), \(k\geq 1\), defined as follows. \(G_{k}\) contains \(2k\) disjoint cycles \(C_{5}\) connected in a row by an edge between two consecutive \(5\)-cycles. This informal definition of \(G_{k}\) should be clear from Fig. 1 where \(G_{2}\,\square\,K_{2}\) is drawn. As an arbitrary packing of \(G_{k}\) contains at most one vertex of each \(C_{5}\) we infer that \(\rho_{2}(G_{k})=2k\). On the other hand, repeating the pattern as shown in Fig. 1 for \(k=2\), we get \(\rho^{\mathrm{o}}(G_{k}\,\square\,K_{2})\geq 5k\).
For bipartite graphs, however, the above phenomena cannot occur as the main result of this section asserts.
Figure 1: An open packing in \(G_{2}\,\square\,K_{2}\)
**Theorem 2.2**.: _If \(G\) is a bipartite graph, then \(\rho^{\rm o}(G\,\square\,K_{2})=2\rho_{2}(G)\)._
Proof.: Let \(G\) be a bipartite graph with parts \(A\) and \(B\) forming the natural partition of \(V(G)\). By Lemma 2.1, we have \(\rho^{\rm o}(G\,\square\,K_{2})\geq 2\rho_{2}(G)\). To prove the reversed inequality, consider an open packing \(O\) in \(G\,\square\,K_{2}\) such that \(|O|=\rho^{\rm o}(G\,\square\,K_{2})\). We will show that \(O\) can be transformed into an open packing \(O^{\prime}\) of the form \(P^{\prime}\times V(K_{2})\), where \(P^{\prime}\) is a subset of \(V(G)\). (Clearly, the latter also implies that \(P^{\prime}\) is a 2-packing.) Note that \(O\) can be presented as the disjoint union \(I\cup R\), where \(I\) is the set of vertices that are isolated in the subgraph of \(G\,\square\,K_{2}\) induced by \(O\), while \(R\) is the set of vertices that have exactly one neighbor in \(O\). Clearly, at least one of the sets \(I\) or \(R\) is non-empty. Set \(V(K_{2})=\{1,2\}\), and let \(I_{i}=I\cap V(G^{i})\) and \(R_{i}=R\cap V(G^{i})\) for all \(i\in[2]\). In addition, let \(I_{i}^{A}=\{(u,i)\in I_{i}:\,u\in A\}\), \(I_{i}^{B}=\{(u,i)\in I_{i}:\,u\in B\}\) for \(i\in[2]\), and similarly let \(R_{i}^{A}=\{(u,i)\in R_{i}:\,u\in A\}\), \(R_{i}^{B}=\{(u,i)\in R_{i}:\,u\in B\}\) for \(i\in[2]\). Next, we compare the two quantities \(|I_{1}^{A}|+|I_{2}^{B}|\) and \(|I_{2}^{A}|+|I_{1}^{B}|\). We may assume with no loss of generality that \(|I_{1}^{A}|+|I_{2}^{B}|\geq|I_{2}^{A}|+|I_{1}^{B}|\). Now, the announced transformation of \(O\) to \(O^{\prime}\) is defined as follows:
* if \((u,t)\in I_{1}^{A}\cup I_{2}^{B}\), then let \(\{u\}\times V(K_{2})\subseteq O^{\prime}\);
* if \((u,t)\in I_{2}^{A}\cup I_{1}^{B}\), then let \((\{u\}\times V(K_{2}))\cap O^{\prime}=\emptyset\);
* if \((u,1)\in R_{1}\) and \((u,2)\in R_{2}\), then let \(\{u\}\times V(K_{2})\subseteq O^{\prime}\);
* if \((u,1)\in R_{1}^{A}\) and \((v,1)\in R_{1}^{B}\), where \(uv\in E(G)\), then let \(\{u\}\times V(K_{2})\subseteq O^{\prime}\) and \((\{v\}\times V(K_{2}))\cap O^{\prime}=\emptyset\);
* if \((u,2)\in R_{2}^{A}\) and \((v,2)\in R_{2}^{B}\), where \(uv\in E(G)\), then let \(\{v\}\times V(K_{2})\subseteq O^{\prime}\) and \((\{u\}\times V(K_{2}))\cap O^{\prime}=\emptyset\).
We claim that \(|O^{\prime}|\geq|O|\). Indeed, the first two rows in the above transformation show that for every vertex \((u,t)\in I_{1}^{A}\cup I_{2}^{B}\) we get two vertices in \(O^{\prime}\), while for every vertex \((u,t)\in I_{2}^{A}\cup I_{1}^{B}\) we get no vertices in \(O^{\prime}\), yet \(|I_{1}^{A}\cup I_{2}^{B}|\geq|I_{2}^{A}\cup I_{1}^{B}|\) by the earlier assumption. By the last three rows of the above transformation, every pair of vertices in \(R\) is replaced by two vertices in \(O^{\prime}\). This altogether implies that \(|O^{\prime}|\geq|O|\), so it remains to prove that \(O^{\prime}\) is an open packing in \(G\,\square\,K_{2}\).
If \((u,1)\in I_{1}^{A}\) and \((v,1)\in I_{1}^{A}\), then \(d_{G}(u,v)\geq 4\), because the vertices belong to \(O\), which is an open packing, and \(u\) and \(v\) are both in \(A\). Thus vertices in \(\{u\}\times V(K_{2})\) will be at distance at least 4 from the vertices in \(\{v\}\times V(K_{2})\). By symmetry, we get the same conclusion for vertices \((u,2)\in I_{2}^{B}\) and \((v,2)\in I_{2}^{B}\). If \((u,1)\in I_{1}^{A}\) and \((v,2)\in I_{2}^{B}\), then \(d_{G}(u,v)\geq 3\), because \(u\) and \(v\) belong to different parts, \(A\) and \(B\) respectively, of the bipartition of \(V(G)\) and they belong to \(O\), which is an open packing. Thus, vertices in \(\{u\}\times V(K_{2})\) will be at distance at least 3 from the vertices
in \(\{v\}\times V(K_{2})\), as desired. Clearly, if \((u,t)\in I_{1}^{A}\cup I_{2}^{B}\), then \(d_{G}(u,v)\geq 3\) for any \(v\in V(G)\) such that \(\{(v,1),(v,2)\}\subset R\). This yields that vertices in \(\{u\}\times V(K_{2})\) will be at distance at least \(3\) from the vertices in \(\{v\}\times V(K_{2})\). If \((u,1)\in I_{1}^{A}\) and \((v,1)\in R_{1}^{A}\), we have \(d_{G}(u,v)\geq 4\). On the other hand, if \((u,1)\in I_{1}^{A}\) and \((v,2)\in R_{2}^{B}\) we have \(d_{G}(u,v)\geq 3\). In either case, the corresponding vertices in \(O^{\prime}\) are at least three apart. By symmetry, we can find that for vertices in \(I_{2}^{B}\) and vertices in \(R_{1}^{A}\cup R_{2}^{B}\) their distances are sufficiently large so that the corresponding \(K_{2}\)-fibers that are in \(O^{\prime}\) will be at distance at least \(3\). This completes the proof that the distance between the vertices in \(O^{\prime}\) that appear in the first row of the above transformation to all other vertices in \(O^{\prime}\) will be at least \(3\), except of course for two vertices in \(O^{\prime}\) that belong to the same \(K_{2}\)-fiber and are adjacent.
Vertices of \(O^{\prime}\) that appear in the third row of the transformation remain at distance at least \(3\) from all other vertices in \(O^{\prime}\) (with the clear exception of two adjacent such vertices). Therefore, it remains to consider the vertices in \(O^{\prime}\) that appear in the last two rows of the above transformation. Suppose there are two vertices in \(R_{1}^{A}\) (and a similar argument can be applied if they are in \(R_{2}^{B}\)), say, \((u,1)\) and \((v,1)\), which are not adjacent. Then \(d_{G}(u,v)\geq 4\), and so \(\{u\}\times V(K_{2})\) will be at distance at least \(4\) from the vertices in \(\{v\}\times V(K_{2})\) (by symmetry, the same conclusion applies if \((u,2)\) and \((v,2)\) are in \(R_{2}^{B}\)). Finally, let \((u,1)\in R_{1}^{A}\) and \((v,2)\in R_{2}^{B}\). Since \(O\) is an open packing, we have \(d_{G}(u,v)>1\), and since they are in different parts of the bipartition, we get \(d_{G}(u,v)\geq 3\). We derive that \(\{u\}\times V(K_{2})\) will be at distance at least \(3\) from the vertices in \(\{v\}\times V(K_{2})\), which concludes the proof that \(O^{\prime}\) is an open packing. Since \(|O|=\rho^{\rm o}(G\,\square\,K_{2})\) and \(|O^{\prime}|\geq|O|\), we derive \(|O^{\prime}|=|O|=\rho^{\rm o}(G\,\square\,K_{2})\). In addition, there exists a set \(P^{\prime}\subset V(G)\) such that \(O^{\prime}=P^{\prime}\times[2]\), where \(P^{\prime}\) is a \(2\)-packing of \(G\). Hence, \(|P^{\prime}|\leq\rho_{2}(G)\), and so \(|O^{\prime}|=2|P^{\prime}|\leq 2\rho_{2}(G)\), implying \(\rho^{\rm o}(G\,\square\,K_{2})\leq 2\rho_{2}(G)\).
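For very small bipartite graphs the statement of Theorem 2.2 can also be confirmed by exhaustive search, as in the following brute-force sketch (feasible only for a handful of vertices; the vertex set of \(K_{2}\) is taken as \(\{0,1\}\)).

```python
from itertools import combinations

def prism_neighbors(G, v):
    # Adjacency in the Cartesian product G x K_2 with V(K_2) = {0, 1}:
    # (u, i) is adjacent to (w, i) when uw is an edge of G, and to (u, 1 - i).
    u, i = v
    return {(w, i) for w in G[u]} | {(u, 1 - i)}

def max_open_packing_prism(G):
    V = [(u, i) for u in G for i in (0, 1)]
    for r in range(len(V), 0, -1):
        for X in combinations(V, r):
            if all(not (prism_neighbors(G, x) & prism_neighbors(G, y))
                   for x, y in combinations(X, 2)):
                return r
    return 0

def max_2_packing(G):
    for r in range(len(G), 0, -1):
        for X in combinations(G, r):
            if all(not ((G[x] | {x}) & (G[y] | {y})) for x, y in combinations(X, 2)):
                return r
    return 0

# The path P6 (bipartite): Theorem 2.2 predicts the two numbers below coincide.
P6 = {i: {j for j in (i - 1, i + 1) if 0 <= j < 6} for i in range(6)}
print(max_open_packing_prism(P6), 2 * max_2_packing(P6))   # 4 4
```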
## 3 (Open) packings in hypercubes
The following lemma follows by observing that the restriction of a \(2\)-packing in \(G\,\square\,K_{2}\) to a \(G\)-fiber is a \(2\)-packing of that fiber.
**Lemma 3.1**.: _If \(G\) is a graph, then \(\rho_{2}(G\,\square\,K_{2})\leq 2\rho_{2}(G)\)._
We can now bound \(\rho_{2}\) and \(\rho^{\rm o}\) of hypercubes as follows.
**Theorem 3.2**.: _If \(n\geq 2\), then_
1. \(\rho_{2}(Q_{n})\geq 2^{n-\lfloor\log n\rfloor-1}\quad\text{and}\)__
2. \(\rho^{\rm o}(Q_{n})\geq 2^{n-\lfloor\log(n-1)\rfloor-1}\)_._
Proof.: (i) Suppose first that \(n=2^{k}-1\), where \(k\geq 2\). As already mentioned, in these cases \(Q_{n}\) admits a \(1\)-perfect code, say \(S\). Then \(|S|=2^{2^{k}-1}/2^{k}=2^{2^{k}-k-1}\) and consequently
\[\rho_{2}(Q_{n})=|S|=2^{2^{k}-k-1}=2^{2^{k}-1-(k-1)-1}=2^{n-\lfloor\log n\rfloor- 1}\,.\]
Consider now the hypercubes \(Q_{n}\), where \(k\geq 2\) and \(2^{k-1}-1<n<2^{k}-1\). In particular, if \(n=2^{k}-2\), then since \(Q_{2^{k}-1}=Q_{2^{k}-2}\,\square\,K_{2}\), Lemma 3.1 implies that
\[\rho_{2}(Q_{n})=\rho_{2}(Q_{2^{k}-2})\geq\frac{1}{2}\rho_{2}(Q_{2^{k}-1})=2^{2 ^{k}-k-2}=2^{2^{k}-2-(k-1)-1}=2^{n-\lfloor\log n\rfloor-1}\,.\]
Inductively applying the lemma, the result holds for all \(n\) such that \(2^{k-1}-1<n<2^{k}-1\). Therefore, (i) holds for all \(n\geq 2\).
(ii) Applying Theorem 2.2 and (i), we have
\[\rho^{\rm o}(Q_{n})=2\rho_{2}(Q_{n-1})\geq 2\cdot 2^{(n-1)-\lfloor\log(n-1) \rfloor-1}=2^{n-\lfloor\log(n-1)\rfloor-1}\]
for all \(n\geq 2\) and we are done.
If \(n\leq 7\), then equality holds in Theorem 3.2(i). The cases when \(n\in\{2,3,4\}\) can be easily argued by case analysis. The equality in the cases when \(n\in\{5,6\}\) then follows by combining Lemma 3.1 and Theorem 3.2(i). For \(n=7\), the equality holds because \(Q_{7}\) has a \(1\)-perfect code. One is thus tempted to conjecture that the lower bound in Theorem 3.2(i) holds with equality for all \(n\). However, with the help of a computer, we found the set
\[T= \{00000000,00001110,00110010,00111100,01010110,01011000,\] \[01100100,01101001,0111111,100100,10100101,10101011,\] \[11000111,11001100,11011011,11100010,11110001\}\]
which is a \(2\)-packing in \(Q_{8}\) with \(|T|=17\), hence \(\rho_{2}(Q_{8})\geq 17\). By Theorem 2.2, this in turn implies that \(\rho^{\rm o}(Q_{9})\geq 34\). Hence also the lower bound in Theorem 3.2(ii) is not sharp in general. It is sharp however for all \(n\leq 8\) because the lower bound in Theorem 3.2(i) is sharp for \(n\leq 7\) and because of Theorem 2.2. Furthermore, by using Theorem 2.2 and the fact that the lower bound in Theorem 3.2(i) is sharp when \(n=2^{k}-1\), it follows that the lower bound in Theorem 3.2(ii) is sharp for each value of \(n\) that is a power of \(2\).
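Such verifications are convenient to automate: in \(Q_{n}\) the closed neighborhoods of two distinct vertices intersect exactly when their Hamming distance is at most 2, so a set of binary words is a 2-packing if and only if its pairwise Hamming distances are at least 3. The snippet below checks this condition and reproduces the perfect-code construction behind the case \(n=2^{k}-1\) of Theorem 3.2(i); the generator rows are one standard choice for a \([7,4]\) Hamming code.

```python
from itertools import combinations, product

def is_2_packing_of_Qn(words):
    # A set of binary n-tuples is a 2-packing of Q_n iff all pairwise
    # Hamming distances are at least 3.
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    return all(dist(a, b) >= 3 for a, b in combinations(words, 2))

# The 16 codewords of a [7,4] Hamming code form a 1-perfect code of Q_7,
# hence a 2-packing of cardinality 2^(7 - floor(log 7) - 1) = 16.
GEN = [(1, 0, 0, 0, 1, 1, 0), (0, 1, 0, 0, 1, 0, 1),
       (0, 0, 1, 0, 0, 1, 1), (0, 0, 0, 1, 1, 1, 1)]
code = {tuple(sum(b * g[i] for b, g in zip(bits, GEN)) % 2 for i in range(7))
        for bits in product((0, 1), repeat=4)}
print(len(code), is_2_packing_of_Qn(code))   # 16 True
```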
## 4 Application to injective colorings
An _injective coloring_ of a graph \(G\) is a partition of the vertex set of \(G\) into open packings. The _injective chromatic number_, \(\chi_{i}(G)\), of \(G\) is the minimum cardinality of an injective coloring in \(G\). The concept was introduced by Hahn, Kratochvil, Siran and Sotteau [6] back in 2002, and has been considered by a number of authors, cf. [2, 4]. In the recent paper [3], graphs that admit special types of injective colorings were considered: a graph \(G\) is a _perfect injectively colorable graph_ if it has an injective coloring in which every color class forms a \(\rho^{\mathrm{o}}\)-set of \(G\). The authors of [3] considered hypercubes that are perfect injectively colorable. They noticed that such are the hypercubes \(Q_{n}\), where \(n\in[5]\), and proved that for all \(k\in\mathbb{N}\), the hypercube \(Q_{2^{k}}\) is a perfect injectively colorable graph. Apart from the mentioned cases, it was asked in [3, Problem 1] in which other dimensions the hypercube is perfect injectively colorable. Since an answer to the question is closely related to computing the value of the open packing number of hypercubes, it was also asked in [3, Problem 2] what is the value of \(\rho^{\mathrm{o}}(Q_{n})\) for \(n\geq 6\).
In this note, we give some partial answers to the above two questions. One
Figure 2: Partition of \(V(Q_{6})\) into (maximum) 2-packings of \(Q_{6}\).
can easily find that \(\rho_{2}(Q_{5})=4\), which by Theorem 2.2 implies that \(\rho^{\rm o}(Q_{6})=8\). In addition, Fig. 2 shows a maximum 2-packing of \(Q_{6}\) of cardinality 8, where vertices of an arbitrary color in [8] form a maximum 2-packing. This gives, again by Theorem 2.2, that \(\rho^{\rm o}(Q_{7})=16\). In addition, recall that \(\rho^{\rm o}(Q_{8})=32\), which follows from the fact that \(Q_{7}\) has a perfect code. Now, by the observation from Section 3, we have \(\rho_{2}(Q_{8})\geq 17\). On the other hand, we claim that \(\rho_{2}(Q_{8})\leq 30\). Suppose to the contrary that \(\rho_{2}(Q_{8})>30\), and let \(P\) be a \(\rho_{2}\)-set of \(Q_{8}\). Then, partitioning \(V(Q_{8})\) into \(Q\) and \(Q^{\prime}\), each of which induces \(Q_{7}\), we infer that either \(|Q\cap P|\) or \(|Q^{\prime}\cap P|\) is equal to 16. We may assume that \(|Q\cap P|=16\), and noting that \(Q\cap P\) is a 2-packing of \(Q_{7}\), this implies that \(Q\cap P\) corresponds to a perfect code of \(Q_{7}\), thus \(Q\cap P\) is a dominating set of \(Q\). This in turn implies that every vertex in \(Q^{\prime}\) is at distance at most 2 from a vertex in \(Q\cap P\), which yields that \(P=Q\cap P\), and so \(|P|=16\), a contradiction proving that \(\rho_{2}(Q_{8})\leq 30\). Now, using Theorem 2.2, we get \(34\leq\rho^{\rm o}(Q_{9})\leq 60\). In particular, \(\rho^{\rm o}(Q_{9})\) is not a power of 2, which readily implies that \(Q_{9}\) does not admit a partition into \(\rho^{\rm o}\)-sets, and is consequently not a perfect injectively colorable graph. On the other hand, refer to Fig. 2 again, which shows a coloring of \(Q_{6}\) in which each color class is a 2-packing of cardinality \(\rho_{2}(Q_{6})\). By applying Theorem 2.2 and the first part of its proof, one can construct an injective coloring of \(Q_{7}\) in which each color class is an open packing of cardinality \(\rho^{\rm o}(Q_{7})\). Therefore, \(Q_{7}\) is a perfect injectively colorable graph.
Summarizing the above, hypercubes \(Q_{n}\), where \(n\leq 8\), are perfect injectively colorable graphs, and so \(Q_{9}\) is the first instance of a hypercube that is not in this class of graphs.
## 5 Concluding remarks
Table 1 presents values or bounds on the main domination and packing invariants in hypercubes \(Q_{n}\), for all \(n\leq 9\). The values for \(\gamma\) and \(\gamma_{t}\) have been known earlier, while some of the values and bounds for \(\rho_{2}\) and \(\rho^{\rm o}\) have been obtained in this paper.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(n\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline \hline \(\gamma\) & 1 & 2 & 2 & 4 & 7 & 12 & 16 & 32 & 62 \\ \hline \(\gamma_{t}\) & 2 & 2 & 4 & 4 & 8 & 14 & 24 & 32 & 64 \\ \hline \(\rho_{2}\) & 1 & 1 & 2 & 2 & 4 & 8 & 16 & 17-30 & ? \\ \hline \(\rho^{\rm o}\) & 2 & 2 & 2 & 4 & 4 & 8 & 16 & 32 & 34-60 \\ \hline \end{tabular}
\end{table}
Table 1: Packing and domination invariants in hypercubes \(Q_{n}\), where \(n<10\).
In addition, consider the value \(\gamma_{t}(Q_{2^{k}})=2^{2^{k}-k}\), which follows from Theorem 1.1 combined with the formula \(\gamma_{t}(G\,\square\,K_{2})=2\gamma(G)\) from [1]. Now, compare this with the bound \(\rho^{\rm o}(Q_{2^{k}})\geq 2^{2^{k}-k}\), which follows from Theorem 3.2(ii) when plugging \(n=2^{k}\). Since \(\gamma_{t}(G)\geq\rho^{\rm o}(G)\) for every graph \(G\) with no isolated vertices, we infer that
\[\gamma_{t}(Q_{2^{k}})=2^{2^{k}-k}=\rho^{\rm o}(Q_{2^{k}}),\mbox{ for all }k\in \mathbb{N}. \tag{1}\]
Recall the result from [18] stating that \(\gamma_{t}(G\times H)=\gamma_{t}(G)\gamma_{t}(H)\) whenever \(G\) is a graph with \(\rho^{\rm o}(G)=\gamma_{t}(G)\) and graphs \(G\) and \(H\) have no isolated vertices. Therefore, from the discussion above we get that
\[\gamma_{t}(Q_{2^{k}}\times H)=2^{2^{k}-k}\gamma_{t}(H)\,,\]
where \(k\in\mathbb{N}\) and \(H\) is an arbitrary graph with no isolated vertices. An additional family of graphs with this property (that \(\gamma_{t}=\rho^{\rm o}\)) can be found in [12]. It would be interesting to establish if there are any hypercubes \(Q_{n}\) of other dimensions than those in (1) that satisfy the equality \(\gamma_{t}(Q_{n})=\rho^{\rm o}(Q_{n})\).
## Acknowledgments
This work was performed within the bilateral grant "Domination in graphs, digraphs and their products" (BI-US/22-24-038). B.B. and S.K. were supported by the Slovenian Research Agency (ARRS) under the grants P1-0297, J1-2452, N1-0285, and J1-3002.
|
2309.04977 | RGAT: A Deeper Look into Syntactic Dependency Information for
Coreference Resolution | Although syntactic information is beneficial for many NLP tasks, combining it
with contextual information between words to solve the coreference resolution
problem needs to be further explored. In this paper, we propose an end-to-end
parser that combines pre-trained BERT with a Syntactic Relation Graph Attention
Network (RGAT) to take a deeper look into the role of syntactic dependency
information for the coreference resolution task. In particular, the RGAT model
is first proposed, then used to understand the syntactic dependency graph and
learn better task-specific syntactic embeddings. An integrated architecture
incorporating BERT embeddings and syntactic embeddings is constructed to
generate blending representations for the downstream task. Our experiments on a
public Gendered Ambiguous Pronouns (GAP) dataset show that with the supervision
learning of the syntactic dependency graph and without fine-tuning the entire
BERT, we increased the F1-score of the previous best model (RGCN-with-BERT)
from 80.3% to 82.5%, compared to the F1-score by single BERT embeddings from
78.5% to 82.5%. Experimental results on another public dataset - OntoNotes 5.0
demonstrate that the performance of the model is also improved by incorporating
syntactic dependency information learned from RGAT. | Yuan Meng, Xuhao Pan, Jun Chang, Yue Wang | 2023-09-10T09:46:38Z | http://arxiv.org/abs/2309.04977v1 | # RGAT: A Deeper Look into Syntactic Dependency Information for Coreference Resolution
###### Abstract
Although syntactic information is beneficial for many NLP tasks, combining it with contextual information between words to solve the coreference resolution problem needs to be further explored. In this paper, we propose an end-to-end parser that combines pre-trained BERT [1] with a Syntactic Relation Graph Attention Network (RGAT) to take a deeper look into the role of syntactic dependency information for the coreference resolution task. In particular, the RGAT model is first proposed, then used to understand the syntactic dependency graph and learn better task-specific syntactic embeddings. An integrated architecture incorporating BERT embeddings and syntactic embeddings is constructed to generate blending representations for the downstream task. Our experiments on a public Gendered Ambiguous Pronouns (GAP) dataset show that with the supervision learning of the syntactic dependency graph and without fine-tuning the entire BERT, we increased the F1-score of the previous best model (RGCN-with-BERT) [2] from 80.3% to 82.5%, compared to the F1-score by single BERT embeddings from 78.5% to 82.5%. Experimental results on another public dataset - OntoNotes 5.0 demonstrate that the performance of the model is also improved by incorporating syntactic dependency information learned from RGAT.
syntactic dependency information, syntactic embeddings, coreference resolution, BERT, blending embeddings
## I Introduction
Coreference resolution is the task of finding all linguistic expressions that refer to the same entity in natural language. Ambiguous pronoun resolution, which attempts to resolve gendered ambiguous pronouns in English such as 'he' and 'she', is a longstanding challenge in coreference resolution [2]. A Kaggle competition based on the task of gendered ambiguous pronouns (GAP) resolution was conducted in 2019 [3]. The effective use of Bidirectional Encoder Representations from Transformers or BERT [1] in this competition has shown significant improvement over traditional approaches. Unlike the traditional unidirectional language model, BERT is designed to pre-train deep bidirectional representations using a new masked language model (MLM), which enables the generation of deep bidirectional contextual embeddings.
At present, there are two BERT-based approaches for applying these contextual embeddings to ambiguous pronoun resolution tasks: the feature-based approach using BERT and the fine-tuning BERT approach. The feature-based approach using BERT treats contextual representations derived from BERT as extra input features, which are combined in a task-specific model architecture, without fine-tuning any parameters of BERT, to obtain the coreference resolution for the target pronoun. For example, a model architecture combining BERT and SVM proposed in [4] obtains the correct mention for the target pronoun by applying the contextual embeddings from BERT to an SVM classifier. As for the fine-tuning BERT approach, it uses BERT to model the downstream gendered pronoun reference task by plugging the task-specific inputs and outputs into BERT and fine-tuning all the parameters end-to-end. Compared to the feature-based approach using BERT, the fine-tuning BERT approach obtains more impressive performance when computational cost is not a concern, as with a single fine-tuned BERT [5] or ensemble learning from multiple fine-tuned base BERT models [6].
However, fine-tuning the entire BERT model for a specific task is very computationally expensive and time-consuming because all parameters are jointly fine-tuned on the downstream task and need to be saved in a separate copy. For this reason, there are two strategies for improving the BERT-based approach to the gendered pronoun reference task. One strategy focuses on the output representation of each layer in BERT by altering the BERT structure slightly at each layer and adding a few extra parameters to change the output of each layer. Compared to fine-tuning all the parameters of BERT, this strategy can obtain a better result with less computation time; examples include Adapter [7] and LoRA [8]. Another strategy is to explore blending embeddings that are better than BERT alone on the coreference task, with the help of syntactic parsing information. Syntactic parsing information is a strong tool in many NLP tasks, such as entity extraction or relation extraction. It has also been verified that blending embeddings built from BERT representations and syntactic embeddings outperform the original BERT contextual representations in the gendered pronoun reference task [2]. Since the strategy of exploring blending embeddings has the computational advantage of running many experiments with cheaper models on a pre-computed representation of BERT, it is worthwhile for us to explore again the value of blending embeddings incorporating syntactic dependency information in ambiguous pronoun resolution tasks.
Recently, Cen et al. [9] have proposed the GATNE model,
which is a large-scale heterogeneous graph representation learning model that effectively aggregates neighbors of different edge types to the current node through an attention mechanism. As far as we know, no study has attempted to use GATNE or its variants to digest the heterogeneous graph structures of the syntactic dependency graph.
Inspired by the GATNE model, we propose our Syntactic Relation Graph Attention Network, namely RGAT, adapted to generate heterogeneous syntactic embeddings for each sample. Based on that, we propose an end-to-end solution by combining pre-trained BERT with RGAT. Experiment results on the public GAP (Gendered Ambiguous Pronouns) dataset released by Google AI demonstrate that the blending embeddings which combine BERT representations and syntactic dependency graph representations outperform the original BERT-only embeddings on the pronoun resolution task, significantly improving the baseline F1-score from 78.5% to 82.5% without fine-tuning BERT or requiring expensive computing resources. Further, to verify the effectiveness of our RGAT model for digesting syntactic dependency information in coreference resolution tasks, we also conduct another coreference resolution experiment on the public OntoNotes 5.0 dataset. The results demonstrate that after the syntactic embeddings learned with our RGAT model are incorporated into the benchmark model, the F1-score improves from 76.9% to 77.7%. All our experiment code is available at [https://github.com/qingtian5/RGAT_with_BERT](https://github.com/qingtian5/RGAT_with_BERT). Our main contributions are shown below:
* Our work is the first in-depth attempt to apply heterogeneous graph representation learning with an attention mechanism to the syntactic dependency graph for the pronoun resolution task. The syntactic embeddings derived from our RGAT model successfully boost the performance of BERT-only embeddings. This provides a new way to further digest syntactic dependency information for coreference resolution tasks.
* Our work is the first to use a graph attention mechanism to learn embeddings of small syntactic dependency graphs without expensive computation to solve the coreference resolution task. The supplementary experiments on the public GAP and OntoNotes 5.0 datasets show that our adjusted RGAT model generalizes well across coreference resolution tasks.
* Our work is the first to substantially boost the performance of the ambiguous pronoun resolution task with the help of syntactic dependency information. Most previous research considers the effect of syntactic embeddings to be weak, but our work significantly improves the F1-score of the BERT + fc baseline from 78.5% to 82.5% on the GAP dataset.
## II Preliminary Work
### _BERT-Based Embeddings_
BERT makes use of the Transformer, an attention-based architecture that learns contextual relations between words in a text. During pre-training, BERT uses two training strategies: Masked LM (MLM) and Next Sentence Prediction (NSP). Given such a pre-trained model, what we need to decide is how to apply BERT to our samples to obtain the embedded representations.
For our ambiguous pronoun resolution task, each sample is treated as one long sentence, and a [CLS] token is added before it. Through a pre-trained BERT model, the embedded representation of each token in the sentence is obtained. Since our goal is to obtain the relations between pronouns and nouns, we only need to extract the embedded representations of the tokens related to the pronoun (P) and the nouns (A, B), concatenate them, and finally obtain the specific reference of the pronoun (P) through a fully connected layer, as shown in Fig. 1.
Take the sample sentence "Bill(A) said Alice(B) would arrive soon, and she(P) did": our task is to find out whether "she(P)" refers to "Bill(A)" or "Alice(B)". Following the information flow in Fig. 1, we first break the whole sentence into tokens and feed them to the BERT model, which generates an embedding for each token. In our coreference resolution task there are only three possible outcomes: (1) P refers to A; (2) P refers to B; (3) P refers to neither A nor B. We therefore treat the task as a three-way classification problem: we extract the embeddings of P, A and B from the BERT outputs, concatenate them, and pass the result through a fully connected layer.
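For concreteness, the following is a minimal PyTorch sketch of this feature-based pipeline (illustrative only, not our released code); the token positions of A, B and P are assumed to be known here, and in practice would be obtained by aligning the annotated spans with the tokenizer output.

```python
import torch
import torch.nn as nn
from transformers import BertTokenizerFast, BertModel

# Frozen BERT as a feature extractor (feature-based approach, no fine-tuning).
tokenizer = BertTokenizerFast.from_pretrained("bert-large-uncased")
bert = BertModel.from_pretrained("bert-large-uncased")
bert.eval()

text = "Bill said Alice would arrive soon, and she did"
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**enc).last_hidden_state.squeeze(0)   # (seq_len, 1024)

# Token positions of A ("Bill"), B ("Alice"), P ("she") -- assumed known in this sketch.
a_idx, b_idx, p_idx = 1, 3, 9

features = torch.cat([hidden[a_idx], hidden[b_idx], hidden[p_idx]])   # (3 * 1024,)
classifier = nn.Linear(3 * 1024, 3)   # classes: P->A, P->B, neither
logits = classifier(features)
```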
### _Syntactic Dependency Information Learning_
Although syntactic parsing information is beneficial to pronoun coreference resolution, how to extract syntactic embeddings and incorporate them with BERT embeddings for the coreference task is difficult. A common way of digesting syntactic parsing information is to utilize the syntactic dependency relations between words in a text, which can easily be represented as nodes and edges in a graph structure. A graph-based model can then be used to learn the syntactic dependency information for the subsequent task. For the coreference resolution task, each sentence is parsed into a syntactic dependency graph which contains three types of edges, so the traditional Graph Convolutional Network (GCN) cannot handle this multi-relation graph.
Fig. 1: BERT-Based Embedding for our coreference resolution task.
Xu [2] innovatively incorporated syntactic embeddings, digested with a Gated Relational Graph Convolutional Network (Gated RGCN [10]), with BERT embeddings for the pronoun coreference task. Specifically, RGCN is used to aggregate three heterogeneous graph structures between the head word and the dependent word to obtain word-level syntactic embeddings [2]. The idea behind RGCN is that information should be treated differently for different edge types, denoted as follows:
\[h_{i}^{(l+1)}=\mathrm{ReLU}\left(\sum_{r\in R}\sum_{u\in N_{r}(v_{i})}\frac{1} {c_{i,r}}W_{r}^{(l)}h_{u}^{(l)}\right) \tag{1}\]
where \(N_{r}\left(v_{i}\right)\) and \(W_{r}^{(l)}\) denote the set of neighbors of node \(v_{i}\) and the weight matrix under relation \(r\in R\), respectively. Although RGCN handles multiple edge types, it does not consider edge features: it implicitly assumes that the only feature of an edge is its type.
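For reference, a compact PyTorch sketch of this propagation rule is given below; the edge-list format and the per-relation degree normalization are our own simplifications.

```python
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    """One propagation step of Eq. (1): a separate weight matrix per relation type."""
    def __init__(self, dim, num_relations=3):
        super().__init__()
        self.weights = nn.ModuleList(nn.Linear(dim, dim, bias=False) for _ in range(num_relations))

    def forward(self, h, edges_by_relation):
        # h: (num_nodes, dim); edges_by_relation[r] = (src, dst) LongTensors for relation r.
        out = torch.zeros_like(h)
        for r, (src, dst) in enumerate(edges_by_relation):
            msg = self.weights[r](h[src])                                 # W_r^(l) h_u^(l) per edge
            deg = torch.zeros(h.size(0)).index_add_(0, dst, torch.ones(len(dst))).clamp(min=1)
            out = out.index_add(0, dst, msg / deg[dst].unsqueeze(1))      # 1/c_{i,r} normalization
        return torch.relu(out)
```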
In contrast to using pre-trained BERT embeddings and fully-connected layers for prediction, the series connection of pre-trained BERT with RGCN from Xu [2] increases the F1-score by 1.8%. However, RGCN does not perform well at digesting the weight information across the multi-edge graph structures of the syntactic dependency graph, and Xu's result is still far less accurate than fine-tuning the entire BERT. The main problem lies in two aspects. On the one hand, the RGCN model generates a separate linear layer for each edge type, so the number of model parameters grows linearly with the number of edge types. On the other hand, some edge types in the syntactic dependency graph occur only rarely, so the corresponding linear layer is updated on only a few nodes, resulting in overfitting to a small number of nodes.
Inspired by RGCN with BERT and the development of graph neural networks, we believe that the contribution of syntactic parsing information to pronoun resolution can be further improved. The first reason is that syntactic information has always played a very important role in hand-crafted features, especially in most heuristics-based methods for the pronoun resolution task [11][12][13]. The second reason is that many new graph-learning-based models incorporating syntactic information achieve improved results in entity extraction [14] and semantic role labelling [15]. To address the problems of RGCN, we illustrate how to learn the syntactic dependency graph with our proposed RGAT model in the next section, and in Section IV we propose to use L2 regularization on the RGAT parameters to alleviate the overfitting problem.
## III Method
### _Syntactic Dependency Graph_
Since a dependency parse describes the syntactic relations that hold among words, many existing studies transform the dependency parse tree into a syntactic dependency graph to capture syntactic features [2]. It is commonly assumed that there are three kinds of information flows in the syntactic dependency graph: from heads to dependents, from dependents to heads, and self-loops, as shown in Fig. 2.
For each node in the syntactic dependency graph in Fig. 2, it is linked with three different types of edges, corresponding to three different types of syntactic relations. Since we are focused on embedding learning in the syntactic dependency graph, it is important to be able to draw on strengths from such different relations and learn uniform syntactic embeddings.
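To make the graph construction concrete, the snippet below sketches how the three edge types of Fig. 2 can be extracted with a dependency parser (we use spaCy in Section IV); the model name and the integer encoding of edge types are illustrative choices.

```python
import spacy

# Requires a downloaded spaCy model, e.g. `python -m spacy download en_core_web_sm`.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Bill said Alice would arrive soon, and she did")

HEAD_TO_DEP, DEP_TO_HEAD, SELF_LOOP = 0, 1, 2
edges_by_relation = {HEAD_TO_DEP: [], DEP_TO_HEAD: [], SELF_LOOP: []}

for token in doc:
    edges_by_relation[SELF_LOOP].append((token.i, token.i))
    if token.head.i != token.i:                          # the root is its own head; skip it
        edges_by_relation[HEAD_TO_DEP].append((token.head.i, token.i))
        edges_by_relation[DEP_TO_HEAD].append((token.i, token.head.i))
```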
### _RGAT Model_
The core idea of GATNE-T (referred to simply as GATNE in this paper) is to aggregate neighbors of different edge types to the current node and then generate a different vector representation of the node for each edge type. Inspired by the GATNE algorithm proposed by Cen [9], we adjust GATNE and propose the RGAT model, applied to the syntactic dependency graph with multiple edge types to learn uniform embeddings.
Overall, RGAT modifies GATNE in three aspects. First, GATNE learns embedded representations of nodes and edges on one large graph, whereas our RGAT model needs to adapt to the different syntactic graph structures generated by different samples; a new attention architecture is proposed to solve this problem. Second, the initial embedding representations of GATNE are randomly generated, but since our goal is ambiguous pronoun coreference resolution, it is natural to use BERT embeddings to initialize the node embeddings. Third, in order to take advantage of all the information, we add a shortcut module so that the initial node embeddings can also be concatenated into the final output embeddings.
Specifically, the RGAT model splits the overall embedding of a node \(v\) in the syntactic dependency graph into two parts: a base embedding and three edge embeddings. The base embedding of node \(v\) is shared across its three different edge types, and we use the BERT embedding as the base embedding. We follow the aggregation steps below to obtain the final syntactic embedding of each node.
Fig. 2: Information Flows in Syntactic Dependency Graph.
First, the representation of each node is compressed to obtain a more compact representation denoted as \(u_{i}^{\text{base}}\), which is used as the base embedding.
\[u_{i}^{\text{base}}\ =W_{r0}u_{i}^{\text{out}} \tag{2}\]
where \(W_{r0}\in R^{1024*256}\) is learnable and \(u_{i}^{\text{out}}\) is the BERT representation of node \(v_{i}\). In our work, the BERT representation of each node is a 1024-dimensional vector. Consistent with previous work [2], the compressed node representation dimension is set to 256, so \(u_{i}^{\text{base}}\) is a 256-dimensional vector.
Second, following GraphSage [16], we obtain each type of edge embedding for node \(v\) by aggregating from neighbors' edge embeddings. We randomly sample \(n\) neighbor nodes for each edge embedding \(u_{j,r}^{\text{base}}\), and then aggregate them. The aggregator function can be a sum, mean or max-pooling aggregator as follows:
\[U_{i,r}=W_{r1}\text{aggregator}\left(\left\{u_{j,r}^{\text{base}},\forall u_{j} \in N_{i,r},j=0,1,2,\dots,n\right\}\right) \tag{3}\]
where \(W_{r1}\in R^{d*m}\) is a learnable parameter, \(d\) is 256, and \(m\) is a given hyperparameter. In order to make the attention calculation more convenient, we compress the aggregated representation again.
Third, applying the attention mechanism to get the weight of each aggregated edge representation for each node as follows:
\[a_{i,r}=\operatorname{softmax}\left(w_{r}^{T}\tanh\left(W_{r2}U_{i,r}\right) \right)^{T} \tag{4}\]
where \(w_{r}\in R^{n}\) and \(W_{r2}\in R^{m*n}\) are learnable parameters, and \(n\) is a given hyperparameter.
Fourth, combining each weighted aggregated representation with the base embedding, the final representation of each node in edge type \(r\) can be represented as \(v_{i,r}\), which is denoted as follows:
\[v_{i,r}=u_{i}^{\text{base}}+a_{i,r}M_{r}U_{i,r} \tag{5}\]
where \(M_{r}\in R^{d*m}\) is a learnable parameter, \(u_{i}^{\text{base}}\) is the base embedding, and \(v_{i,r}\) is a vector with 256 dimensions.
Finally, the syntactic embedding representation of each node is aggregated from three kinds of node representation \(v_{i,0}\),\(v_{i,1}\),\(v_{i,2}\), denoted as follows:
\[v_{i}=\text{aggregator}\left(v_{i,0},v_{i,1},v_{i,2}\right) \tag{6}\]
where the aggregator function can be a sum, mean or concatenate operation. The influence of different aggregators will be mentioned in detail in the ablation experiment in Section IV.
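For concreteness, a compact PyTorch sketch of the aggregation in Eqs. (2)-(6) is given below. Where the text leaves details open, we make explicit assumptions: the attention in Eq. (4) is read GATNE-style as one weight per edge type, the parameters are kept separate per edge type, and the concatenation aggregator is used for Eq. (6); all names and default dimensions are illustrative.

```python
import torch
import torch.nn as nn

class RGAT(nn.Module):
    """Sketch of the RGAT aggregation, Eqs. (2)-(6)."""
    def __init__(self, bert_dim=1024, d=256, m=10, n=20, num_rel=3):
        super().__init__()
        self.W_r0 = nn.Linear(bert_dim, d, bias=False)                                   # Eq. (2)
        self.W_r1 = nn.ModuleList(nn.Linear(d, m, bias=False) for _ in range(num_rel))   # Eq. (3)
        self.W_r2 = nn.ModuleList(nn.Linear(m, n, bias=False) for _ in range(num_rel))   # Eq. (4)
        self.w_r = nn.ParameterList(nn.Parameter(torch.randn(n)) for _ in range(num_rel))
        self.M_r = nn.ModuleList(nn.Linear(m, d, bias=False) for _ in range(num_rel))    # Eq. (5)
        self.num_rel = num_rel

    def forward(self, bert_emb, neighbors):
        # bert_emb: (num_nodes, 1024); neighbors[r][i]: sampled neighbor indices of node i, relation r.
        base = self.W_r0(bert_emb)                                   # u_i^base, shared across edge types
        v_out = []
        for i in range(bert_emb.size(0)):
            # Aggregate (sum) the sampled neighbors' base embeddings per edge type, then compress.
            U = [self.W_r1[r](base[neighbors[r][i]].sum(0)) for r in range(self.num_rel)]
            scores = torch.stack([self.w_r[r] @ torch.tanh(self.W_r2[r](U[r]))
                                  for r in range(self.num_rel)])
            a = torch.softmax(scores, dim=0)                         # Eq. (4): weight per edge type
            v_r = [base[i] + a[r] * self.M_r[r](U[r]) for r in range(self.num_rel)]   # Eq. (5)
            v_out.append(torch.cat(v_r))                             # Eq. (6), concatenation aggregator
        return torch.stack(v_out)
```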
### _Syntactic Embeddings_
The previous RGCN model [2] only uses information from neighbor nodes, yet it already brings a significant improvement on coreference resolution tasks. We therefore believe that learning the syntactic structure with RGAT can yield even larger gains, mainly because the attention mechanism extracts more valuable information from the different edge types.
Fig. 3 explains in detail how to extract Syntactic Embeddings from information flows in syntactic dependency graph and BERT Embeddings.
On the one hand, we use BERT embeddings as the base embeddings. On the other hand, we use the three different types of syntactic relations to construct the syntactic relation graph with attention information and learn the RGAT embeddings. In order to retain the information from the BERT embeddings, we concatenate the attention-weighted relation-graph embeddings with the BERT embeddings. Then, we concatenate the embeddings from the three different kinds of edges (different colors represent embeddings from different edge types in Fig. 3). Finally, the embeddings of the significant words (the three words A, B and P, as in Section II, Fig. 1) are concatenated to form the syntactic embedding.
### _Connect BERT Embeddings and Syntactic Embeddings in Series_
We blend the syntactic embedding derived from the syntactic dependency graph with the pretrained BERT embeddings by connecting BERT embedding and syntactic embedding in series. This integrated architecture can help us learn better-performing embeddings when dealing with the task of pronoun resolution.
We first use the pre-trained BERT to extract contextual information between words and then connect it with the syntactic information from RGAT, forming a "look again" mechanism that yields blending representations more beneficial to the current task. The specific architecture is shown in Fig. 4.
As shown in Fig. 4, the pre-trained BERT obtains the hidden feature representation, then RGAT looks at the syntactic information of the sentence again. Relying on the syntactic information derived from RGAT, we can obtain the hidden state of pronoun-related words (denoted as h1(A), h4(B), h6(P)) in the sentence. There is also a fully connected layer in parallel with the output of RGAT, which is used to get a more compact embedding representation for each pronoun-related word.
Fig. 3: Embedding Structure of Syntactic Dependency Graph.
Finally, the output representations from RGAT are concatenated with the compact embedding representation of each pronoun-related word. The reason for concatenation is mainly that the syntactic dependency graph uses a special form of Laplace smoothing during its construction [17], which may contain vertex-related features. Some original feature embeddings can be preserved by concatenation, and ultimately a fully connected layer is used for prediction.
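The blending head of Fig. 4 can be sketched as follows; the dimensions, dropout rate, and hidden size are illustrative, and the RGAT output size corresponds to the concatenation aggregator of Eq. (6).

```python
import torch
import torch.nn as nn

class BlendingHead(nn.Module):
    """Sketch of the series connection in Fig. 4: RGAT outputs blended with a compact BERT projection."""
    def __init__(self, bert_dim=1024, rgat_dim=3 * 256, compact_dim=256):
        super().__init__()
        self.compact = nn.Linear(bert_dim, compact_dim)       # fully connected branch parallel to RGAT
        self.classifier = nn.Sequential(
            nn.Linear(3 * (rgat_dim + compact_dim), 512),
            nn.BatchNorm1d(512), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(512, 3),                                 # P->A, P->B, neither
        )

    def forward(self, bert_abp, rgat_abp):
        # bert_abp: (batch, 3, 1024) BERT embeddings of A, B, P; rgat_abp: (batch, 3, rgat_dim).
        blended = torch.cat([rgat_abp, self.compact(bert_abp)], dim=-1)
        return self.classifier(blended.flatten(1))
```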
## IV Experiments
### _Experimental Setup_
**GAP Dataset.** The first ACL workshop on Gender Bias in Natural Language Processing (2019) included a coreference task addressing Gendered Ambiguous Pronouns (GAP). The task is based on the coreference challenge defined by Webster [3][18] and aims to benchmark the ability to resolve pronouns in real-world contexts in a gender-equitable way. 263 teams competed through the Kaggle competition, and the winning system achieved a score of 0.13667 while running close to gender parity. We reviewed the approaches and documentation of the top eleven systems, noting that they all make effective use of BERT [1], through fine-tuning, feature extraction, or ensembles. In order to compare with the baseline results of previous work on the GAP task [2][3][18], we directly use the Gendered Ambiguous Pronouns (GAP) dataset, which contains 8908 labelled sample pairs from Wikipedia. Consistent with previous work, 4908 samples are used as training data and 4000 samples are used as test data.
**Evaluation metrics.** The task is a multi-classification problem, and we use micro F1-score as the evaluation metric, which is to calculate the precision and recall of all classes together, and then calculate the micro F1-score according to the following formula.
\[F1=2\times\frac{\text{precision }\times\text{recall}}{\text{precision }+\text{ recall}} \tag{7}\]
\[\text{precision}_{\text{micro}}=\frac{TP_{1}+TP_{2}+TP_{3}}{TP_{1}+FP_{1}+TP_ {2}+FP_{2}+TP_{3}+FP_{3}} \tag{8}\]
\[\text{recall}_{\text{micro}}=\frac{TP_{1}+TP_{2}+TP_{3}}{TP_{1}+FN_{1}+TP_{2} +FN_{2}+TP_{3}+FN_{3}} \tag{9}\]
\[\text{micro }F1-\text{score}=2\times\frac{\text{precision}_{\text{micro}} \times\text{recall}_{\text{micro}}}{\text{precision}_{\text{micro}}+\text{ recall}_{\text{micro}}} \tag{10}\]
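For reference, the metric can be computed with scikit-learn, which pools the per-class counts exactly as in Eqs. (8)-(10); the labels below are purely illustrative.

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 1, 0, 2]      # 0: P->A, 1: P->B, 2: neither
y_pred = [0, 1, 1, 1, 0, 2]
print(f1_score(y_true, y_pred, average="micro"))
```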
**Implementation Detail.** We use the SpaCy module as the syntactic dependency analyzer. For each sample, we build a syntactic dependency graph, then extract and save the needed information. Due to memory constraints, we do not feed the entire syntactic dependency graph into the model for training; instead, we first extract the required features from the graph, combine them into a batch, and finally feed them into the model. We use Adam [19] as the optimizer and adopt warm-up for the learning rate. In particular, the L2 regularization of the RGAT layer weights is added to the loss function, and the fully connected layer uses batch normalization and dropout. In addition, for the number of randomly sampled neighbor nodes in the second step of the RGAT model, we found that each node has at most four different neighbors; to balance algorithm efficiency and the diversity of neighbor computations, we set the number of random samples to four. As for the aggregation method in the second step of GraphSage [16], we found that summation, mean, and max pooling have essentially no impact on model performance, so we simply use summation. We use the "BERT-Large-Uncased" model to generate the BERT embedding representations we need. It should be noted that BERT is not fine-tuned in our model: all BERT parameters are fixed during training. The advantage is that we do not need to save a separate copy of the BERT model parameters for the GAP dataset.
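A minimal sketch of this training setup is shown below, assuming a `model` whose RGAT submodule parameters are prefixed with "rgat" and a standard `criterion`/`loader`; the warm-up length and the L2 coefficient are illustrative.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
warmup_steps = 500
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: min(1.0, (step + 1) / warmup_steps))   # linear learning-rate warm-up

for step, (inputs, labels) in enumerate(loader):
    logits = model(inputs)
    # L2 penalty applied only to the RGAT layer weights (assumed to be named "rgat...").
    l2 = sum(p.pow(2).sum() for name, p in model.named_parameters() if name.startswith("rgat"))
    loss = criterion(logits, labels) + 1e-4 * l2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```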
In order to better improve the generalization of the model, we used the 5-fold cross validation method to split the training set into five equal parts. Each experiment takes one part for verification and the rest is used for training. Each time, the model parameters with the best performance on the validation set are applied to the test set to get the prediction result. A total of five prediction results are obtained, and the average value is taken as the final prediction result.
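The protocol can be sketched as follows, with `train_model` and `predict` as placeholders for our training and inference routines:

```python
import numpy as np
from sklearn.model_selection import KFold

# Train on four folds, select the best checkpoint on the held-out fold,
# then average the five sets of test predictions.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
test_probs = []
for train_idx, val_idx in kf.split(train_samples):
    model = train_model([train_samples[i] for i in train_idx],
                        [train_samples[i] for i in val_idx])    # best checkpoint on the val fold
    test_probs.append(predict(model, test_samples))             # (num_test, 3) class probabilities
final_prediction = np.mean(test_probs, axis=0).argmax(axis=1)
```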
### _Ablation Studies_
Fig. 4: The Blending Structure of BERT and Syntactic Embedding.
In order to be consistent with the baseline work, we use the same hyperparameter configuration as RGCN-with-BERT [2]. However, in our proposed RGAT model, it is also necessary to determine the dimension of the edge embeddings. To this end, we conduct experiments with different parameters; the results are shown in TABLE I.
Through comparison, the dimension parameters (m, n) are set to (10, 20). For the output features of the three edge types, we compare the F1-scores of different aggregation methods such as averaging, summing, and concatenation. Since concatenation increases the number of parameters, we adjust the feature dimension so that the parameter counts of the three aggregation methods are comparable. In the end, the mean and sum aggregation methods obtain similar results, while the concatenation aggregation method obtains the best result.
### _Comparison with Other Methods_
The paper proposing the GAP dataset [3][18] introduced several baseline methods: (1) existing parsers including a rule-based system by Lee [20], as well as three neural parsers from Clark and Manning (2015) [21], Wiseman et al. (2016) [22] and Lee et al. (2017) [23]. (2) baselines based on traditional coreference cues; (3) baselines based on structural cues: syntactic distance and parallelism; (4) baselines based on Wikipedia cues; (5) Transformer models [24]. Among them, RGCN-with-BERT [2] further improves the F1-score of some baseline methods, reaching 80.3%.
We select the best three from the baseline models and the RGCN-with-BERT (Xu et al., 2019) [2] model to compare with our model. At the same time, we also compare our work with the BERT in series with a fully connected layer (BERT-fc). Experimental results show that our work achieves a large improvement over baseline models, which is shown in TABLE II.
To further verify generalization, we also compare RGAT-with-BERT + c2f-coref against baseline models on the OntoNotes 5.0 dataset, as shown in TABLE III (P denotes precision, R recall, and F1 the F1-score).
TABLE III shows that RGAT-with-BERT + c2f-coref outperforms the BERT-large + c2f-coref model on English by 0.8% on the OntoNotes 5.0 dataset. The main evaluation metric is the average F1 of three metrics, \(MUC\), \(B^{3}\) and \(CEAF_{\varphi 4}\), on the test set. Given how hard gains on coreference resolution have been to come by, as evidenced by the baseline models in TABLE III, our model is still a considerable improvement. Note that compared with BERT we add only a relatively small number of parameters, yet obtain a clear gain on the coreference resolution task. Due to limited computing resources, we did not tune the hyperparameters further. In view of the experimental results, we believe that syntactic structure can indeed help the model to further understand the coreference resolution task.
## V Conclusion and Discussion
### _Conclusions_
The experiment results show that with the help of sentence syntactic dependency information, using the output representations of BERT pre-trained model, RGAT can further learn embedding representations that are more conducive to the task of pronoun resolution and improve the performance of this task.
"Gender Bias in Natural Language Processing (GeBNLP) 2019 Shared Task" is a competition to build a common reference parsing system on the GAP dataset. We employ a combination of BERT and our proposed graph neural network RGAT model to participate in this task. The RGAT model is used to digest the syntactic dependency graph, and further extract syntactic-related information from the output features by BERT, which helps us improve the accuracy of this task. Our model significantly improves the F1-score of the task from the previous best of 80.3% to 82.5% (an improvement of 2.2%). Compared with BERT plus a fully connected layer, the accuracy of fine-tuning the fully connected layer is only 78.5%, and our model has a 4.0% improvement. The results show that without fine-tuning the BERT model, the syntactic dependency graph can significantly improve the performance of the referencing problem.
For another classic dataset, OntoNotes 5.0, the comparison of RGAT-with-BERT + c2f-coref vs. the baseline BERT-large + c2f-coref indicates that syntactic structure might better encode longer contexts. These observations suggest that future research on pretraining methods may look at more effectively encoding document-level context, including syntactic structure. Modelling pronouns, especially in the context of dialogue, still offers considerable room for improvement. Although our advantage is not very large, partly because we are limited to the c2f-coref architecture, we believe syntactic structure can help achieve stronger results for document modelling if a more suitable architecture is designed. Lastly, reviewing training samples and model outputs, a considerable number of errors suggest that our model is still unable to resolve cases requiring mention paraphrasing, as in [27], especially considering that such a learning signal is rather sparse in the training set. However, some of these errors have been reduced, and we think our model has the potential to address this problem.
Fig. 5: RGAT-with-BERT + c2f-coref model for OntoNotes 5.0 dataset.
### _Discussion_
In fact, our work provides an alternative paradigm for similar coreference tasks, or for tasks that need to exploit syntactic information. It demonstrates that it is not necessary to fine-tune the entire BERT model and save a separate copy of its parameters for each task. Our paradigm simply replaces the classification head of the BERT model with a graph neural network over syntactic dependencies, and then trains only this new classification head to obtain better results.
Our work addresses, from a new perspective, a problem faced by current large pre-trained language models: the models are very large, and a new set of fine-tuned parameters must be saved for each downstream task. Methods such as LoRA [8], AdapterBias [30], and LLM-Adapters [31] reduce the number of parameters required for fine-tuning by modifying the model itself and the output of each layer, while models that incorporate traditional machine learning methods [4][13] do not achieve competitive results. By comparison, our work uses the output features of the pre-trained language model without changing the parameters of the model itself or its per-layer outputs. It shows that changing only the classification head can effectively reduce the number of parameters to fine-tune while greatly improving task accuracy.
### _Limitation and Future Direction_
Our models and experiments have shown that syntactic dependency information plays a significant role in coreference resolution tasks and that syntactic structure can improve the embedded representations of large language models. However, three problems remain for future work. (1) There is no evidence yet of the role of syntactic structure for other NLP tasks. (2) In this paper, supervised learning is used to optimize the embedded representations of BERT; exploring unsupervised representation learning that combines BERT with syntactic structure is a future direction. (3) During training we can cache features on disk, but at inference time the embedded features are not stored, so optimizing inference time remains an open problem.
## Acknowledgment
This research was funded by Shanghai Philosophy and Social Sciences Planning Project, grant number 2020BGL009.
|
2304.04753 | $\textit{e-Uber}$: A Crowdsourcing Platform for Electric Vehicle-based
Ride- and Energy-sharing | The sharing-economy-based business model has recently seen success in the
transportation and accommodation sectors with companies like Uber and Airbnb.
There is growing interest in applying this model to energy systems, with
modalities like peer-to-peer (P2P) Energy Trading, Electric Vehicles (EV)-based
Vehicle-to-Grid (V2G), Vehicle-to-Home (V2H), Vehicle-to-Vehicle (V2V), and
Battery Swapping Technology (BST). In this work, we exploit the increasing
diffusion of EVs to realize a crowdsourcing platform called e-Uber that jointly
enables ride-sharing and energy-sharing through V2G and BST. e-Uber exploits
spatial crowdsourcing, reinforcement learning, and reverse auction theory.
Specifically, the platform uses reinforcement learning to understand the
drivers' preferences towards different ride-sharing and energy-sharing tasks.
Based on these preferences, a personalized list is recommended to each driver
through CMAB-based Algorithm for task Recommendation System (CARS). Drivers bid
on their preferred tasks in their list in a reverse auction fashion. Then
e-Uber solves the task assignment optimization problem that minimizes cost and
guarantees V2G energy requirement. We prove that this problem is NP-hard and
introduce a bipartite matching-inspired heuristic, Bipartite Matching-based
Winner selection (BMW), that has polynomial time complexity. Results from
experiments using real data from NYC taxi trips and energy consumption show
that e-Uber performs close to the optimum and finds better solutions compared
to a state-of-the-art approach | Ashutosh Timilsina, Simone Silvestri | 2023-03-31T04:28:31Z | http://arxiv.org/abs/2304.04753v1 | # _e-Uber_: A Crowdsourcing Platform for Electric Vehicle-based Ride- and Energy-sharing
###### Abstract
The sharing-economy-based business model has recently seen success in the transportation and accommodation sectors with companies like Uber and Airbnb. There is growing interest in applying this model to energy systems, with modalities like peer-to-peer (P2P) energy trading [3, 4], and Electric Vehicle (EV)-based Vehicle-to-Grid (V2G), Vehicle-to-Home (V2H), Vehicle-to-Vehicle (V2V) [5], as well as Battery Swapping Technology (BST) [6]. In this work, we exploit the increasing diffusion of EVs to realize a crowdsourcing platform called _e-Uber_ that jointly enables ride-sharing and energy-sharing through V2G and BST. e-Uber exploits _spatial crowdsourcing_, reinforcement learning, and _reverse auction_ theory. Specifically, the platform uses reinforcement learning to understand the drivers' preferences towards different ride-sharing and energy-sharing tasks. Based on these preferences, a personalized list is recommended to each driver through _CMAB-based Algorithm for task Recommendation System (\(CARS\))_. Drivers bid on their preferred tasks in their list in a reverse auction fashion. Then e-Uber solves the task assignment optimization problem that minimizes cost and guarantees the V2G energy requirement. We prove that this problem is NP-hard and introduce a bipartite matching-inspired heuristic, _Bipartite Matching-based Winner selection_ (\(BMW\)), that has polynomial time complexity. Results from experiments using real data from NYC taxi trips and energy consumption show that e-Uber performs close to the optimum and finds better solutions compared to a state-of-the-art approach.
Online spatial crowdsourcing, V2G, energy-sharing, ride-sharing, personalized recommendation, combinatorial multi-armed bandit.
## I Introduction
With the recent advent of sharing-economy-based models and their successful application in accommodation-sharing (e.g. Airbnb, Vrbo) and ride-sharing (e.g. Uber, Lyft), researchers have focused on applying this concept to energy systems [1, 2]. Energy-sharing modalities such as peer-to-peer (P2P) energy trading [3, 4], Electric Vehicle (EV)-based Vehicle-to-Grid (V2G), Vehicle-to-Home (V2H), Vehicle-to-Vehicle (V2V) [5], as well as Battery Swapping Technology (BST) [6] have been proposed as sustainable and flexible approaches to balance the energy supply and demand for both the grid and end-users [5, 7]. In particular, the rapid rise in EV sales in recent years has created new opportunities for mobile and flexible energy storage and management, including ride-sharing and energy-sharing services using EVs [5]. However, no studies have so far realized a platform that jointly enables both ride-sharing and energy-sharing.
Crowdsourcing is an approach for recruiting workers from a "crowd" to execute tasks that has been successfully applied to several domains [8, 9]. We believe that a crowdsourcing platform has the potential to also be successfully applied to a combined ride-sharing and energy-sharing system, where _tasks_ are ride- and energy-sharing requests that can be performed by EV drivers, called _workers_. Tasks are requested by _task-requesters_, which include ride-sharing clients as well as private or public energy customers. Examples of such energy customers include a utility company and a microgrid community looking to achieve demand response by shifting energy demand to V2G services at different locations, especially during times of peak energy demand [10, 11, 12, 13].
In this work, we propose a novel crowdsourcing platform called _e-Uber_ that leverages the increasing diffusion of EVs to enable joint ride-sharing and energy-sharing services. A general overview of the platform is depicted in Fig. 1. With this platform, drivers equipped with EVs can not only transport passengers through ride-sharing but also sell excess energy stored in their batteries to the grid/houses during periods of high demand through V2G or battery swapping [14, 15, 16]. e-Uber has the potential to increase the earning potential for drivers and also to help balance the energy demand and supply
Fig. 1: e-Uber crowdsourcing platform overview
for the grid while simultaneously fulfilling the mobility and energy demands of consumers.
A few works on crowdsourcing have been proposed to facilitate the integration of energy-sharing services with EVs. Ai et al. [7] proposed a V2H-based omni-sharing modality in a microgrid community to crowdsource energy from EVs. Similarly, the authors in [17] propose an autonomous EV (AEV)-based energy crowdsourcing approach, allowing AEVs to participate in energy-sharing tasks for consumers placed in the cloudlet. However, these approaches consider neither the _workers' preferences_ nor their _limited ability_ to select tasks when overwhelmed with choices. A few spatial crowdsourcing works have attempted to solve the task assignment problem considering worker preferences [9, 18, 19, 20]. However, these approaches focus on general uniform tasks and do not consider ride-sharing combined with energy-sharing.
To the best of our knowledge, in this paper we propose the first crowdsourcing mechanism that jointly enables ride- and energy-sharing to provide a multifaceted solution to existing problems in the efficiency and sustainability of transportation, energy management, and cost-effective demand response using EVs. _e-Uber_ works in three decision stages: calculating a personalized task recommendation for each EV worker, collecting bids from workers, and selecting winning bids through a reverse auction. We propose a preference-aware optimal task recommendation system, \(POTR\), and a reinforcement learning mechanism to learn worker preferences. The reverse auction process is formalized for bidding, and the winning bids are determined through an optimization framework called _Winning Bid Selection_ (\(WiBS\)). A Reinforcement Learning (RL)-based algorithm, called \(CARS\), is proposed that solves the task recommendation problem and updates the worker preferences based on their interaction with the recommendation using the Combinatorial Multi-Armed Bandit framework [2]. After proving that the \(WiBS\) problem is NP-hard, we also propose a bipartite matching-based heuristic, \(BMW\), that finds a solution to \(WiBS\) in polynomial time.
The major contributions of the paper are as follows:
* We propose a spatial crowdsourcing platform, _e-Uber_, to jointly enable ride-sharing and energy-sharing using EVs;
* We develop an optimization framework, called _POTR_, based on reinforcement learning for personalized recommendation of tasks to workers.
* We also formalize the winning bid selection (_WiBS_) problem and prove that it is NP-hard;
* We propose an RL algorithm, called \(CARS\), that incorporates reinforcement learning for task recommendation to workers and updates the preferences according to their interactions with the recommendations;
* Given the complexity of the \(WiBS\) problem, we propose a Bipartite Matching-based Winner Selection algorithm, \(BMW\), and determine its polynomial time complexity;
* Through extensive experiments using real data, we show that _e-Uber_ can indeed lead to successful joint crowdsourcing of energy and ride-sharing services, completing more than 850 tasks compared to a state-of-the-art approach over a span of 24 hours;
## II Related works
Crowdsourcing services have received increasing attention in recent years because of their flexibility and convenience in facilitating the completion of tasks by a set of workers [9]. There is a plethora of research works focusing on different aspects of crowdsourcing, from optimal task allocation [5] to preference-aware decision-making [18] to privacy preservation [19, 21]. Others focus on designing an effective and informed incentive mechanism that motivates workers for their sustained engagement in the system [19]. Reverse auction mechanisms have been widely utilized for designing incentives, including bidding and winner selection, in crowdsourcing works [22, 20, 23, 24]. In [22], a secure reverse auction protocol is devised for task assignment in spatial crowdsourcing along with an approximation algorithm. Similarly, [23] proposes a truthful reverse auction mechanism for location-aware crowdsensing, while the authors in [20] focus on generalized second-price auctions for stable task assignment. The work in [24] also uses a truthful reverse auction mechanism to devise incentives for workers in urban parcel delivery.
In the context of electric vehicles (EVs), the work in [5] employs crowdsourcing to solve EV charging problems. A V2V energy-sharing framework has been proposed that crowdsources charging requests from EV owners and allocates the energy considering energy trading prices, EV parameters, and privacy. Other crowdsourcing works focus on different problems such as route optimization of EVs [25] and parcel delivery using EVs [26]. Closer to our problem setting, some works have explored the use of crowdsourcing to integrate energy-sharing services with EVs. For instance, the authors in [7] proposed a V2H-based omni-sharing modality system in a microgrid community, where energy is crowdsourced from EVs to reduce the overall cost of the community and decrease the need for energy storage. Another study [17] suggested an autonomous EV-based energy crowdsourcing approach, which enables EVs to participate in energy-sharing tasks for cloud-based energy consumers. However, this approach is challenging to implement and does not consider workers' preferences or the impact of sub-optimal decision-making.
In fact, most of these crowdsourcing works ignore user behavioral modeling in task assignment. The spatial crowdsourcing work in [18] tried to solve the task assignment problem by considering worker preferences, but this solution is better suited for group tasks and does not account for other aspects of user behavior, such as _bounded rationality_ [27] and irrational decision-making, that drastically affect system performance. Additionally, the existing works neglect the task recommendation problem and other realistic budget constraints, such as the energy budgets required by the utility or microgrid in any time period. Furthermore, these works are limited to homogeneous tasks such as energy-sharing or delivery services only, which can result
in significant idle hours for EVs during off-peak periods since such tasks have similar patterns.
In conclusion, while the existing literature on crowdsourcing mechanisms has contributed to task assignment, incentive design, privacy, and energy-sharing services, there is still room for improvement in terms of behavioral aspects such as preference-aware task recommendation and online learning of these preferences; task assignment with overall cost minimization and energy budgets; and heterogeneity in crowdsourcing tasks. Our proposed work focuses on addressing these limitations and developing a more comprehensive, effective, and realistic solution to jointly enabling ride- and energy-sharing services in a crowdsourcing setting using reverse auctions, reinforcement learning, and efficient matching algorithms.
## III System Model
We assume time to be divided into time slots. At each time slot \(t\), the set of tasks is referred to as \(\mathcal{S}_{t}\), which are crowdsourced to the workers. We refer to \(\mathcal{W}_{t}\) as the set of workers at time \(t\). Each task in \(\mathcal{S}_{t}\) is denoted by a tuple \(s_{j}\stackrel{\text{def}}{=}\langle z_{j},c_{s_{j}},d_{j}\rangle\) where \(z_{j}\) is the type of task (\(0-\)rideshare, \(1-\)battery swapping, and \(2-\)V2G), \(c_{s_{j}}\) is the start position and \(d_{j}\) is the destination of the task. For energy-sharing tasks, although spatial in nature, the start position \(c_{s_{j}}\) is the same as the destination \(d_{j}\). We assume the utility company submits energy tasks as a result of an _energy requirement_ \(\mathcal{E}\). This is a typical assumption for demand response solutions [10, 11, 12]. As a result, the total amount of energy provided by workers through V2G must be at least \(\mathcal{E}\). Each worker in \(\mathcal{W}_{t}\) is denoted by a tuple \(w_{i}\stackrel{\text{def}}{=}\langle c_{w_{i}},e_{i},r_{i},r_{i}^{min}\rangle\), where \(c_{w_{i}}\) is the current position of the EV worker \(w_{i}\), which can differ from the spatial task location \(c_{s_{j}}\), \(e_{i}\) is the energy per unit range (in \(kWh/km\)) indicating how much energy the EV consumes to drive a unit distance, \(r_{i}\) is the available range of the electric vehicle in \(km\) given by the remaining energy level in its battery, and \(r_{i}^{min}\) is the minimum range that must be preserved after completing the task to ensure sufficient energy for traveling to a charging location. The energy required to perform task \(s_{j}\) by worker \(w_{i}\) is denoted by \(l_{ij}\). e-Uber sends a list of tasks, called the _recommendation list_, to each worker. Workers then submit bids for these tasks. The bid \(b_{ij}\in\mathcal{B}\) represents the cost asked by worker \(w_{i}\) to perform task \(s_{j}\), where \(\mathcal{B}\) is the set of all bids submitted by workers.
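For illustration, the tuples above map naturally onto simple container types (the field names and type encoding are ours):

```python
from dataclasses import dataclass

RIDESHARE, BATTERY_SWAP, V2G = 0, 1, 2   # task types z_j

@dataclass
class Task:
    z: int             # task type z_j
    start: tuple       # c_{s_j}, e.g. (lat, lon)
    dest: tuple        # d_j; equals start for energy-sharing tasks

@dataclass
class Worker:
    pos: tuple         # current EV position c_{w_i}
    e: float           # energy per unit range (kWh/km)
    r: float           # available range (km)
    r_min: float       # minimum range to preserve after the task (km)
```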
Previous works in crowdsourcing and energy-sharing using EVs have generally assumed that workers have complete access to the list of available tasks and pick the best task for themselves or, conversely, that the crowdsourcing platform assigns tasks to workers regardless of their preferences. Both assumptions are undesirable. On the one hand, workers have limited time and ability to go over a potentially very long list of tasks [2]; on the other hand, workers may have different preferences over the tasks to complete. In this work, we recommend a limited list of relevant tasks to each worker based on their preferences. We model the preferences as follows. We denote by \(\alpha_{iz_{j}}\in[0,1]\) the probability that worker \(w_{i}\) bids for a task of type \(z_{j}\). These are called _bidding probabilities_. We assume that these probabilities are unknown and thus need to be learned over time by observing the workers' behavior.
## IV _e-Uber:_ Problem Formulation
Fig. 2 summarizes the steps involved in the e-Uber platform. e-Uber collects a list of tasks \(\mathcal{S}_{t}\) at time \(t\) as requested by task-requesters, which need to be crowdsourced to the EV-based workers in \(\mathcal{W}_{t}\) (step \(1\)). The platform sends a personalized list of tasks to the workers based on their preferences (step \(2\)), to which they respond by submitting bids to the platform (step \(3\)). Based on the received bids \(\mathcal{B}_{t}\) (step \(4\)), the platform uses a reverse-auction-based algorithm to determine the winning bids \(\mathbf{q}^{*}\) along with the final payments \(\mathbf{P}\) for the winners (step \(5\)). Finally, the worker preferences are updated based on their feedback for the next time step (step \(6\)). Given the nature of the considered tasks, worker-task assignment is performed one-to-one.
As described above, the system involves solving two different problems. One is to recommend a set of tasks which maximizes the likelihood of generating the maximum number of bids, and thus improving the overall system performance. Another problem is to select the winning bids for task assignment and determine the final payment to crowdsource the tasks to the workers. These two problems are discussed below.
Fig. 2: Working mechanism of e-Uber
### _Preference-aware Optimal Task Recommendation Problem_
Our objective is to recommend a limited subset of tasks to each worker which maximizes the likelihood of bidding for these tasks, while avoiding overwhelming workers with a list beyond their cognitive capabilities. We formalize this through the Preference-aware Optimal Task Recommendation (POTR) problem as follows. In short, the problem aims at maximizing the overall task bidding probabilities (hereafter referred to interchangeably as preferences) while limiting the size of the recommended list to \(K\) as well as ensuring that each task is recommended to at least \(\psi\) workers.
\[\text{maximize}\quad\sum_{w_{i}\in\mathcal{W}}\sum_{s_{j}\in\mathcal{S}}\alpha_{iz_{j}}x_{ij}\tag{1}\]
subject to
\[\sum_{s_{j}\in\mathcal{S}}x_{ij}\leq K,\quad\forall w_{i}\tag{1a}\]
\[\sum_{w_{i}\in\mathcal{W}}x_{ij}\geq\psi,\quad\forall s_{j}\tag{1b}\]
\[\sum_{s_{j}\in\mathcal{S}}g(z_{j})x_{ij}\geq\frac{|V2G|}{|\mathcal{S}|}K,\quad\forall w_{i}\tag{1c}\]
\[l_{ij}x_{ij}\leq(r_{i}-r_{i}^{min})e_{i},\quad\forall w_{i},s_{j}\tag{1d}\]
\[x_{ij}=0,\ \text{if}\ |c_{s_{j}}-c_{w_{i}}|>\lambda,\quad\forall w_{i},s_{j}\tag{1e}\]
\[x_{ij}\in\{0,1\},\quad\forall w_{i},s_{j}\tag{1f}\]
\[g(z_{j})=\begin{cases}1,&\text{if }z_{j}=2\\ 0,&\text{otherwise}\end{cases} \tag{2}\]
The objective function in Eq. (1) maximizes the sum of individual bidding probabilities over each worker's recommended tasks. The binary decision variable \(x_{ij}\in\{0,1\}\) is set to 1 if task \(s_{j}\) is included in the list of worker \(w_{i}\). Constraint (1a) limits the length of each recommendation list to at most \(K\). In constraint (1b), we ensure that each task is recommended to at least \(\psi=\left\lfloor\frac{|\mathcal{W}|K}{|\mathcal{S}|}\right\rfloor\) workers. In constraint (1c), we ensure that at least \(\frac{|V2G|\times K}{|\mathcal{S}|}\) V2G tasks are recommended to each worker. Constraint (1d) requires each recommended task to consume no more than a certain amount of energy for each EV, ensuring that the EV retains sufficient energy after performing the task to drive to a charging location, if required. Finally, constraint (1e) ensures that only tasks within distance \(\lambda\) of a worker are recommended.
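For reference, the program (1)-(1f) can be written down directly with an off-the-shelf ILP solver; the sketch below uses PuLP on a toy instance, and all numbers as well as the precomputed quantities (preferences `alpha`, task types `z`, energies `l`, distances `dist`, and the scalars `K`, `psi`, `lam`) are illustrative.

```python
import pulp

# Toy instance; in e-Uber these quantities come from the platform.
workers, tasks = [0, 1], [0, 1, 2]
z = {0: 0, 1: 2, 2: 2}                                                # task types (2 = V2G)
alpha = {0: {0: 0.8, 1: 0.3, 2: 0.5}, 1: {0: 0.2, 1: 0.7, 2: 0.6}}    # alpha[i][type]
l = {(i, j): 5.0 for i in workers for j in tasks}                     # energy to perform task (kWh)
dist = {(i, j): 1.0 for i in workers for j in tasks}                  # worker-task distance (km)
r, r_min, e = {0: 100, 1: 80}, {0: 10, 1: 10}, {0: 0.15, 1: 0.15}
K, psi, lam = 2, 1, 10.0
n_v2g, n_tasks = sum(t == 2 for t in z.values()), len(tasks)

prob = pulp.LpProblem("POTR", pulp.LpMaximize)
x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat=pulp.LpBinary) for i in workers for j in tasks}
prob += pulp.lpSum(alpha[i][z[j]] * x[i, j] for i in workers for j in tasks)          # Eq. (1)
for i in workers:
    prob += pulp.lpSum(x[i, j] for j in tasks) <= K                                   # (1a)
    prob += pulp.lpSum(x[i, j] for j in tasks if z[j] == 2) >= n_v2g * K // n_tasks   # (1c)
for j in tasks:
    prob += pulp.lpSum(x[i, j] for i in workers) >= psi                               # (1b)
for i in workers:
    for j in tasks:
        if l[i, j] > (r[i] - r_min[i]) * e[i] or dist[i, j] > lam:                    # (1d), (1e)
            prob += x[i, j] == 0
prob.solve(pulp.PULP_CBC_CMD(msg=False))
recommend = {i: [j for j in tasks if x[i, j].value() == 1] for i in workers}
```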
It is to be noted that information on the bidding probabilities is difficult to obtain _a priori_, as it is specific to each worker and includes elements of complex human psychology. Therefore, we assume that the preferences are initially unknown and are learned by observing the workers' behavior with respect to the assigned tasks. Recently, reinforcement learning mechanisms have been used extensively to learn optimal policies at run-time that gradually converge to optimal actions based on feedback from the environment. In Section V, we present a _Combinatorial Multi-Armed Bandit (CMAB)_-based approach [2] that learns the preferences of workers over time while simultaneously recommending an optimal personalized list of tasks to them.
### _Winning Bid Selection and Final Payment Problem_
After sending the personalized list of tasks to each worker, _e-Uber_ collects the bids. Given the collected bids, e-Uber selects the winning bids, i.e., the workers performing the tasks, by solving the Winning Bid Selection (\(WiBS\)) problem. This problem determines the best bids which minimize the total cost from the perspective of the task requesters. \(WiBS\) can then be formulated as a constrained assignment problem as follows:
\[\text{minimize}\quad\sum_{w_{i}\in\mathcal{W}}\sum_{s_{j}\in\mathcal{S}}b_{ij}q_{ij}\tag{3}\]
subject to
\[\sum_{s_{j}\in\mathcal{S}}q_{ij}\leq 1,\quad\forall w_{i}\tag{3a}\]
\[\sum_{w_{i}\in\mathcal{W}}q_{ij}=1,\quad\forall s_{j},z_{j}<2\tag{3b}\]
\[\sum_{w_{i}\in\mathcal{W}}q_{ij}\leq 1,\quad\forall s_{j},z_{j}=2\tag{3c}\]
\[\sum_{w_{i}\in\mathcal{W}}\sum_{s_{j}\in\mathcal{S}}g(z_{j})l_{ij}q_{ij}\geq\mathcal{E}\tag{3d}\]
\[q_{ij}\in\{0,1\},\quad\forall w_{i},s_{j}\tag{3e}\]
The objective function in Eq. (3) minimizes the total cost of performing the tasks given the collected bids. \(q_{ij}\) is the binary decision variable, defined in constraint (3e), that indicates whether bid \(b_{ij}\) wins the auction and therefore task \(s_{j}\) is assigned to worker \(w_{i}\). Constraint (3a) ensures that a worker is assigned at most one task, while (3b) requires each ride-sharing and battery swapping task (\(z_{j}<2\)) to be assigned to exactly one worker. Similarly, constraint (3c) ensures that a V2G task is assigned to at most one worker. Finally, constraint (3d) ensures that at least \(\mathcal{E}\) energy will be supplied through V2G services. Note that the function \(g(z_{j})=1\) if \(z_{j}=2\) (V2G task) and zero otherwise.
Following the selection of the winning bids by solving the \(WiBS\) problem in Eq. (3), the final payment for each winning worker \(w_{k}\) assigned task \(s_{j}\) is the second-to-the-selected bid received for that task. Since, under the second-price payment rule, the dominant strategy for all bidders is to bid truthfully [28], this ensures that rational workers submit truthful bids.
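A sketch of the winner selection and the payment rule is given below, again with PuLP on a toy instance; `bids` holds only the submitted bids, all numbers are illustrative, and the interpretation of the "second-to-the-selected" bid as the next bid above the selected one is our reading.

```python
import pulp

bids = {(0, 0): 12.0, (1, 0): 10.0, (0, 1): 6.0, (1, 1): 7.5, (1, 2): 9.0}   # (worker, task) -> cost
z = {0: 0, 1: 2, 2: 2}                        # task 0: rideshare, tasks 1-2: V2G
l = {key: 8.0 for key in bids}                # energy delivered if a V2G bid wins (kWh)
workers, tasks, E_req = [0, 1], [0, 1, 2], 8.0

prob = pulp.LpProblem("WiBS", pulp.LpMinimize)
q = {key: pulp.LpVariable(f"q_{key[0]}_{key[1]}", cat=pulp.LpBinary) for key in bids}
prob += pulp.lpSum(bids[key] * q[key] for key in bids)                                # Eq. (3)
for i in workers:
    prob += pulp.lpSum(q[key] for key in bids if key[0] == i) <= 1                    # (3a)
for j in tasks:
    on_j = [key for key in bids if key[1] == j]
    if z[j] < 2:
        prob += pulp.lpSum(q[key] for key in on_j) == 1                               # (3b)
    else:
        prob += pulp.lpSum(q[key] for key in on_j) <= 1                               # (3c)
prob += pulp.lpSum(l[key] * q[key] for key in bids if z[key[1]] == 2) >= E_req        # (3d)
prob.solve(pulp.PULP_CBC_CMD(msg=False))

# Second-price payment: each winner is paid the next bid above the one selected for their task.
for (i, j), var in q.items():
    if var.value() == 1:
        higher = sorted(b for (ii, jj), b in bids.items() if jj == j and ii != i and b >= bids[i, j])
        payment = higher[0] if higher else bids[i, j]
```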
**Theorem 1**.: \(WiBS\) _problem defined in Eq. (3) is NP-hard._
Proof.: We provide a reduction from NP-Hard 0-1 min Knapsack (0-1 min-KP) problem [29]. In this problem, a set \(n\) items is provided, each item \(a_{i}\) has a value \(l_{i}\) and weight \(b_{i}\). The goal is to select the subset of items that incurs minimum weight and has a value of at least \(\mathcal{E}\).
Given a generic instance of min-KP, we construct an instance of our problem as follows. We only consider V2G tasks (\(z_{j}=2\)). For each item \(a_{i}\) of min-KP we create a pair task-worker \((s_{a_{i}},w_{a_{i}})\). We assume that worker \(w_{a_{i}}\) only submits
one bid, and they bid for \(s_{a_{i}}\) for an amount \(b_{i}\) (the weight of \(a_{i}\) in min-KP). Additionally, the energy required by \(w_{a_{i}}\) to perform \(s_{a_{i}}\) is \(l_{i}\) (the value of \(a_{i}\) in min-KP). Finally, we set the energy requirement for V2G to \(\mathcal{E}\).
Under these assumptions, the decision variable \(q_{ij}\) of our original problem can be reduced to \(q_{i}\), since each worker bids for exactly one task and each task receives a bid from exactly one worker. Additionally, constraints (3a) and (3c) are trivially satisfied, since there is only one worker per task and one task per worker, while constraint (3b) does not apply since we only have V2G tasks.
Solving our reduced problem instance finds the set task-worker pairs that minimize the sum of bids and meets the energy requirement \(\mathcal{E}\). This corresponds (i.e., it can be translated in polynomial time) to the optimal solution of min-KP, i.e., the set of items with minimum weight that provide a value at least \(\mathcal{E}\). As a result, our problem is at least as difficult as min-KP, and thus it is NP-Hard.
## V e-Uber Solution Approaches
### _CMAB-based Task Recommendation System_
In order to solve the optimization problem in Eq. (1), it is necessary to have prior knowledge of the workers' preferences. These are generally _not known_ a priori in realistic settings. Therefore, it becomes necessary to learn these preferences at run-time, while simultaneously optimizing the task assignment. To this end, we propose a reinforcement learning approach inspired by the Combinatorial Multi-Armed Bandit (CMAB) framework [2, 30].
The Combinatorial Multi-Armed Bandit is a classic reinforcement learning problem in which an agent chooses a combination of different options (i.e., certain decision-making _actions_) and observes a combination of linear _rewards_ at each timestep. The long-term objective is to find a strategy that maximizes this reward by selecting optimal actions. This strategy, better defined as a _policy_, needs to be learned from how the agents choose to interact with the system. The learning is carried out through an _exploration vs. exploitation_ trade-off. Since, at the beginning, nothing is known about how an agent engages with the system, the system learns by letting the agent choose from a diverse set of options and observing the resulting interactions, referred to as _exploration_. As time passes, the system gathers information about the agent's behavior and increasingly uses that knowledge instead of sending out a diverse range of choices, called _exploitation_. By balancing exploration and exploitation over time, the system eventually gathers sufficient information on the agent's behavior and learns an optimal strategy. In our problem setting, the workers are the agents to whom an optimal set of tasks needs to be sent so as to accumulate good-quality bids. Specifically, the objective is to find the best possible task recommendations (actions) for each worker (agent) that result in higher cumulative preferences (reward).
Therefore, in this section, based on this CMAB framework, we design an algorithm called the _CMAB-based Algorithm for task Recommendation System (CARS)_. The pseudo code of _CARS_ is shown in Alg. 1. _CARS_ recommends personalized tasks to each worker based on the current estimate of the worker's preferences towards each task type. Recall from Section IV that a worker's preference is defined as the probability that the worker will submit a bid for a task of a given type. The algorithm then updates and learns these bidding probabilities based on the worker's engagement with the recommendation through bids: if the worker submits a bid, the recommendation is considered preferred; if the worker chooses to ignore it by not submitting a bid, the opposite holds. Based on this information, the preference of the worker towards each task type is updated.
Therefore, with \(\mathcal{F}\) as the overall solution space consisting of all feasible action matrices, the action matrix \(\mathbf{A}(t)\in\mathcal{F}\) corresponds to the optimal set of recommendation lists for timestep \(t\). It consists of action values \(x_{ij}\in\{0,1\}\), the same as the decision variable in the POTR problem. Recall that it represents whether task \(s_{j}\) is in the personalized recommendation list of worker \(w_{i}\) at timestep \(t\). Given this action matrix, the preference of worker \(w_{i}\) towards each task type \(z_{j}\) is modeled as a random variable \(\bar{\alpha}_{i_{z_{j}}}\) whose mean value \(\alpha_{i_{z_{j}}}\) is initially unknown. The current knowledge up to timestep \(t\) for these random variables \(\bar{\alpha}_{i_{z_{j}}}\) is denoted by the estimated expectation \(\widehat{\alpha}_{i_{z_{j}}}\).
\[\mathbf{R}_{\mathbf{A}(t)}(t)=\sum_{w_{i},s_{j}}a_{ij}(t)\bar{\alpha}_{ij}(t) \tag{4}\]
Since the distribution of \(\bar{\alpha}_{i_{z_{j}}}\) is unknown, the goal of this CMAB-based approach is to learn the policy, that minimizes the overall _regret_ up to time \(t\). This regret is defined as the difference between expected reward with perfect knowledge of preferences and that obtained by the policy over time:
\[\mathcal{R}(t)=t\,\mathbf{R}_{\mathbf{A}(t)}^{*}(t)-\mathbb{E}\Big[\sum_{t^{\prime}=1}^{t}\mathbf{R}_{\mathbf{A}(t^{\prime})}(t^{\prime})\Big], \tag{5}\]
where \(\mathbf{R}_{\mathbf{A}(t)}^{*}(t)\) is the optimal reward obtained with perfect knowledge of the preference variables. Even though minimizing the regret is a difficult problem, \(CARS\) ensures that the regret is bounded, meaning the non-optimal actions will be picked only a limited number of times and eventually the learned policy will converge towards optimal. We present a modified objective function from UCB1 algorithm to select the action matrix as follows.
\[\mathbf{A}(t)=\arg\max_{\mathbf{A}\in\mathcal{F}}\sum_{w_{i}\in\mathcal{W}} \sum_{s_{j}\in\mathcal{S}}a_{ij}\left(\widehat{\alpha}_{i_{z_{j}}}+\sqrt{ \frac{(Q+1)\ln t}{m_{i_{z_{j}}}}}\right) \tag{6}\]
where \(Q=|\mathcal{W}|\times|z_{j}|\) is the total number of variables and \(m_{i_{z_{j}}}\) is the number of observations so far for the variable \(\bar{\alpha}_{i_{z_{j}}}\).
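For concreteness, the following is a minimal sketch of how the UCB-style indices in Eq. (6) could be computed before being passed as objective coefficients to the POTR solver. The array shapes and the function name are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def ucb_indices(alpha_hat, counts, t):
    """UCB-style index of Eq. (6) for every (worker, task-type) pair.

    alpha_hat : (|W|, |Z|) array of estimated preferences.
    counts    : (|W|, |Z|) array of observation counts m_{i z_j} (assumed >= 1).
    t         : current timestep (>= 1).
    """
    Q = alpha_hat.size                              # |W| x |Z| variables in total
    bonus = np.sqrt((Q + 1) * np.log(t) / counts)   # exploration term
    # These indices replace the unknown true preferences as the
    # objective coefficients of the POTR assignment problem.
    return alpha_hat + bonus
```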
At each timestep \(t\), we solve the \(POTR\) problem with the CMAB-based objective function in Eq. (6) instead of Eq. (1), subject to the same constraints (1a)-(1f). By solving this modified problem, the set of optimal actions (or recommendation lists) for each worker is selected based on the preference estimates up to timestep \((t-1)\). For this purpose, we keep track of \(\widehat{\alpha}_{iz_{j}}\) along with \(m_{iz_{j}}\). These two quantities are then used to update the current estimate of the variable \(\bar{\alpha}_{iz_{j}}\) at time \(t\) based on the worker's engagement with the recommendation, i.e. whether the worker chooses to submit a bid or not. Note that if a worker chooses to submit a bid, they must complete the task if assigned.
\[\widehat{\alpha}_{iz_{j}}(t) =\begin{cases}\frac{\widehat{\alpha}_{iz_{j}}(t-1)m_{iz_{j}}(t-1)+ \alpha_{iz_{j}}(t)}{m_{iz_{j}}(t-1)+1}&\text{if }0<b_{ij}<\infty,\\ \widehat{\alpha}_{iz_{j}}(t-1)&\text{otherwise.}\end{cases} \tag{7}\] \[m_{iz_{j}}(t) =m_{iz_{j}}(t-1)+1 \tag{8}\]
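The update in Eqs. (7)-(8) is an incremental mean over the rounds in which a bid was observed. A minimal sketch is given below; the assumption here is that the observed preference signal \(\alpha_{iz_{j}}(t)\) equals 1 when a finite, non-zero bid is submitted, and that the count advances in every round, as in Eq. (8).

```python
def update_preference(alpha_hat, counts, i, z, bid_submitted):
    """Apply Eqs. (7)-(8) for worker i and task type z after one round.

    Following Eq. (7), the running mean is only refreshed when a bid
    0 < b_ij < inf was received (observed reward assumed to be 1.0);
    Eq. (8) always advances the observation count.
    """
    m = counts[i, z]
    if bid_submitted:
        alpha_hat[i, z] = (alpha_hat[i, z] * m + 1.0) / (m + 1)
    counts[i, z] = m + 1
    return alpha_hat, counts
```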
We present the _CARS_ algorithm in Alg. 1. CARS begins by collecting information on workers and tasks in lines \(1-2\). It then sends out a personalized recommendation to each worker by solving the optimization problem with Eq. (6) as the objective function and constraints (1a)-(1f) (lines \(3-4\)). Then, it collects the bids for the recommended tasks from the workers (line \(5\)). Finally, the current knowledge of the workers' bidding probabilities is updated according to Eqs. (7) and (8) based on how the workers respond to the recommendations (lines \(5-6\)). For the update process, recommendations that receive a bid from workers are taken as positive reinforcement and recommendations that do not receive bids as negative reinforcement. In the following, we prove that Alg. 1 has bounded regret and thus eventually converges to the optimal policy in a finite number of time-steps.
```
1 \(\forall w_{i}\in\mathcal{W}_{t}\), collect the workers info \(w_{i}=<c_{i},e_{i},r_{i},r_{i}^{min}>\) ;
2 \(\forall s_{j}\in\mathcal{S}_{t}\), collect the tasks \(s_{j}=<z_{j},c_{j},d_{j}>\) ;
3 Solve the \(POTR\) problem with Eq. (6) as objective and constraints (1a)-(1f) to obtain the action matrix \(\mathbf{A}(t)\) ;
4 Push the personalized recommendation list derived from \(\mathbf{A}(t)\) to each worker \(w_{i}\in\mathcal{W}_{t}\) ;
5 \(\forall w_{i}\in\mathcal{W}_{t}\), collect the bids \(\mathcal{B}_{i}\) submitted for the recommended tasks ;
6 Update \(\widehat{\alpha}_{iz_{j}}\) and \(m_{iz_{j}}\) according to Eqs. (7) and (8) based on the workers' responses ;
```
**Algorithm 1**CMAB-based Algorithm for task Recommendation System (CARS)
**Theorem 2**.: _CARS provides bounded regret given by:_
\[\mathcal{R}(t)\leq\left[\frac{4a_{max}^{2}Q^{3}(Q+1)\ln(t)}{(\Delta_{min})^{2}} +\frac{\pi^{2}}{3}Q^{2}+Q\right]\Delta_{max}, \tag{9}\]
_where \(a_{max}\) is defined as \(\max\limits_{\mathbf{A}\in\mathcal{F}}\max\limits_{i,j}a_{ij}\). Moreover, \(\Delta_{min}=\min\limits_{\mathbf{R_{A}}<\mathbf{R^{*}}}(\mathbf{R^{*}}- \mathbf{R_{A}})\) and \(\Delta_{max}=\max\limits_{\mathbf{R_{A}}<\mathbf{R^{*}}}(\mathbf{R^{*}}- \mathbf{R_{A}})\) are the minimum and maximum differences from the reward obtained with perfect knowledge of the users' preferences, respectively._
Proof.: The proof is obtained following Theorem 2 of [30].
However, as shown in Theorem (1), finding the optimal solution of the winner determination problem (the \(WiBS\) problem, Eqs. (2)-(3e)) is NP-hard. Therefore, we devise a bipartite matching-based heuristic for winning bid determination with polynomial time complexity for the worker-task assignment.
### _Winning Bid Selection using Weighted Bipartite Matching_
The \(WiBS\) problem formulation in Eq. (3) is an extension of one-to-one weighted matching. However, the matching has to select minimum-weight edges for task allocation while satisfying the energy budget constraint for V2G tasks. Therefore, we develop a heuristic inspired by bipartite minimum weighted matching, which can be solved in polynomial time using Karp's algorithm [31]. To satisfy the energy budget constraint, we employ an iterative matching that removes the highest-weight edges from the previous matching until the budget is met. Simply put, the algorithm runs the minimum weighted matching and, if the result does not satisfy the budget constraint, removes the first \(z\) highest-weight edges connected to non-V2G tasks from the previous matching and then runs another round of matching, until a feasible solution is found.
```
Input : Sets of Workers (\(\mathcal{W}\)) and Spatial Tasks (\(\mathcal{S}\)), Bids (\(\mathcal{B}\))
Output : Winning bids with final pay (\(\mathbf{P}\))
/* Initialization */
1 \(\Phi_{out}=\{\mathcal{W}\cup\mathcal{S},E_{\Phi}=\emptyset\};\Phi_{temp}=\emptyset;P=\emptyset\) ;
/* Generate bipartite graph \(G\) */
2 \(\forall s_{j}\in\mathcal{S}\), if \(g(z_{j})=1\) then \(V\leftarrow\{s_{j}\}\) else \(R\leftarrow\{s_{j}\}\) ;
3 \(\forall w_{i}\in\mathcal{W}\), collect their respective bids \(\mathcal{B}_{i}\) ;
4 Build Bipartite Graph \(G=\{\mathcal{W}\cup\mathcal{S},E_{G}=\emptyset\}\) ;
5 for each \(w_{i}\in\mathcal{W},s_{j}\in\mathcal{S}\) do
6     if \(b_{ij}>0\) then Add edge \((w_{i},s_{j})\) to \(E_{G}\) with weight \(b_{ij}\) ;
7 end for
/* Run minimum weighted bipartite matching until termination */
8 while \((w_{i},s_{j})\in E_{out}\) do
9     \(E_{out}\leftarrow\) Perform Minimum Weighted Bipartite Matching on \(G\) ;
10    Output graph \(\Phi_{out}=\{\mathcal{W}\cup\mathcal{S},E_{out}\}\), where \(E_{out}\subseteq E_{G}\) ;
      /* Remove edges if V2G energy budget is not met, and run MWM on reduced \(G\) again */
11    if \(\sum\limits_{(w_{i},s_{j})\in E_{out}}g(z_{j})l_{ij}<\mathcal{E}\) then
12        \(Z\leftarrow\) Select the first \(z\) highest-weight edges \(\in\Phi_{out}\) and \(R\) ;
13        if \(Z\neq\emptyset\) then Remove all edges \(\in Z\) from \(G\) and \(\Phi_{out}\) else \(\Phi_{temp}=\Phi_{out}\) ;
14    end if
15 end while
16 \(\Phi_{out}=E_{out}\) ;
/* Final Payment and Task Assignment */
17 \(\forall w_{k}\in\mathcal{W},P_{k}\leftarrow\) Second to the selected bid \(b_{kj}\) ;
18 Assign the tasks to winning workers along with final price \(\mathbf{P}\) ;
```
**Algorithm 2**Bipartite Matching-based Winner selection (BMW)
This algorithm, called _Bipartite Matching-based Winner selection (BMW)_, is presented in Alg. 2. \(BMW\) takes the set of available workers \(\mathcal{W}\), the tasks \(\mathcal{S}\), and the set of bids \(\mathcal{B}\) as input, and finds the winning bids with final pay \(P\) as the output. In line \(1\), the algorithm initializes the output graph \(\Phi_{out}\), a temporary graph \(\Phi_{temp}\) used for the iterative matching, and \(P\). Then it creates separate sets \(V\) and \(R\) for the V2G and non-V2G tasks in line \(2\), and collects the bids from all workers (line 3). With the information on bids, \(BMW\) generates a bipartite graph \(G\) between the bipartite sets of workers \(\mathcal{W}\) and tasks \(\mathcal{S}\), and adds edges between those nodes that have non-zero bids, i.e. worker \(w_{i}\) with non-zero bid \(b_{ij}\) is connected with task \(s_{j}\) (lines \(4-7\)). It then runs the bipartite matching iteratively with the while loop in lines \(8-15\). Initially, both of the conditions for the while loop are true and therefore the algorithm runs a first round of Minimum Weighted Bipartite Matching on graph \(G\) (line \(9\)). It assigns the matched graph to the output graph \(\Phi_{out}\) (line \(10\)) and checks whether the energy budget for V2G tasks is satisfied (line \(11\)). If the budget is met in the first round, the algorithm breaks out of the while loop and determines the final payment and task assignment. If it is not met, BMW removes from \(G\) the first \(z\) highest-weight edges in \(\Phi_{out}\) that just cover the unmet portion of the energy budget (lines \(12-13\)). Then, since both conditions are still true, the algorithm runs another round of matching on the reduced graph \(G\). Eventually, the final matching in the output graph \(\Phi_{out}\) is used as the winning task assignments, with the final payment as per the bid (lines \(16-18\)).
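The sketch below illustrates the core loop of BMW using NetworkX (which the experiments use), with `minimum_weight_full_matching` playing the role of the minimum weighted bipartite matching in line 9; that function requires SciPy and assumes a full matching still exists after edge removal. The error handling, the exact termination conditions of Alg. 2, and the data structures (`bids`, `is_v2g`, `energy`) are simplifying assumptions rather than the paper's implementation.

```python
import networkx as nx
from networkx.algorithms.bipartite import minimum_weight_full_matching

def bmw_sketch(workers, tasks, bids, is_v2g, energy, budget, z=1):
    """Iterative min-weight matching: drop the z most expensive non-V2G
    assignments whenever the V2G energy budget is not met.

    bids[(w, s)]   : bid of worker w for task s (only non-zero bids present)
    is_v2g[s]      : True if task s is a V2G task
    energy[(w, s)] : energy l_ws delivered if w serves V2G task s
    """
    G = nx.Graph()
    G.add_nodes_from(workers, bipartite=0)
    G.add_nodes_from(tasks, bipartite=1)
    G.add_weighted_edges_from((w, s, b) for (w, s), b in bids.items())

    while G.number_of_edges() > 0:
        matching = minimum_weight_full_matching(G, top_nodes=workers)
        matched = [(w, matching[w]) for w in workers if w in matching]
        supplied = sum(energy.get((w, s), 0.0) for w, s in matched if is_v2g[s])
        if supplied >= budget:                   # V2G budget met: feasible
            return matched
        # Budget not met: remove the z most expensive non-V2G assignments.
        removable = sorted(((w, s) for w, s in matched if not is_v2g[s]),
                           key=lambda e: bids[e], reverse=True)[:z]
        if not removable:                        # nothing left to remove
            return matched
        G.remove_edges_from(removable)
    return []
```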
**Theorem 3**.: _The time complexity of the \(BMW\) algorithm is \(O(|\mathcal{W}|.|\mathcal{S}|^{2}.log(|\mathcal{S}|))\)._
Proof.: The complexity is dominated by the \(while\) loop (lines \(8-15\)), executed at most \(|\mathcal{S}|\) times. It involves running minimum weighted full matching as presented in [31], which has a run time of \(O(|\mathcal{W}|.|\mathcal{S}|.log(|\mathcal{S}|))\). Therefore, the overall complexity of BMW is \(O(|\mathcal{W}|.|\mathcal{S}|^{2}.log(|\mathcal{S}|))\).
## VI Experiment
In this section, we present the experimental details of the proposed system, the comparison approaches, and a detailed study of the performance of the algorithms.
### _Experimental Setup_
Our experimental setup consists of modeling the workers, the tasks, and the simulation platform. For the workers, we gathered publicly available data on \(54\) different EV models covering battery size, range, charging power and charging speed, and formulated an individual profile for each EV in question. For the ride-sharing tasks, the high-volume taxi trip data of New York City (NYC) from 2013 [32] was used. The V2G tasks were generated from the 15-minute energy consumption data of 25 NYC residences from PecanStreet [33]. In the absence of a real dataset of battery swapping tasks, half of the ride-sharing tasks were converted into battery swapping tasks, given their similar profile (batteries transported instead of passengers). These tasks are spatial; therefore, we collect information on the locations, distance, and time required to complete the tasks.
Furthermore, the e-Uber crowdsourcing simulation platform was developed in Python using the Gurobi, NetworkX, and PyTorch libraries. We consider a reverse auction period resolution of 15 minutes, which corresponds to the standard set by the grid for energy trading. This means that every \(15\) minutes the e-Uber algorithm gathers the tasks, pushes the personalized list of tasks to workers, collects the bids, and assigns the tasks to EV workers so as to minimize the overall cost for the task requesters. We set the search radius for the tasks to \(\lambda=10\) km and the maximum length of the recommendation list to \(K=5\). The energy budget for each \(15\)-minute period was set to the total energy of all \(25\) available V2G tasks. The user preferences were sampled uniformly from the set \(\{0.1,0.4,0.5,0.7,0.9,1.0\}\). The energy, time and location of the EVs are tracked and updated accordingly so as to simulate their real-world trip behavior. If the battery level of a car falls below the minimum level, it is considered for charging in the next time-step.
As a comparison approach, we use the task-centric winner selection algorithm presented in [23], referred to as \(BG\) (baseline greedy). This approach neither considers user preference in the problem setting nor includes a personalized recommendation system. For comparison purposes, we therefore augment this method with a perfect-knowledge-based recommendation system that pushes the \(K\) best tasks as recommendations to each worker. We then implement the algorithm as presented in [23], which sorts the bids from lowest to highest for each task and assigns them one by one. Note that this approach may not guarantee a complete matching between workers and tasks, since the tasks processed towards the end may not have any workers left to choose from, because of the limited number of bids and the greedy selection. We use \(BG\) as our baseline and compare the performance of our algorithms \(CARS\) and \(BMW\), along with their perfect-knowledge variants (\(PK\)), which have perfect knowledge of the worker preferences and thus do not involve learning, and the optimal solution (\(OPT\)) to the \(WiBS\) problem. The ride-share dataset in question contains the actual ride fare for a specific car. However, we require bids from each vehicle for the recommended tasks, and a realistic model for bid generation is quite difficult to obtain. Therefore, we trained a Deep Neural Network on the existing dataset to estimate the ride fare of a given ride-sharing task, the details of which are presented in the following.
### _Results_
**Bid Generation DNN Model**
We used 11 months of taxi data to train and test the DNN model with an 80-20 train-test split. The DNN model consisted of 3 hidden layers of sizes \((132,132,64)\). We employed the ReLU activation function as well as one-hot encoding for the input features, and set the learning rate to 0.0001. The training was carried out for \(3\) epochs with \(7974\) training batches and a batch size of \(64\). The average training loss curve presented in Fig. 3 shows that the loss percentage reduces to \(\sim 2.5\%\) after \(\sim 12,000\) training steps. On the test set, the bid generation DNN model produced highly accurate fare predictions with a \(96.45\%\)\(R^{2}\)-score. This can also be observed in Fig. 4, which plots a sample of predicted fares against actual fares to show the testing accuracy.
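A minimal PyTorch sketch consistent with the stated configuration (three hidden layers of sizes 132, 132 and 64, ReLU activations, learning rate 0.0001, batch size 64) is given below. The input dimensionality, the feature encoding and the MSE regression loss are assumptions, since the exact features and loss are not listed here.

```python
import torch
import torch.nn as nn

class BidNet(nn.Module):
    """Fare/bid regressor with hidden layers (132, 132, 64) and ReLU;
    `in_dim` is the size of the one-hot-encoded trip feature vector
    (an assumption in this sketch)."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 132), nn.ReLU(),
            nn.Linear(132, 132), nn.ReLU(),
            nn.Linear(132, 64), nn.ReLU(),
            nn.Linear(64, 1),          # predicted fare / bid
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_step(model, optimizer, features, fares):
    """One supervised regression step (MSE loss assumed)."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(features), fares)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch:
# model = BidNet(in_dim=NUM_FEATURES)
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```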
This DNN model was then deployed in conjunction with e-Uber to simulate the bidding action of each worker for each recommended task in the personalized list. In the case of V2G tasks, the energy to be supplied by the EV was converted into its distance equivalent and fed into the DNN model along with the other input features to obtain the bids.
**Experimental Observations**
1. **Performance over time - Total Cost & # of Tasks**: In the first experimental scenario, we observe the performance of the algorithms as a snapshot of objective values over 24 hours (i.e. \(24\times 4=96\) timeslots). We present the objective values from midnight to the next midnight as a line plot in Fig. 5, and cumulative bar plots of the objective values (Fig. 6) and total tasks completed (Fig. 7) over a day. Although all the proposed approaches start from the same initial state (except for the knowledge on preferences), they may have different successive states, since the solution is affected by the matching in the previous timeslot, the availability of specific workers for the next round, and the distance travelled by these workers for the previous (or next) assignment. Therefore, we employ the cumulative objective value and the cumulative number of tasks completed as metrics for a fair comparison of the approaches in Fig. 6. The cumulative objective value reflects the overall quality of the task assignments made so far, while the cumulative tasks completed gives the total number of matches made by the respective approach until the end of that timeslot. As seen in the line plot in Fig. 5 and the bar plot in Fig. 6, the solution generated by the baseline greedy approach \(BG\) has the lowest objective value, as it assigns each task to the cheapest available bid; however, unlike the other approaches, it does not achieve the maximum number of matchings possible, as shown in Fig. 7. Moreover, \(BG\) mostly violates the V2G requirement, meaning it generates infeasible solutions and hence fails for this problem setting. \(PK-OPT\) produces the best result, since it involves solving the \(POTR\) and \(WiBS\) problems optimally with perfect knowledge of the worker preferences. It is followed by the optimal solution \(OPT\) paired with our proposed learning framework for e-Uber, \(CARS\), which performs close to optimal in terms of both objective values and number of tasks completed. Although \(CARS-OPT\) finds the optimal solution, it has no initial knowledge of the preferences. Therefore, it generates sub-optimal recommendation lists, which then affect the solution to the \(WiBS\) problem and hence the overall performance. However, even with the online learning framework, it produces results similar to \(PK-OPT\). We observe a similar pattern with \(PK-BMW\) and \(CARS-BMW\), since they both rely on the bipartite matching-based approach to find a feasible solution. Since \(PK-BMW\) sends the optimal recommendations to workers for collecting bids, it has a higher overall performance than \(CARS-BMW\), which learns the preferences over time. The gap between the best-performing \(PK-OPT\) and the worst-performing \(CARS-BMW\), however, is less than \(\$150\), which amounts to a price increase of \(\sim\$3/\text{task}\) in the worst case, with an average of \(50\) tasks per timeslot as in our case. We observe that the cumulative objective values grow almost linearly for all approaches and, as expected, the performance is best for \(PK-OPT\), followed by \(CARS-OPT\), then \(PK-BMW\), and finally \(CARS-BMW\). However, the gap in cumulative objective value between the bipartite heuristic and the optimal solution increases over time due to the heuristic's sub-optimal performance. Note that the baseline \(BG\) yields a lower cumulative objective value, but it fails to generate a maximal matching, as seen in Fig. 7.
The number of tasks completed by the proposed approaches exceeds that of \(BG\) by more than 850 over the span of \(24\) hours.
2. **Average final price per task and scaling**: In this experiment, we track the average final price per task while scaling the available tasks from \(32\%\) to \(64\%\) and then to \(100\%\). To scale the tasks, we increase the number of tasks of each type proportionally. The result is plotted in Fig. 8. As the system scales, the average final price per task rises for all approaches, since the overall cost of the system also increases with the number of tasks. However, it is also observed that \(CARS-BMW\) and \(PK-BMW\) suffer more as the system scales. The margin between these and the optimal approaches grows drastically, up to \(\sim\$2\). This can be attributed mainly to the increased complexity of the problem as the number of tasks grows, so that the bipartite matching-based heuristic finds less efficient solutions compared to the optimal. The optimal solutions, however, show only a nominal increase in their average price per task (\(\sim\$10\)) with scaling, compared to the rest.
We also study the effect of scaling the V2G tasks on the average final price per task in Fig. 10. We observed a similar trend to the above, but with a noticeable gap between the optimal and heuristic approaches when only \(32\%\) of the V2G tasks are available. This results from the sub-optimal performance owing to the smaller number of V2G tasks compared to the rest, and hence an unequal rate of learning the preferences.
3. **Learning accuracy for preferences - MAE**: To study the quality of the proposed CMAB-based learning algorithm \(CARS\) in conjunction with the optimal solver and \(BMW\), we use the Mean Absolute Error (MAE) of the learned preferences over time, presented in Fig. 10. Both approaches use the same learning algorithm, but the solution to the \(WiBS\) problem differs and thus affects the learning performance; this difference, however, is negligible. Initially, the MAE is \(0.28\) and then rapidly decreases to less than \(0.05\) for both approaches by \(250\) timesteps. The difference in learning efficacy between \(CARS-OPT\) and \(CARS-BMW\) reduces over time and is almost the same by \(250\) timesteps, as seen in the graph. Since by \(500\) timesteps the system has gathered sufficient knowledge of the workers' preferences, the MAE falls to \(0.03\), reflecting the efficacy of the proposed CMAB-based preference learning.
Furthermore, we present a cumulative reward plot in Fig. 11, which also shows that the curves of both learning approaches converge after \(200\) timesteps.
4. **Dependency on \(K\)**: In this experiment, we discuss the dependency of the performance of our proposed approach on the recommendation list length \(K\), as presented in Fig. 12. Increasing the number of recommendations \(K\) increases the chance of receiving more good-quality bids from the same number of workers at the same time. This in turn helps to find better solutions, which reduces the overall cost of the system. This is also verified by the observations in Fig. 12: as we increase \(K\), the objective value per task over a day's period decreases for all four approaches. Although the perfect-knowledge and optimal methods do not show a significant difference in performance with varied \(K\), the effect is more pronounced for the bipartite matching-based \(PK-BMW\) and \(CARS-BMW\), where the learning of preferences benefits from the increased number of bids to choose from as \(K\) grows. However, it should be noted that pushing \(10\) recommendations at each timestep can be overwhelming for workers, and therefore keeping the recommendation list as short as possible is desirable.
## VII Conclusion
e-Uber is a promising crowdsourcing platform for improving the efficiency and sustainability of ride-sharing and energy-sharing services through the use of EVs. It uses a reverse auction mechanism to assign spatial tasks to EV drivers based on their preferences, battery level, and other realistic constraints such as the minimum energy requirement for the grid and one-to-one assignment. To optimize the task recommendation process, the platform incorporates user behavioral models, including worker preferences and bounded rationality. However, as these preferences are not known _a priori_, e-Uber uses a reinforcement learning framework, the combinatorial multi-armed bandit, to learn the preferences at runtime based on worker feedback. We propose the \(CARS\) algorithm that finds the optimal solution to both the \(POTR\) and \(WiBS\) problems. Since the \(WiBS\) problem is NP-hard, we also propose a bipartite matching-based heuristic, called \(BMW\), that finds a feasible solution to the winner selection while meeting the minimum V2G energy requirement. Experimental results and simulations demonstrate the effectiveness of e-Uber's approaches, which outperform the baseline algorithm by serving more than 850 additional tasks within \(24\) hours of simulation. Moreover, the baseline often fails to find a feasible solution, rendering it inapplicable in this problem setting.
Future research could focus on implementing and evaluating e-Uber in real-world settings. This includes the assessment of the impact of different task recommendation and decision prediction algorithms, as well as the integration of new features such as real-time traffic and energy data and dynamic pricing. By exploring these areas, e-Uber has the potential to significantly improve the efficiency and sustainability of ride-sharing and energy-sharing services through the use of EVs.
## Acknowledgment
This work is supported by the NSF grant EPCN-1936131 and NSF CAREER grant CPS-1943035.
|
2302.14816 | Monocular Depth Estimation using Diffusion Models | We formulate monocular depth estimation using denoising diffusion models,
inspired by their recent successes in high fidelity image generation. To that
end, we introduce innovations to address problems arising due to noisy,
incomplete depth maps in training data, including step-unrolled denoising
diffusion, an $L_1$ loss, and depth infilling during training. To cope with the
limited availability of data for supervised training, we leverage pre-training
on self-supervised image-to-image translation tasks. Despite the simplicity of
the approach, with a generic loss and architecture, our DepthGen model achieves
SOTA performance on the indoor NYU dataset, and near SOTA results on the
outdoor KITTI dataset. Further, with a multimodal posterior, DepthGen naturally
represents depth ambiguity (e.g., from transparent surfaces), and its zero-shot
performance combined with depth imputation, enable a simple but effective
text-to-3D pipeline. Project page: https://depth-gen.github.io | Saurabh Saxena, Abhishek Kar, Mohammad Norouzi, David J. Fleet | 2023-02-28T18:08:21Z | http://arxiv.org/abs/2302.14816v1 | # Monocular Depth Estimation using Diffusion Models
###### Abstract
We formulate monocular depth estimation using denoising diffusion models, inspired by their recent successes in high fidelity image generation. To that end, we introduce innovations to address problems arising due to noisy, incomplete depth maps in training data, including _step-unrolled denoising diffusion_, an \(L_{1}\) loss, and depth infilling during training. To cope with the limited availability of data for supervised training, we leverage pre-training on self-supervised image-to-image translation tasks. Despite the simplicity of the approach, with a generic loss and architecture, our _DepthGen_ model achieves SOTA performance on the indoor NYU dataset, and near SOTA results on the outdoor KITTI dataset. Further, with a multimodal posterior, DepthGen naturally represents depth ambiguity (e.g., from transparent surfaces), and its zero-shot performance combined with depth imputation, enable a simple but effective text-to-3D pipeline. Project page: [https://depth-gen.github.io](https://depth-gen.github.io)
Machine Learning, ICML
## 1 Introduction
Diffusion probabilistic models have emerged as a powerful family of generative models for high fidelity image synthesis, capturing remarkably rich knowledge about the visual world (Sohl-Dickstein et al., 2015; Ho et al., 2020; Ramesh et al., 2022; Saharia et al., 2022). Given their impressive generative capabilities, it is natural to ask to what extent are these models effective for image to image vision tasks like segmentation, optical flow or depth estimation? Here, we adapt diffusion models to the problem of monocular depth estimation and investigate their effectiveness in the context of a large body of prior work. We demonstrate state of the art performance on benchmarks while also enabling multi-modal inference to resolve depth ambiguities, and exploiting depth imputation for text to 3D generation.
Two key issues in training diffusion models for monocular depth inference concern the amount and quality of available training data. First, much of the existing data is noisy and incomplete (e.g., see Figs. 3 and 9). This presents a challenge for the conventional training framework and iterative sampling in diffusion models, leading to a problematic distribution shift between training and testing. To mitigate these issues we propose the use of an \(L_{1}\) loss for robustness, infilling missing depth values during training, and the introduction of _step-unrolled denoising diffusion_.
Given the limited availability of labelled training data, we also consider the use of self-supervised pre-training. This leverages the strong performance of diffusion models on tasks like colorization and inpainting (e.g., Saharia et al., 2022), capturing rich image structure that may transfer to other tasks. Accordingly, we propose a training pipeline comprising multi-task self-supervised pre-training followed by supervised fine-tuning. The model can then be used zero-shot or it can be further fine-tuned for a specific domain.
The resulting model, _DepthGen_, outperforms SOTA baselines on the indoor NYU dataset, and is competitive on KITTI. Ablations show that unsupervised pre-training, depth infilling, the \(L_{1}\) loss, and step-unrolled denoising diffusion all significantly improve performance. As a probabilistic model, DepthGen has other attractive properties: With its ability to represent multi-modal distributions, we find that it can resolve depth ambiguities, e.g., due to reflective or transparent surfaces. Given the ease of imputation with diffusion models, DepthGen can also be used to infer missing depth values. We exploit this property, along with its zero-shot capability and existing text to image models to build a simple but effective framework for text to 3D scene generation and novel view synthesis.
In summary, our contributions are as follows:
1. We introduce DepthGen, a diffusion model for monocular depth estimation, comprising self-supervised pre-training and supervised fine-tuning. Without specialized loss functions or architectures, it achieves SOTA relative error of 0.074 on the NYU benchmark.
2. To train diffusion models on noisy, incomplete depth data, we advocate the use of an \(L_{1}\) loss, depth infilling, and step-unrolled denoising diffusion (SUD) to reduce latent distribution shift between training and inference.
3. We show that DepthGen enables multimodal depth inference, and imputation of missing depths e.g, for text-to-3D generation and novel view synthesis.
## 2 Related Work
**Monocular depth estimation** is essential for many vision applications (Jing et al., 2022; Zhou et al., 2019). And recent progress has been impressive, with the development of specialized loss functions and architectures (e.g., Saxena et al., 2005, 2009; Eigen et al., 2014; Eigen and Fergus, 2014; Laina et al., 2016; Cao et al., 2016; Fu et al., 2018; Bhat et al., 2021; Li et al., 2022; Agarwal and Arora, 2022). We build on this rich literature, but with a simple, generic architecture, leveraging recent advances in generative models.
Prior work has shown that self-supervised tasks like colorization (Zhang et al., 2016; Larsson et al., 2016) serve as effective pre-training for downstream vision tasks. This motivates the choice to initialize our model with Palette (Saharia et al., 2022) style multi-task pre-training on the self-supervised image-to-image translation tasks. Self-supervised training using masked prediction has also recently been found to be particularly effective (Xie et al., 2022), with subsequent work, concurrent to ours, establishing the current SOTA (Ning et al., 2023). Our findings also support self-supervised pre-training, albeit with diffusion-based image-to-image translation, and we establish a new SOTA while also representing multi-modality and supporting zero-shot depth completion.
Large-scale in-domain pre-training has also been effective for depth estimation (Ranftl et al., 2019, 2021; Ren and Lee, 2017), which we find to be the case here as well.
**Diffusion models** have excelled at image generation, including unconditional and class-conditional generation (Dhariwal and Nichol, 2022; Ho et al., 2022), image-to-image translation (Saharia et al., 2022a), text-to-image synthesis (Rombach et al., 2022; Ramesh et al., 2022; Nichol et al., 2021; Saharia et al., 2022), and text-guided image editing (Brooks et al., 2022; Wang et al., 2022; Hertz et al., 2022; Meng et al., 2021). Despite this success, they have not been widely applied to vision tasks, except for recent work on image enhancement (Saharia et al., 2022), used here for pre-training, and work on panoptic segmentation (Chen et al., 2022). To the best of our knowledge, ours is the first to apply diffusion models to monocular depth estimation.
Also related to our work are diffusion models for view synthesis from multi-view image data (Watson et al., 2022), generative models for point cloud data (Nichol et al., 2022), text-to-3D generative models (Poole et al., 2022) and models for depth-aware novel view synthesis (Rockwell et al., 2021; Liu et al., 2021). While work on 3D generative models is exciting, our primary interest here is monocular depth estimation.
## 3 DepthGen
### Background
Diffusion models are latent-variable generative models trained to transform a sample of a Gaussian noise into a sample from a data distribution (Sohl-Dickstein et al., 2015; Ho et al., 2020). They comprise a _forward process_ that gradually annihilates data by adding noise, as 'time' \(t\) increases from 0 to 1, and a learned _generative process_ that reverses the forward process, starting from a sample of random noise at \(t=1\) and incrementally adding structure (attenuating noise) as \(t\) decreases to 0. A conditional diffusion model conditions the steps of the reverse process. In the case of depth estimation, our conditioning signal is an RGB image, \(\mathbf{x}\), and the target is a conditional distribution over depth maps, \(p(\mathbf{y}\,|\,\mathbf{x})\).
Central to the model is a denoising network \(f_{\theta}\) that is trained to take a noisy sample at some time-step \(t\), and predict a less noisy sample. Using Gaussian noise in the forward process, one can express the training objective over the sequence of transitions (as \(t\) slowly decreases) as a sum of non-linear regression losses, i.e.,
\[\mathbb{E}_{(\mathbf{x},\,\mathbf{y})}\,\mathbb{E}_{(t,\,\mathbf{\epsilon})}\bigg{\|}f_{ \theta}(\mathbf{x},\underbrace{\sqrt{\gamma_{t}}\,\mathbf{y}+\sqrt{1-\gamma_{t}}\, \mathbf{\epsilon}}_{\mathbf{y_{t}}},\,t)-\mathbf{\epsilon}\bigg{\|}_{2}^{2} \tag{1}\]
where \(\mathbf{\epsilon}\sim\mathcal{N}(0,I)\), \(t\sim\mathcal{U}(0,1)\), and where \(\gamma_{t}>0\) is computed with a pre-determined noise schedule. For inference (i.e., sampling), one draws a random noise sample \(\mathbf{y}_{1}\), and then iteratively uses \(f_{\theta}\) to estimate the noise, from which one can compute the next latent sample \(\mathbf{y}_{s}\), for \(s<t\).
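As a concrete illustration of Eq. (1), the snippet below sketches a single, generic denoising training step: sample a time and a noise vector, form the noisy target, and regress the noise. The network `f_theta` and the schedule `gamma` are placeholders; this is a minimal sketch, not DepthGen's actual training code.

```python
import torch

def diffusion_loss(f_theta, x_rgb, y_depth, gamma):
    """Generic denoising objective of Eq. (1).

    f_theta : network (conditioning image, noisy target, time) -> noise estimate
    gamma   : callable mapping t in (0, 1) to the signal level gamma_t
    """
    t = torch.rand(y_depth.shape[0], device=y_depth.device)     # t ~ U(0, 1)
    eps = torch.randn_like(y_depth)                              # eps ~ N(0, I)
    g = gamma(t).view(-1, *([1] * (y_depth.dim() - 1)))          # broadcast per-pixel
    y_t = torch.sqrt(g) * y_depth + torch.sqrt(1.0 - g) * eps    # forward process
    return ((f_theta(x_rgb, y_t, t) - eps) ** 2).mean()          # L2 on the noise
```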
Figure 1: Training Architecture. Given a groundtruth depth map, we first infill missing depth using nearest neighbor interpolation. Then, following standard diffusion training, we add noise to the depth map and train a neural network to predict the noise given the RGB image and noisy depth map. During finetuning, we unroll one step of the forward pass and replace the groundtruth depth map with the prediction.
### Self-Supervised Pre-Training
DepthGen training comprises self-supervised pre-training, followed by supervised training on RGB-D data. The pre-trained model is a self-supervised multi-task diffusion model. Following (Saharia et al., 2022), we train a Palette model from scratch on four image-to-image translation tasks, i.e., colorization, inpainting, uncropping and JPEG artifact removal.
### Supervised training with noisy, incomplete depth
Following pre-training, and with minor modifications to the architecture (see Sec. 4.2), training continues on paired RGB and depth data. While straightforward conceptually, the training datasets available for depth estimation present substantial challenges. The depth maps in particular are noisy and often contain regions with missing depth values. Such _holes_ are caused by highly reflective surfaces, light-absorbing surfaces (Stommel et al., 2014), or regions outside the sensor's range of measurement. Holes are largely inconsequential for simple feed-forward nets or regression models, since one can simply backpropagate the loss from the subset of pixels with known depth values, ignoring those with missing depth. For diffusion models, however, such corruption of the training data is problematic.
Diffusion models perform inference through iterative refinement - in our case, of a depth map \(\mathbf{y}\) conditioned on an RGB image \(\mathbf{x}\). It starts with a sample of Gaussian noise \(\mathbf{y}_{1}\), and terminates with a sample from the predictive distribution \(p(\mathbf{y}_{0}\,|\,\mathbf{x})\). A refinement step from time \(t\) to \(s\), with \(s\!<\!t\), proceeds by sampling from the parameterized distribution \(p_{\theta}(\mathbf{y}_{s}\,|\,\mathbf{y}_{t},\mathbf{x})\). Simply put, during inference, each step operates on the output from the previous step. In contrast, at training the different steps are somewhat decoupled (see Eqn. 1), where the denoising network operates on a noisy version of the ground truth depth map instead of the output of the previous iteration (reminiscent of teacher forcing in training RNNs). This introduces a distribution shift between training and inference, since the marginal distribution over noisy training depth maps _with holes_ may differ significantly from the distribution of noisy depths at inference time, which should ideally (since we do not learn the distribution of holes in the loss) be a noisy version of the _true_, complete depth maps (from a perfect, noiseless sensor for instance). This has a significant negative impact on model performance. This problem is further exacerbated by structured or heavy-tailed noise in training depth maps.
We find that these problems are effectively mitigated with the following modifications during training:
**Depth interpolation.** To reduce distribution shift between training and inference we impute missing depth values. We explored several ways to accomplish this, including various interpolation schemes, and using DepthGen (trained with nearest neighbor interpolation infilling) to infill missing depth. But, empirically, we found that two straightforward steps performed as well as more sophisticated approaches. In particular, we find that nearest neighbor interpolation is sufficient to impute missing depths in indoor training data. For outdoor data we continue to use nearest neighbor interpolation, except for sky regions, as they are often large and are much further from the camera than adjacent objects in the image. We use an off-the-shelf sky segmenter (Liba et al., 2020), and then set all sky pixels to be the maximum modeled depth (here, 80m). Despite the imputation of missing depths, we note that the training loss is only computed at pixels with known (vs infilled) depth.
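A simple way to implement the nearest-neighbour infilling described above is via SciPy's Euclidean distance transform, which returns, for every missing pixel, the index of the closest valid pixel. The sketch below assumes missing depths are encoded as zeros and that an external sky mask is available for the outdoor case; it is a minimal sketch, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def infill_depth_nearest(depth, sky_mask=None, max_depth=80.0):
    """Fill missing (zero) depth values with the nearest valid neighbour;
    optionally set sky pixels to the maximum modelled depth first."""
    filled = depth.copy()
    if sky_mask is not None:
        filled[sky_mask] = max_depth
    invalid = filled <= 0
    if invalid.any():
        # For each pixel, indices of the nearest valid (non-missing) pixel.
        idx = distance_transform_edt(invalid, return_distances=False,
                                     return_indices=True)
        filled = filled[tuple(idx)]
    return filled
```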
**Step-unrolled Denoising Diffusion.** Another approach to tackling the distribution shift in the latent marginal distribution of \(y_{t}\) between training and inference is to construct \(y_{t}\) using the model's output instead of the ground truth depth. One can do this by slightly modifying the training procedure (Algorithm 1) to run one forward pass of the model and build \(y_{t}\) by adding noise to the model's output rather than the training depth map. We do not propagate gradients for this forward pass. We find that this slows down training by about 15% on a TPU v4. We refer to this as _step-unrolled denoising diffusion (SUD)_.
```
Input: rgb image \(x\), depth map \(y\)
\(t\gets U(0,1)\)
\(\epsilon\gets N(0,1)\)
\(valid\_mask=y>0\)
\(y=fill\_holes(y)\)
\(y_{t}=\sqrt{\gamma_{t}}*y+\sqrt{1-\gamma_{t}}*\epsilon\)
if \(unroll\_step\) then
    \(\epsilon_{pred}=stop\_grad(f_{\theta}(x,y_{t},t))\)
    \(y_{pred}=(y_{t}-\sqrt{1-\gamma_{t}}*\epsilon_{pred})/\sqrt{\gamma_{t}}\)
    \(y_{t}=\sqrt{\gamma_{t}}*y_{pred}+\sqrt{1-\gamma_{t}}*\epsilon\)
    \(\epsilon=(y_{t}-\sqrt{\gamma_{t}}*y)/\sqrt{1-\gamma_{t}}\)
end if
\(\epsilon_{pred}=f_{\theta}(x,y_{t},t)\)
\(loss=reduce\_mean(|\epsilon-\epsilon_{pred}|[valid\_mask])\)
```
**Algorithm 1** Train step w/ infilling and SUD.
We perform SUD during fine-tuning only, not during supervised depth pre-training. Early in training the depth predictions are likely inaccurate. So the latent marginals over the noisy training depth maps would be much closer to the desired _true_ marginals than those produced by adding noise to the model's outputs. Hence, doing SUD early in supervised pre-training is not recommended. One might consider the use of a curriculum for gradually introducing SUD in the later stages of supervised pre-training, but this also introduces additional hyper-parameters, so we simply invoke SUD during fine-tuning, and leave an exploration of curricula to future work.
This problem of training / inference distribution shift resembles that of _exposure bias_(Ranzato et al., 2016) in autoregressive models, for which the mismatch is caused by _teacher forcing_ during training (Williams and Zipser, 1989). Several solutions have been proposed for this problem in the literature (Lamb et al., 2016; Yu et al., 2016; Bengio et al., 2015). SUD also closely resembles the approach in (Savinov et al., 2022) where they perform step-unrolling for training denoising autoencoders on text.
Finally, we note that (Ning et al., 2023) faced a similar problem when training a vector-quantizer on depth data. They work around it by synthetically adding more holes following a carefully chosen masking ratio. In comparison, we prefer our approach since nearest neighbor infilling is hyper-parameter free and step-unrolled denoising diffusion could be more generally applicable to other tasks with sparse data.
\(L_{1}\)**Loss.** While the \(L_{2}\) loss in Eqn. 1 is appropriate for noise-free training data with additive Gaussian noise, good performance has been reported with an \(L_{1}\) loss during training for image-to-image translation models (Saharia et al., 2022). Given the possibility of substantial noise in depth data, especially for large depths and near holes, we hypothesize that the robustness afforded by the \(L_{1}\) loss may also be useful in training RGB-to-depth diffusion models.
## 4 Experiments
### Datasets
For unsupervised pre-training, we use the ImageNet-1K (Deng et al., 2009) and Places365 (Zhou et al., 2017) datasets and train on the self-supervised tasks of colorization, inpainting, uncropping, and JPEG decompression, following (Saharia et al., 2022).
**Indoor model.** For supervised image-to-depth pre-training of the indoor model we use the following two datasets (with dataset mixing at the batch level):
_ScanNet_(Dai et al., 2017) is a dataset of 2.5M images captured using a Kinect v1-like sensor. It provides depth maps at \(640\times 480\) and RGB images at \(1296\times 968\).
_SceneNet RGB-D_(McCormac et al., 2016) is a synthetic dataset of 5M images generated by rendering ShapeNet (Chang et al., 2015) objects in scenes from SceneNet (Handa et al., 2015) at a resolution of \(320\times 240\).
For indoor fine-tuning and evaluation we use _NYU depth v2_(Silberman et al., 2012), a commonly used dataset for evaluating indoor depth prediction models. It provides aligned image and depth maps at \(640\times 480\) resolution. We use the official split consisting of 50k images for training and 654 images for evaluation. The predicted depth maps from our model are resized to the full resolution using bilinear up-sampling before evaluation. We evaluate on a cropped region proposed by (Eigen et al., 2014) following prior work.
**Outdoor model.** For outdoor model training we use the _Waymo Open Dataset_(Sun et al., 2020), a large-scale driving dataset consisting of about 200k frames. Each frame provides RGB images from 5 cameras and LiDAR maps. We use the RGB images from the FRONT, FRONT_LEFT and FRONT_RIGHT cameras and the TOP LiDAR only to build about 600k aligned RGB depth maps.
For subsequent fine-tuning and evaluation, we use _KITTI_(Geiger et al., 2013), an outdoor driving dataset which provides RGB images and LiDAR scans at resolutions close to \(1226\times 370\). We use the training/test split proposed by (Eigen et al., 2014), comprising 26k training images and 652 test images. The predicted depth from DepthGen is up-sampled to the full resolution using bilinear interpolation before evaluation. We evaluate on a cropped region proposed in (Garg et al., 2016) following prior work.
**Data Augmentation and Preprocessing** We use random horizontal flip data augmentation for supervised depth training, which is common in prior work. Where needed, images and depth maps are resized using bilinear interpolation to the model's resolution for training. Diffusion models expect inputs and generate outputs in the range \([-1.,1.]\). We therefore normalize the depth maps to this range, using a maximum depth of 10 meters for the indoor model and a maximum depth of 80 meters for the outdoor model.
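The normalization itself can be a simple linear map; the sketch below assumes such a mapping between metric depth in \([0, d_{max}]\) and the model range \([-1, 1]\), with \(d_{max}\) equal to 10 m indoors and 80 m outdoors. The exact transform used in the paper is not specified beyond the range and the maximum depth.

```python
import numpy as np

def normalize_depth(depth_m, max_depth):
    """Linearly map metric depth in [0, max_depth] to [-1, 1] (assumed mapping)."""
    d = np.clip(depth_m, 0.0, max_depth) / max_depth   # -> [0, 1]
    return 2.0 * d - 1.0                                # -> [-1, 1]

def denormalize_depth(d_norm, max_depth):
    """Inverse mapping back to metric depth."""
    return (np.clip(d_norm, -1.0, 1.0) + 1.0) / 2.0 * max_depth
```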
### Architecture
The predominant architecture for diffusion models is the U-Net developed for the DDPM model (Ho et al., 2020), and later improved in several respects (Nichol and Dhariwal, 2021; Song et al., 2021; Dhariwal and Nichol, 2022). For DepthGen, we adapt the _Efficient U-Net_ architecture that was developed for Imagen (Saharia et al., 2022). The Efficient U-Net architecture is more efficient than the U-Nets used in prior work, because it has fewer self-attention layers, fewer parameters and less computation at higher resolutions, along with other adjustments that make it well suited to training medium resolution diffusion models.
We make several minor changes to this architecture to adapt it for image-to-depth models. We drop the text cross-attention layers but keep the self-attention layer. Efficient U-Net has six input and three output channels, since the target is a RGB image (input consists of a 3-channel source RGB image and a 3-channel noisy target image concatenated along the channel dimension). For depth models, since we have a scalar output image, we modify the architecture to have four input channels and one output channel. Note that this means we need to reinitialize the input and output convolutional kernels before the supervised depth
pretraining stage.
**Resolution.** Our re-trained Palette model was trained for images at a resolution of \(256\times 256\). For training depth models we choose resolutions that are close to this while preserving the aspect ratios of the original depth training datasets. The indoor model is trained at \(320\times 240\). For Waymo we use \(384\times 256\) and for KITTI \(416\times 128\). The model does not contain learned positional embeddings so it can be easily pretrained and finetuned at different resolutions.
### Hyper-parameters
The self-supervised model is trained for 2.8M steps with an \(L_{2}\) loss and a mini-batch size of 512. Other hyper-parameters are similar to those in the original Palette paper.
The depth models are trained with \(L_{1}\) loss. We use a constant learning rate of 1e-4 during supervised depth pretraining but switch to a slightly lower learning rate of 3e-5 during fine-tuning which we find achieves slightly better results. We do learning rate warm-up over 10k steps for all models. All depth models are trained with a smaller mini-batch size of 64. The indoor depth model is trained on a mix of ScanNet and SceneNet RGBD for 2M steps and then fine-tuned on NYU for 50k steps. The outdoor depth model is trained on Waymo for 0.9M steps and fine-tuned on KITTI for 50k steps. Other details, like the optimizer and the use of EMA are similar to those outlined in (Saharia et al., 2022).
### Sampler
We use the DDPM ancestral sampler (Ho et al., 2020) with 128 denoising steps. Increasing the number of denoising steps further did not greatly improve performance. We have not yet explored progressive distillation (Salimans and Ho, 2022) for faster sampling. We believe the results on distillation for generative image models should transfer well to image-to-depth models, thereby, reducing the gap between the speed of diffusion sampling and single-step depth estimation models. We leave this exploration to future work.
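For reference, a generic discrete-time DDPM ancestral sampling loop conditioned on the RGB image looks like the sketch below (e.g. with 128 steps). The discrete \(\beta\) schedule and the \(\sigma_t^2=\beta_t\) choice are assumptions; the paper's actual sampler and noise schedule may be parameterized differently.

```python
import torch

@torch.no_grad()
def sample_depth(f_theta, x_rgb, shape, betas):
    """Generic DDPM ancestral sampler conditioned on an RGB image.

    betas : 1-D tensor of per-step noise variances (e.g. 128 entries).
    """
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    y = torch.randn(shape, device=x_rgb.device)              # y_1 ~ N(0, I)
    for t in reversed(range(len(betas))):
        t_batch = torch.full((shape[0],), t, device=y.device)
        eps = f_theta(x_rgb, y, t_batch)                      # predict the noise
        mean = (y - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) \
               / torch.sqrt(alphas[t])
        noise = torch.randn_like(y) if t > 0 else torch.zeros_like(y)
        y = mean + torch.sqrt(betas[t]) * noise               # ancestral step
    return y                                                  # normalized depth in [-1, 1]
```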
### Evaluation metrics
We follow the standard evaluation protocol used in prior work (Li et al., 2022). For both the NYU depth v2 and KITTI datasets we report the absolute relative error (REL), root mean squared error (RMS) and accuracy metrics (\(\delta_{i}<1.25^{i}\) for \(i\in 1,2,3\)). For NYU we additionally report absolute error of log depths (\(log_{10}\)). For KITTI we additionally report the squared relative error (SQ-rel) and root mean squared error of log depths (RMS log).
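These metrics have standard closed forms; the following NumPy sketch computes them over valid ground-truth pixels, assuming metric depth maps that have already been resized and cropped as described.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular-depth metrics over valid (gt > 0) pixels."""
    mask = gt > 0
    pred, gt = pred[mask], gt[mask]
    thresh = np.maximum(pred / gt, gt / pred)
    return {
        "REL":   float(np.mean(np.abs(pred - gt) / gt)),
        "RMS":   float(np.sqrt(np.mean((pred - gt) ** 2))),
        "log10": float(np.mean(np.abs(np.log10(pred) - np.log10(gt)))),
        "d1":    float(np.mean(thresh < 1.25)),
        "d2":    float(np.mean(thresh < 1.25 ** 2)),
        "d3":    float(np.mean(thresh < 1.25 ** 3)),
    }
```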
### Results
Table 1 shows the results on NYU depth v2. We achieve a state-of-the-art absolute relative error of 0.074. Table 2 shows results on KITTI, where we perform competitively with prior work. We report results with averaging depth maps from one or more samples. Note that most prior
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Method & Architecture & \(\delta_{1}\uparrow\) & \(\delta_{2}\uparrow\) & \(\delta_{3}\uparrow\) & REL\(\downarrow\) & RMS\(\downarrow\) & \(log_{10}\downarrow\) \\ \hline DORN [1] & ResNet-101\({}^{\dagger}\) & 0.828 & 0.965 & 0.992 & 0.115 & 0.509 & 0.051 \\ VNL [2] & ResNeXt-101\({}^{\dagger}\) & 0.875 & 0.976 & 0.994 & 0.108 & 0.416 & 0.048 \\ BTS [3] & DenseNet-161\({}^{\dagger}\) & 0.885 & 0.978 & 0.994 & 0.110 & 0.392 & 0.047 \\ DAV [4] & DRN-D-22\({}^{\dagger}\) & 0.882 & 0.980 & 0.996 & 0.108 & 0.412 & – \\ TransDepth [5] & Res-50+ViT-B\({}^{\dagger}\) & 0.900 & 0.983 & 0.996 & 0.106 & 0.365 & 0.045 \\ DPT [6] & Res-50+ViT-B\({}^{\dagger\dagger}\) & 0.904 & 0.988 & 0.998 & 0.110 & 0.357 & 0.045 \\ AdaBins [7] & E-B5+Mini-ViT\({}^{\dagger}\) & 0.903 & 0.984 & _0.997_ & 0.103 & 0.364 & 0.044 \\ BinsFormer [8] & Swin-Large\({}^{\dagger}\) & 0.925 & 0.989 & _0.997_ & 0.094 & 0.330 & 0.040 \\ PixelFormer [9] & Swin-Large\({}^{\dagger}\) & 0.929 & _0.991_ & 0.998 & 0.090 & 0.322 & 0.039 \\ MIM [10] & SwinV2-L\({}^{\top}\) & 0.949 & **0.994** & **0.999** & 0.083 & 0.287 & _0.035_ \\ AiT-P [11]\({}^{*}\) & SwinV2-L\({}^{\top}\) & **0.953** & 0.993 & **0.999** & _0.076_ & **0.279** & 0.033 \\ \hline DepthGen (ours) & & & & & & & \\ samples=1 & Efficient U-Net\({}^{\top\ddagger}\) & 0.944 & 0.986 & 0.995 & 0.075 & 0.324 & **0.032** \\ samples=2 & Efficient U-Net\({}^{\top\ddagger}\) & 0.944 & 0.987 & 0.996 & **0.074** & 0.319 & **0.032** \\ samples=4 & Efficient U-Net\({}^{\top\ddagger}\) & _0.946_ & 0.987 & 0.996 & **0.074** & 0.315 & **0.032** \\ samples=8 & Efficient U-Net\({}^{\top\ddagger}\) & _0.946_ & 0.987 & 0.996 & **0.074** & _0.314_ & **0.032** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of performances on the NYU-Depth-v2 dataset. \(\top\) indicates methods that use unsupervised pretraining, \(\dagger\)indicates supervised pretraining and \(\ddagger\) indicates methods with supervised depth pretraining on auxiliary data. **Best** / second best / _third best_ results are bolded / underlined / italicized respectively. \(\downarrow\) denotes lower is better and \(\uparrow\) denotes higher is better. Baselines: [1] Fu et al. (2018), [2] Yin et al. (2019), [3] Lee et al. (2019), [4] Huynh et al. (2020), [5] Zhao et al. (2021), [6] Ranftl et al. (2021), [7] Bhat et al. (2021), [8] Li et al. (2022), [9] Agarwal and Arora (2022), [10] Xie et al. (2022), [11] Ning et al. (2023). \({}^{*}\) denotes concurrent work.
works report average over two samples obtained by left-right reflection of the input image.
### Ablations
We find that both pre-training and accounting for missing depth are crucial to model performance. Table 3 shows that both self-supervised pre-training and supervised depth pre-training are important, with supervised depth training having a larger impact, which is to be expected. Table 4 shows that depth infilling is extremely important for the outdoor KITTI dataset. It has less impact on NYU, which is understandable since KITTI has sparser depth maps. In the absence of filled depth maps, step-unrolled denoising diffusion dramatically improves results especially on KITTI. Even with depth infilling, SUD consistently improves performance for both indoor and outdoor datasets. Additionally, we ablate the choice of loss function in Table 5. We find that the \(L_{1}\) loss yields better performance than \(L_{2}\), likely because \(L_{1}\) is more robust to noise at larger depths. (See Appendix for metrics other than absolute relative error.)
Figure 3: Multimodal depth predictions on the KITTI val dataset.
Figure 2: Examples of multimodal predictions on the NYU Depth V2 val dataset. Rows 1-2 contain glass doors/windows where the model learns to predict the depth for either the glass surface or the surface behind it. Row 3 has a dark area next to the refrigerator for which the depth is unclear from RGB alone. In row 4 the model hallucinates the reflected door as a bath cabinet, which seems plausible from the RGB image.
### Novel View Synthesis
One advantage of diffusion models is the ease with which one can zero-shot impute one part of an image (or depth map) conditioned on the rest of the image (or depth map). Here, we leverage this to build a limited but effective text-to-3D scene generation pipeline. As depicted in Figure 5, we use the Imagen text-to-image model to generate an image, given text \(c\), to which we apply DepthGen (zero-shot) to sample a corresponding depth map. We then move the camera and, following (Liu et al., 2021), render the RGBD point cloud from a new camera pose. Of course this only provides RGB and depth values at a subset of pixels in the new frame since the fields of view are different. Fortunately, the missing pixels are easily inferred using diffusion models (i.e., the Imagen Editor (Wang et al., 2022) and DepthGen).
Let \(x_{a}\) and \(y_{a}\) be the RGB and depth values at pixel locations rendered from the new camera pose respectively, and let \(x_{b}\) and \(y_{b}\) correspond to lines of sight not visible in the original frame. We first infer the missing RGB values, i.e., \(p(x_{b}|x_{a},c)\), using the uncropping/inpainting capability of the Imagen Editor. We then use DepthGen to impute the missing depth values, i.e., sampling from \(p(y_{b}|y_{a},[x_{a},x_{b}])\). There are several effective solutions to imputation with diffusion models, including the replacement method in (Song et al., 2021), and the more sophisticated use of reconstruction guidance in (Ho et al., 2022). For simplicity we use the replacement method to sample the unknown depths \(y_{b}\) conditioned on existing depths \(y_{a}\) and the image \(x=[x_{a},x_{b}]\).
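The replacement method amounts to overwriting, at every denoising step, the known region of the latent with a freshly noised copy of the observed depths at the current noise level. A minimal sketch of that per-step operation is shown below; how it slots into the full sampling loop, and the exact noise-level parameterization, are assumptions here.

```python
import torch

def replace_known(y_t, y_known, known_mask, gamma_t):
    """One replacement-style imputation step (Song et al., 2021):
    re-noise the observed depths y_known to the current level gamma_t
    and write them into the latent y_t wherever known_mask is True."""
    g = torch.as_tensor(gamma_t, dtype=y_t.dtype, device=y_t.device)
    eps = torch.randn_like(y_known)
    y_known_t = torch.sqrt(g) * y_known + torch.sqrt(1.0 - g) * eps
    return torch.where(known_mask, y_known_t, y_t)
```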
## 5 Conclusion
We propose a novel approach to monocular depth estimation using denoising diffusion models. We leverage self-supervised image-to-image pre-training, followed by subsequent training on supervised depth data to achieve SOTA results on challenging depth estimation benchmarks. We make several innovations that make it possible to effectively train diffusion models on imperfect training data that are commonplace for depth estimation. We demonstrate the multimodal capability of diffusion models to represent depth uncertainty. And we exploit the ease of imputation during iterative refinement in diffusion models to show how DepthGen can be used for zero-shot depth completion. In combination with text-to-image diffusion models, this enables a simple pipeline for novel view synthesis and text-to-3D.
Figure 4: Text to 3D samples. Given a text prompt, an image is first generated using Imagen (first row of first column), after which depth is estimated (second row of first column). Subsequently the camera is moved to reveal new parts of the scene, which are then infilled using an image completion model and DepthGen (which conditions on both the incomplete depth map and the filled image). At each step, newly generated RGBD points are added to a global point cloud which is visualized in the rightmost column. See Fig. 6 for more samples.
Figure 5: Pipeline for iteratively generating a 3D scene conditioned on text \(c=A\ bedroom\). See text for details. |
2307.00065 | Qualitative Prediction of Multi-Agent Spatial Interactions | Deploying service robots in our daily life, whether in restaurants,
warehouses or hospitals, calls for the need to reason on the interactions
happening in dense and dynamic scenes. In this paper, we present and benchmark
three new approaches to model and predict multi-agent interactions in dense
scenes, including the use of an intuitive qualitative representation. The
proposed solutions take into account static and dynamic context to predict
individual interactions. They exploit an input- and a temporal-attention
mechanism, and are tested on medium and long-term time horizons. The first two
approaches integrate different relations from the so-called Qualitative
Trajectory Calculus (QTC) within a state-of-the-art deep neural network to
create a symbol-driven neural architecture for predicting spatial interactions.
The third approach implements a purely data-driven network for motion
prediction, the output of which is post-processed to predict QTC spatial
interactions. Experimental results on a popular robot dataset of challenging
crowded scenarios show that the purely data-driven prediction approach
generally outperforms the other two. The three approaches were further
evaluated on a different but related human scenarios to assess their
generalisation capability. | Sariah Mghames, Luca Castri, Marc Hanheide, Nicola Bellotto | 2023-06-30T18:08:25Z | http://arxiv.org/abs/2307.00065v1 | # Qualitative Prediction of Multi-Agent Spatial Interactions
###### Abstract
Deploying service robots in our daily life, whether in restaurants, warehouses or hospitals, calls for the need to reason on the interactions happening in dense and dynamic scenes. In this paper, we present and benchmark three new approaches to model and predict multi-agent interactions in dense scenes, including the use of an intuitive qualitative representation. The proposed solutions take into account static and dynamic context to predict individual interactions. They exploit an input- and a temporal-attention mechanism, and are tested on medium and long-term time horizons. The first two approaches integrate different relations from the so-called Qualitative Trajectory Calculus (QTC) within a state-of-the-art deep neural network to create a symbol-driven neural architecture for predicting spatial interactions. The third approach implements a purely data-driven network for motion prediction, the output of which is post-processed to predict QTC spatial interactions. Experimental results on a popular robot dataset of challenging crowded scenarios show that the purely data-driven prediction approach generally outperforms the other two. The three approaches were further evaluated on a different but related human scenarios to assess their generalisation capability.
## I Introduction
While service robots are increasingly being deployed in our domestic, healthcare, warehouse, and transportation environments, modeling and predicting the _interactions_ of different agents given their context (i.e. nearby static and dynamic objects, including people) are important requirements for effective human-robot co-existence and intent communication. They can help robots know when, where and how to intervene on the environment, in addition to navigate it safely which is usually accomplished with the classic human motion prediction paradigm. For example, an assistive robot patrolling someone's home or a crowded hospital needs to reason continuously on the relative state of nearby people for approaching and communicating with them, ideally predicting future human (spatial) interactions to optimize its own decision-making.
Differently from typical human motion prediction, which is mostly concerned with navigation safety, dealing with multi-agent _interactions_ presents two advantages: from a "social" navigation point of view, interaction prediction facilitates human-robot motion coordination and intent communication; from an "explainable" decision-making point of view, interaction prediction makes an individual's motion behaviour more meaningful in many social contexts. For example, by detecting a group meeting in an office (as in Fig.1), and predicting it will last for a while, the robot does not disturb the people involved. On the other hand, if the robot predicts that an elderly patient is trying to approach it to ask for help, it can use this information to update its initial plan and prioritize the responsiveness to the human's intent.
An intuitive approach to representing multi-agent spatial interactions is the qualitative one. In the 2D and 3D spatial domains (e.g. navigation and manipulation, respectively), a qualitative interaction between pairs of agents or body points can be captured by some symbolic representation. One way to model qualitative spatial interactions is by using a qualitative trajectory calculus (QTC) [1]. QTC-based models of moving agent pairs can be described by different combinations of QTC symbols that represent spatial relations, e.g. relative distance (moving towards/away), velocity (faster/slower), or orientation (to the left/right) with respect to the central axis joining both agents. QTC-based interaction modeling was presented in [2, 3, 4] for modeling 2D human-robot spatial interactions, with further application to human-aware robot navigation [5, 6]. Differently from the focus of this paper, the authors in [6] used a Bayesian temporal model to study the interaction of a single pair of agents (human-robot), without accounting for the dynamic and/or static context, which limits the prediction performance. An alternative way of representing spatial interactions in a multi-agent scenario is to quantitatively merge all agents in the context to drive a robot navigation stack [7, 8, 9]. These works, though, cannot infer the implicit spatial intent of the agents.
Fig. 1: Modeling and predicting multi-agent interactions can improve the decision-making process of social and service robots for helping in heavy and/or health-related tasks, anticipating elders' assistance needs, cafe/restaurant table serving, office duties, etc., while avoiding conversational groups or ordering queues (unless called for it).
To the best of our knowledge, there is a gap in the literature regarding the prediction of qualitative (i.e. symbolic) and/or quantitative (i.e. metrical) interactions between multi-agent entities (e.g. human-human, human-robot, human-object, robot-object) given their nearby dynamic and/or static context, which was only partly addressed in [6] for a single human-robot pair. Further investigation in more complex scenarios is therefore necessary to enhance future robot reasoning, mutual intent communication, and reactive or predictive planning.
The contribution of this paper is therefore two-fold: (i) addressing the prediction of Multi-Agent Spatial Interactions (MASI) with dynamic and static context-awareness by implementing three new approaches for medium and long-term interactions predictions, including a QTC-based neural network representation; (ii) experimentally evaluating the proposed frameworks on different scenarios with multiple humans/objects to assess the prediction performance, even under domain-shift conditions.
The remainder of the paper is as follows: Sec. II presents an overview of the related work; Sec. III explains the approach adopted to model and predict spatial interactions in dense scenes; Sec. IV illustrates and discusses the results from experiments conducted on a public dataset for social robot navigation; finally, Sec. V concludes by summarising the main outcomes and suggesting future research work.
## II Related Work
**Human-human interactions modeling:** Two methods have been presented in the literature for modeling interactions with nearby dynamic agents: (i) one-to-one modeling, and (ii) crowd modeling. One-to-one interaction modeling between a human-robot pair was presented in [4] in the form of a qualitative representation, encoding a sequence of QTC states in a Markov Chain. Human-human interactions modeling was also addressed in [7] for social navigation, where interactions with neighbors are embedded in a multi-layer perceptron by using local maps centered at each person. On the other hand, crowd modeling was discussed in [9], where the major existing hybrid techniques were surveyed. Hybrid crowd techniques are brought forward to overcome some limitations of classical methods (e.g. high computation cost). For crowd analysis, F-formation modeling and detection has been addressed recently in [10], where the authors deconstructed a social scene into pairwise data points and then used feature-based classification to distinguish F-formations. In this work, we do not limit our approach to F-formations only. Hence, we build on previous works on HRSI modeling for a single pair of agents [6], taking inspiration from the hybrid approaches of crowd modeling, in order to predict multi-agent interactions in dense scenes.
**Context-aware human motion prediction:** While the problem of context-aware human motion prediction has been extensively addressed in the literature, to the best of our knowledge, the problem of context-aware multi-agent interaction prediction in dense environments (such as social ones) has been mostly neglected. State-of-the-art works vary in the context they exploit: no context-awareness, static-only context, dynamic-only context [11, 12, 13], or both static and dynamic context [14, 15].
Architectures such as Social-LSTM [11] and SGAN [12] capture spatial interactions only. Also, the authors adopt an LSTM encoding for each agent that cannot account for static objects embedding in the neighborhood. The Stgat architecture in [13] accounts for dynamic agents only and the use of dual LSTM modules in the encoder limits the ease of direct integration of static objects representations. As per [14], the DSCMP architecture outperforms S-LSTM, SGAN and Stgat in terms of parameters and time consumption. In DSCMP, both static and dynamic contexts are incorporated together with spatial and temporal dependencies between agents. In that work, the static context is embedded in a latent space through a convolutional semantic map of the whole scene. The work in [15], instead, addresses the problem of action prediction together with motion prediction, by using person-whole scene interaction embedding (leveraging the semantic scene convolutional map) together with explicitly encoding the interaction between person and surrounding objects (person, objects) into a geometric relationship.
In this paper, we take inspiration from [14] and [15] to develop a dynamic and static context-aware predictor of spatial interactions, but we limit our current study to a single data type as input to the network architecture used for experimentation. We choose raw coordinates as the sole upstream data type, commonly used to represent the motion of dynamic agents, leaving the exploitation of semantic map representations of the scene (fully or partially) for future work. Hence, we embed only the raw coordinates of key features (static objects of use) that characterise the social scene, since, according to [16], humans in social scenes interact not only with one another, but also with machines that are meaningful.
## III MASI Prediction Framework
### _Problem Statement_
While metrical motion prediction of nearby agents allows robots to locally replan their target destination for safe navigation, it does not provide the robot with enough intelligence to reason on the implicit intent a person may convey through motion (e.g. a person may speed up at a room entrance, such as a patient's room, to convey to the robot an urgent need to enter first). This problem can be addressed by embedding the robot with a reasoning paradigm (modeling and prediction) over multi-agent spatial interactions, allowing it to take reactive or predictive optimal decisions on whether to intervene in its surroundings.
### _Qualitative Spatial Interactions_
A qualitative spatial interaction is represented by a vector of \(m\) QTC symbols (\(q_{i}\), \(i\in\mathbb{Z}\)) in the domain \(D=\{-,0,+\}\)[1]. Among the different types of QTC
representations, we exploit the double-cross \(QTC_{C}\) which employs away/towards, left/right, relative speed, and relative angle dichotomies, as illustrated in Fig. 2. Two types of \(QTC_{C}\) were proposed in the literature, the \(QTC_{C_{1}}\) with four symbols \(\{q_{i},i=1..4\}\), and the \(QTC_{C_{2}}\) with six symbols \(\{q_{i},i=1..6\}\). The symbols \(q_{1}\) and \(q_{2}\) represent the towards/away (relative) motion between a pair of agents; \(q_{3}\) and \(q_{4}\) represent the left/right relation; \(q_{5}\) indicates the relative speed, faster or slower; finally, \(q_{6}\) depends on the (absolute) angle with respect to the reference line joining a pair of agents. The qualitative interaction between the time series of two moving points, \(P_{r}\) and \(P_{h}\), is expressed by the following \(q_{i}\) symbols:
\[\begin{aligned}
(q_{1})\quad & -:\; d(P_{r}|t^{-},\,P_{h}|t) > d(P_{r}|t,\,P_{h}|t)\\
& +:\; d(P_{r}|t^{-},\,P_{h}|t) < d(P_{r}|t,\,P_{h}|t);\qquad 0:\;\text{all other cases}\\
(q_{3})\quad & -:\; \big|\,\overrightarrow{P_{r}^{-}P_{r}}\;\;\overrightarrow{P_{r}^{-}P_{h}^{-}}\,\big| < 0\\
& +:\; \big|\,\overrightarrow{P_{r}^{-}P_{r}}\;\;\overrightarrow{P_{r}^{-}P_{h}^{-}}\,\big| > 0;\qquad 0:\;\text{all other cases}\\
(q_{5})\quad & -:\; \|\vec{v}_{r}\| < \|\vec{v}_{h}\|\\
& +:\; \|\vec{v}_{r}\| > \|\vec{v}_{h}\|;\qquad 0:\;\text{all other cases}\\
(q_{6})\quad & -:\; \theta(\vec{v}_{r},\,\overrightarrow{P_{r}P_{h}}) < \theta(\vec{v}_{h},\,\overrightarrow{P_{r}P_{h}})\\
& +:\; \theta(\vec{v}_{r},\,\overrightarrow{P_{r}P_{h}}) > \theta(\vec{v}_{h},\,\overrightarrow{P_{r}P_{h}});\qquad 0:\;\text{all other cases}
\end{aligned}\]
(\(q_{2}\)) and (\(q_{4}\)) are similar to (\(q_{1}\)) and (\(q_{3}\)), respectively, but swapping \(P_{r}\) and \(P_{h}\). Here \(d(\cdot)\) is the Euclidean distance between two positions, \(\theta(\cdot)\) is the absolute angle between two vectors, \(|\,\cdot\;\cdot\,|\) denotes the determinant (signed area) of the two column vectors, \(\vec{v}_{r}\) and \(\vec{v}_{h}\) are the velocity vectors of the two agents, \(t^{-}\) denotes a single previous time step, and \(P^{-}\) is the position of a point at time \(t^{-}\).
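For illustration, the following minimal Python sketch (not the implementation used in our experiments; the sign convention chosen for the left/right symbols is a free assumption) computes the four \(QTC_{C_{1}}\) symbols for a single time step from the previous and current 2D positions of two agents.

```python
import numpy as np

def qtc_c1_step(pr_prev, pr, ph_prev, ph, eps=1e-9):
    """Compute the QTC_C1 vector (q1, q2, q3, q4) for one time step.

    pr_prev, pr: previous and current 2D positions of agent r;
    ph_prev, ph: previous and current 2D positions of agent h.
    Symbols are encoded as -1, 0, +1.
    """
    pr_prev, pr, ph_prev, ph = map(np.asarray, (pr_prev, pr, ph_prev, ph))

    def towards_away(a_prev, a, b):
        # -1: a moves towards b, +1: a moves away from b, 0: otherwise
        d_prev = np.linalg.norm(a_prev - b)
        d_curr = np.linalg.norm(a - b)
        if d_prev > d_curr + eps:
            return -1
        if d_prev < d_curr - eps:
            return +1
        return 0

    def left_right(a_prev, a, b_prev):
        # Sign of the 2D cross product between a's displacement and the
        # reference direction towards b: -1 one side, +1 the other, 0 on the line.
        move = a - a_prev
        ref = b_prev - a_prev
        cross = move[0] * ref[1] - move[1] * ref[0]
        if cross > eps:
            return -1
        if cross < -eps:
            return +1
        return 0

    q1 = towards_away(pr_prev, pr, ph)      # r towards/away from h
    q2 = towards_away(ph_prev, ph, pr)      # h towards/away from r
    q3 = left_right(pr_prev, pr, ph_prev)   # side taken by r
    q4 = left_right(ph_prev, ph, pr_prev)   # side taken by h
    return q1, q2, q3, q4
```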
In this paper, we propose a framework (\(F\)) for spatial interaction prediction and compare three possible implementations: two symbol-driven neural approaches, denoted by \(F^{QTC-4}\) and \(F^{QTC-6}\), where both input and output of the neural network are QTC symbols; a third approach, denoted by \(F^{ts}\), where the inputs are raw trajectories and the outputs are QTC symbols. In particular, \(F^{QTC-4}\) and \(F^{QTC-6}\) exploit \(QTC_{C_{1}}\) and \(QTC_{C_{2}}\), respectively, to directly predict QTC vectors with a time horizon \(T_{f}\), while \(F^{ts}\) extracts QTC vectors from the coordinates generated by a purely data-driven motion prediction architecture over \(T_{f}\). The main difference between the two symbol-driven frameworks is that \(F^{QTC-4}\) assigns a greater importance to the prediction of left/right and towards/away dichotomies, neglecting the relative velocity and angle embedded in \(F^{QTC-6}\).
### _Network Architecture_
In order to narrow down the study, we limit the network upstream input to raw coordinates of body points (i.e. dynamic agents and static key objects in the environment), which are then converted to QTC input for the evaluation of the \(F^{QTC-4}\) and \(F^{QTC-6}\) frameworks. Among the several network architectures developed in the literature for human motion prediction, some give no consideration to the static context [12], while others embed the static context as a semantic map input to the network [14, 15]. Though these architectures can serve as a tool for our current study, in this paper we adopt the network architecture in [17] as the starting point to implement our interaction prediction framework (F) for the prediction of qualitative spatial interactions. The architecture in [17] takes as input only time series of raw coordinates, which are processed through embedding, attention, and LSTM layers, respectively, as can be seen from the encoder and decoder in Fig. 3. This architecture alleviates the need for a separate network for static context embedding, such as a CNN feature extractor applied to the semantic scene image. The architecture in [17] also allows the incorporation of both spatial and temporal dependencies of interactions. It is worth stressing that other state-of-the-art architectures for context-aware human motion prediction could serve the purpose of this benchmark study, and this will be targeted in our future work for performance generalisation. In order to implement \(F^{QTC-4}\) and \(F^{QTC-6}\), we modified the original architecture to deal with time series of categorical data, representing symbolic knowledge of the spatial interactions between pairs of agents. We also extended the prediction horizon to medium (i.e. 48 time steps, or 3.2 s) and longer (i.e. 72 time steps, or 4.8 s) time horizons. The parameters for medium and long time horizon prediction were chosen based on the relevant literature on human motion prediction [12, 14]. The input attention encoder of the network in Fig. 3 consists of an input attention layer (I-Attention) which weighs \(n^{*}\) spatial interactions in a radial cluster. The encoder is then followed by a decoder with a temporal attention layer (T-Attention), capturing the temporal dependencies in multi-agent interactions. The network encodes \(n^{*}\) input series (denoted by \(\mathbf{x}\)), each of length \(T_{h}\), and decodes \(n^{*}\) output labels (denoted by \(\mathbf{y}\)), each of length \(T_{f}\), where \(T_{f}\) is the predictive time horizon and \(T_{h}\) is the time history used for the temporal attention. For our categorical data, we minimize a sparse (categorical) cross-entropy loss function between the true and predicted QTC vector indices, extracted from a dictionary of 444 possible \(QTC_{C_{2}}\) vectors for \(F^{QTC-6}\), and 82 possible \(QTC_{C_{1}}\) vectors for \(F^{QTC-4}\). Both dictionaries include an additional index for an "impossible" QTC vector, in which all the QTC relations \(q_{i}\) assume a value \(\notin D\), chosen to be 10. The impossible QTC vector accounts for the case of an agent leaving the cluster at time \(t\), or for complementary "fake" interactions added to each cluster to make it of fixed \(n^{*}\) size. The reader can refer to [17] for a detailed explanation of the network components, which are schematically illustrated in Fig. 3.
Fig. 2: A case of \(QTC_{C_{1}}\) representation of interactions between three body points \(P_{h1}\), \(P_{h2}\), and \(P_{r}\). For example, the QTC interaction represented by \((-,-,+,-)\) implies that agent \(h_{1}\) and robot ’r’ are moving towards each other, with ’r’ moving to the right side of \(h_{1}\), while \(h_{1}\) moves to the left side of ’r’.
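For illustration, such a dictionary can be built as follows (a simplified sketch: all \(3^{m}\) symbol combinations are enumerated here, whereas our dictionaries keep only the feasible vectors, i.e. 82 for \(QTC_{C_{1}}\) and 444 for \(QTC_{C_{2}}\), including the impossible one).

```python
from itertools import product

def build_qtc_dictionary(m, impossible_value=10):
    """Map QTC vectors (tuples of -1/0/+1 of length m) to integer class
    indices for the sparse categorical cross-entropy loss. The last index
    is reserved for the "impossible" vector used for absent agents/padding."""
    vectors = list(product((-1, 0, 1), repeat=m))
    vectors.append((impossible_value,) * m)
    return {vec: idx for idx, vec in enumerate(vectors)}

qtc_c1_dict = build_qtc_dictionary(4)  # used by F^QTC-4
qtc_c2_dict = build_qtc_dictionary(6)  # used by F^QTC-6
```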
### _Data Processing_
Social scenarios have often an unpredictable number of people entering and leaving the environment, possibly leading to a combinatorial explosion in the input size of the predictive model and in its number of training parameters. In order to approach the problem of reasoning in socially crowded environments, we implement a crowd clustering approach for local interactions prediction. The advantage of this approach is that all the clusters have a fixed micro-size (i.e maximum number of agents entering the cluster at any given time) and it accounts for the agents entering and leaving the cluster. We applied the radial clustering approach on the JackRabbot [1] open-source dataset, which provides multi-sensor data of human behaviours from a mobile robot in populated indoor and outdoor environments (Fig. 4). We make use of the open-source annotated 3D point clouds, provided as metric coordinates of humans (dynamic context) bounding boxes centroid, and extracted from the upper velodyne sensor, as raw data and ground truth to our network architecture. The raw data are further processed to extract QTC representations of a spatial interaction between each pair of agents, whose dictionary index is then used as ground truth output for \(F^{QTC-4}\) and \(F^{QTC-6}\) approaches. In parallel, the raw metric data are directly used as ground truth labels for the \(F^{ts}\) approach. The environments considered in JRDB are fairly crowded. Among them, we selected a cafe shop (_bytes-cafe-2019-02-07.0_) for comparing the proposed prediction approaches, and two poster session scenarios (_packard-poster-session-2019-03-20.2_, denoted PS-2, and _packard-poster-session-2019-03-20.1_, denoted PS-1) for testing the framework on a domain-shift situation. In the cafe scenario, the static context includes objects such as bar order and check-out points, exit door, drinking water station, as illustrated in Fig. 4 (top). These objects were manually selected based on our careful investigation to identify the most common ones used by people in the scenario, although in the future we plan to learn them automatically in order to adapt to different environments. The spatial coordinates of the selected objects, extracted from the scene point cloud, are incorporated in the network architecture as any other dynamic agent.
For each agent \(i\) in a given scene, we generate a cluster with a fixed interaction radius \(R_{1}=1.2m\). The latter is selected based on the proxemics' literature [18], where the social distance for interactions among acquaintances is indeed between \(1.2m\) (short phase) and \(3.7\)m (long phase). Each cluster includes \(n\) input series, with \(n\) being the maximum number of agents entering the cluster of agent \(i\) in a time interval \(T\). Each input series is defined as a series of spatial interactions between agents \(r\) and \(h\), where \(h\) is every other dynamic agent/static object in the cluster of \(r\). The maximum number of input series among all clusters, \(n^{*}\), is fixed for practical (training) purposes. Each cluster is then post-processed to include \((n^{*}-n)\) input series with complementary "fake" values.
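The following sketch illustrates the clustering and padding step for a single frame (for illustration only; the fixed cluster size \(n^{*}=8\) and the padding value are placeholder assumptions).

```python
import numpy as np

def build_cluster(positions, agent_id, radius=1.2, n_star=8, fake_value=10.0):
    """Collect the neighbours of `agent_id` (humans and selected static objects
    alike) within `radius` metres, and pad the cluster with "fake" entries so
    that it always contains n_star series.

    positions: dict mapping an agent/object id to its (x, y) coordinates
    for the current frame."""
    centre = np.asarray(positions[agent_id], dtype=float)
    neighbours = [
        np.asarray(p, dtype=float)
        for other_id, p in positions.items()
        if other_id != agent_id
        and np.linalg.norm(np.asarray(p, dtype=float) - centre) <= radius
    ]
    while len(neighbours) < n_star:          # complementary "fake" series
        neighbours.append(np.full(2, fake_value))
    return np.stack(neighbours[:n_star])
```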
Spatial interactions are formulated in terms of categorical (i.e. QTC) data, hence two dictionaries of all possible qualitative interactions in 2D are generated based on \(QTC_{C_{2}}\) and \(QTC_{C_{1}}\) for the approaches \(F^{QTC-6}\) and \(F^{QTC-4}\), respectively. The input to our network are now \(n^{*}\) series of indices over the time history \(T_{h}\). For both the cafe and the poster sessions scenarios, we evaluated the prediction performance for a medium (\(T_{f}=3.2s\)) and a longer term (\(T_{f}=4.8s\)) horizons.
## IV Experiments
The three proposed framework configurations implement the same architecture as in Fig. 3, but they were trained with different losses, since the input data is different. \(F^{QTC-4}\) and \(F^{QTC-6}\) were trained by minimising a categorical cross-entropy loss function over 120 epochs using the Adam optimiser, with \(T_{h}=10\) time steps (i.e. \(0.67\) s, much shorter than in other works such as [12, 13]) and a batch size \(B=10\) as hyper-parameters, while \(F^{ts}\) was trained using the root mean square error (RMSE) loss function over 80 epochs using the Adam optimiser, with \(T_{h}=5\) time steps and \(B=5\). Other hyper-parameters common to the three network configurations are the hidden state size \(h=256\) for both the encoder and decoder, and a learning rate \(l_{r}=0.001\). The hyper-parameters were tuned to reach a good validation loss. The input consists of 63,566 samples for the cafe scene with \(F^{QTC-4}\) and \(F^{QTC-6}\), and 46,548 with \(F^{ts}\); 109,040 samples for PS-2 with \(F^{QTC-4}\) and \(F^{QTC-6}\), and 110,889 with \(F^{ts}\), whereas PS-1 has 69,126 samples in all three frameworks. The size of the input dataset is the same for both the medium and longer term \(T_{f}\), and it is divided into 80% training, 10% validation, and 10% testing sets. All three frameworks were trained on a computing system consisting of an Intel Core i7-6850K processor @ 3.6 GHz and an NVIDIA GeForce GTX 1080 Ti 11GB GPU.
Fig. 3: An input-temporal attention mechanism for predicting spatial interactions of multi-dimensional input categorical (red) and metrical (yellow) time series extracted from dense scenes: application to the JackRabbot dataset. The diagram is extended from [17]. **x** is the input driving vector, **y** is the label vector.
Fig. 4: JackRabbot dataset scenes: (top) _bytes-cafe-2019-02-07.0_, (bottom) _packard-poster-session-2019-03-20.2_.
Since the three proposed approaches for spatial interaction prediction are trained with different loss functions, in order to compare their performance we use the so-called "conceptual QTC distance" [1], defined as a measure of the closeness of QTC relations. Specifically, the conceptual distance between 0 and another symbol, \(\{+\}\) or \(\{-\}\), is assumed to be "\(+1\)", while the conceptual distance between \(\{+\}\) and \(\{-\}\) is "\(+2\)". The overall conceptual distance between two QTC vectors is calculated by summing the conceptual distance over all their relation symbols. For example, suppose \(QTC^{t}\) and \(QTC^{p}\) are two QTC vectors, where \(t\) and \(p\) refer to the true and predicted QTC vectors, respectively. Then, the conceptual QTC distance is calculated as:
\[d_{QTC}=d_{QTC^{t}}^{QTC^{p}}=\sum_{q_{i}}\big|\,q_{i}^{QTC^{t}}-q_{i}^{QTC^{p}}\,\big|, \tag{1}\]
where \(q_{i}\) is one of the symbols defined in Sec. III-B.
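With the symbols encoded as \(-1\), \(0\), \(+1\), Eq. (1) reduces to a few lines of Python (shown only for clarity; this is not the evaluation script used in the experiments):

```python
def conceptual_qtc_distance(qtc_true, qtc_pred):
    """Conceptual distance of Eq. (1) between two QTC vectors whose symbols
    are encoded as -1, 0, +1."""
    return sum(abs(qt - qp) for qt, qp in zip(qtc_true, qtc_pred))

# Example: (-1, 0, +1, 0) vs (+1, 0, +1, -1) -> 2 + 0 + 0 + 1 = 3
assert conceptual_qtc_distance((-1, 0, 1, 0), (1, 0, 1, -1)) == 3
```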
### _Testing Evaluation_
In Table I, we report the results on the 10% test set of the cafe scene, with a cluster radius of \(R_{1}=1.2\) m, in terms of the normalised mean (\(\mu\)) and standard deviation (\(\sigma\)) of \(d_{QTC}\). The normalisation is done over the labels, \(T_{f}\), and \(B\). We note that the range of \(d_{QTC}\) is approximately \(\mathcal{R}=\{0-40\}\) for \(F^{QTC-4}\), and \(\{0-60\}\) for \(F^{QTC-6}\). The maximum value of \(\mathcal{R}\) accounts for the inability of QTC to represent missing agents in the radial cluster. On the test set, \(F^{QTC-6}\) significantly outperforms \(F^{QTC-4}\) over the medium and longer time horizons; however, \(F^{ts}\) has the best performance among the three configurations over both time horizons. Also, \(F^{ts}\) (motion prediction) with \(QTC_{C_{1}}\) post-processing (denoted \(F^{ts,1}\)) for interaction prediction or analysis performs better on the medium term, while \(F^{ts}\) with \(QTC_{C_{2}}\) (\(F^{ts,2}\)) performs best on the longer term. Overall, \(F^{ts,1}\) and \(F^{ts,2}\) outperform \(F^{QTC-6}\), with \(F^{ts,1}\) achieving 73.05% and 81.27% reductions in \(\mu(d_{QTC})\), and 93.8% and 96.68% reductions in \(\sigma(d_{QTC})\), over the medium- and long-term predictions, respectively. From these observations we can conclude that predictive networks perform better on non-symbolic data than on their symbolic counterpart when applied to crowded human environments. We report for \(F^{ts,1}\) a training and validation time of 6.6 hrs and 8.3 hrs, while the evaluation time is 5.8 ms and 9.3 ms over the 3.2 s and 4.8 s prediction horizons, respectively. In order to evaluate the effect of cluster radius selection, Table I also shows the results when the cluster radius is \(R_{2}=3.7\) m. In this case, with a larger cluster, and hence more context accounted for, \(F^{ts,1}\) outperforms all other configurations on both the medium and longer horizons; it also outperforms the result of \(F^{ts,2}\) over \(T_{f}=4.8\) s obtained with \(R_{1}\). We can infer that with a larger cluster radius more context is accounted for, which helps long-term prediction, and hence fewer interaction symbols are required to accurately represent the true interactions between multiple agents.
### _Domain-Shift (DS) Evaluation_
In order to further assess the generalisation capabilities of the three approaches, we re-trained and compared the results on different but related scenarios. Unfortunately, another cafe scene in JRDB (_forbes-cafe-2019-01-22.0_) lacks the necessary information to transform local coordinates from a mobile robot into a fixed reference frame for further data processing. Therefore, without loss of generality, we chose another crowded environment (poster session PS-2, as in Fig. 4-bottom) to re-train our network configurations with \(R_{1}=1.2\) m, and tested the latter on a different but related scenario (poster session PS-1). The performance on the testing set (i.e. 10% of PS-2) is reported in Table II (first column). We notice that \(F^{ts,1}\) outperforms \(F^{QTC-4}\) and \(F^{QTC-6}\) on both medium- and long-term predictions, with 72.47% and 85.8% reductions in \(\mu(d_{QTC})\), and 93.9% and 94.48% reductions in \(\sigma(d_{QTC})\), for the 3.2 and 4.8 s horizons, respectively. We note that, even within the same network configuration, \(F^{ts,1}\) outperformed \(F^{ts,2}\). When looking at the transfer domain PS-1 in Table II (second column), on the 100% dataset, all the configurations succeeded in generalising to PS-1 on the medium and longer terms, except \(F^{ts,1}\) and \(F^{ts,2}\), which generalised well only on the medium term. Nevertheless, \(F^{ts,1}\) retains the best overall performance when looking only at PS-1.

TABLE I: Performance comparison between the QTC prediction approaches \(F^{QTC-4}\) and \(F^{QTC-6}\), and the motion prediction-based QTC analysis framework \(F^{ts}\) evaluated on \(QTC_{C_{1}}\) (\(F^{ts,1}\)) and \(QTC_{C_{2}}\) (\(F^{ts,2}\)), in the cafe scene of JRDB and over the \(T_{f}=3.2\) s and 4.8 s prediction horizons. All measures are unitless. \(\mu\) and \(\sigma\) are the normalised mean and standard deviation of the conceptual distance (\(d_{QTC}\)) over the 10% test set. \(R_{1}\) and \(R_{2}\) correspond to cluster radii of 1.2 m and 3.7 m, respectively. The best performance is highlighted in bold.

| **Cafe** | \(\mu^{10\%-R_{1}}\) | \(\sigma^{10\%-R_{1}}\) | \(\mu^{10\%-R_{2}}\) | \(\sigma^{10\%-R_{2}}\) |
| --- | --- | --- | --- | --- |
| \(F^{QTC-6}\) (3.2s) | 1.772 | 3.568 | 3.064 | 3.851 |
| \(F^{QTC-4}\) (3.2s) | 7.545 | 4.067 | 3 | 3.857 |
| \(F^{ts,1}\) (3.2s) | **0.464** | **0.22** | **0.32** | **0.16** |
| \(F^{ts,2}\) (3.2s) | 0.68 | 0.166 | 0.638 | 0.11 |
| \(F^{QTC-6}\) (4.8s) | 3.44 | 4.4 | 3.46 | 4 |
| \(F^{QTC-4}\) (4.8s) | 7.61 | 4.057 | 3.8 | 4.18 |
| \(F^{ts,1}\) (4.8s) | 3 | 1.254 | **0.25** | **0.18** |
| \(F^{ts,2}\) (4.8s) | **0.644** | **0.146** | 0.55 | 0.13 |
In summary, we can infer that \(F^{ts,1}\) is the best framework for developing qualitative predictive solutions that embed a social autonomous system with additional intelligent capabilities, such as inferring implicit intent communication and/or predicting a need from the surrounding agents. A typical real-world scenario can be a robot patrolling an elderly home care center and instantly inferring that an elder is approaching it to request assistance in taking a treatment (e.g. bringing water or pills). \(F^{ts,1}\) shows the lowest mean and standard deviation of the loss, over short and longer horizons, and across different cluster radii. It also transfers to other domains with a 12.2% decrease and a 17.8% increase in mean loss, over 3.2 s and 4.8 s, respectively.
## V Conclusion
In this work, we presented and compared three approaches for multi-agent prediction of qualitative interactions in dense social scenes, combining a symbolic motion representation with an input/temporal-attention network architecture. We implemented a radial clustering approach to address mainly the notion of social proximity, and formulated spatial interactions in terms of a qualitative trajectory calculus (QTC). We compared two symbol-driven neural networks for QTC prediction, \(F^{QTC-4}\) and \(F^{QTC-6}\), with a third purely data-driven approach, \(F^{ts}\), based on plain coordinates, and evaluated them over two fixed-time horizons. We showed that the latter solution outperforms the previous two, specifically when post-processed for a small number of QTC symbols (\(F^{ts,1}\)), and that it performs best in the domain-shift scenario. Our future work will be devoted to the exploitation of this prediction framework for effective human-robot spatial interactions in social navigation applications, including real-world environments such as warehouses and university premises. In addition, we will further improve our models to select and integrate learnable key features of the environment, whether static or dynamic, which could have some causal influence on the aforementioned interaction processes.
## Acknowledgement
The authors would like to thank Francesco Castelli for his support in designing the problem approach.
|
2307.16607 | $OIDC^2$: Open Identity Certification with OpenID Connect | OpenID Connect (OIDC) is a widely used authentication standard for the Web.
In this work, we define a new Identity Certification Token (ICT) for OIDC. An
ICT can be thought of as a JSON-based, short-lived user certificate for
end-to-end user authentication without the need for cumbersome key management.
A user can request an ICT from his OpenID Provider (OP) and use it to prove his
identity to other users or services that trust the OP. We call this approach
$OIDC^2$ and compare it to other well-known end-to-end authentication methods.
Unlike certificates, $OIDC^2$ does not require installation and can be easily
used on multiple devices, making it more user-friendly. We outline protocols
for implementing $OIDC^2$ based on existing standards. We discuss the trust
relationship between entities involved in $OIDC^2$, propose a classification of
OPs' trust level, and propose authentication with multiple ICTs from different
OPs. We explain how different applications such as videoconferencing, instant
messaging, and email can benefit from ICTs for end-to-end authentication and
recommend validity periods for ICTs. To test $OIDC^2$, we provide a simple
extension to existing OIDC server software and evaluate its performance. | Jonas Primbs, Michael Menth | 2023-07-31T12:28:17Z | http://arxiv.org/abs/2307.16607v2 | # OIDC\({}^{\bf 2}\): Open Identity Certification with OpenID Connect
###### Abstract
OpenID Connect (OIDC) is a widely used authentication standard for the Web. In this work, we define a new Identity Certification Token (ICT) for OIDC. An ICT can be thought of as a JSON-based, short-lived user certificate for end-to-end user authentication without the need for cumbersome key management. A user can request an ICT from his OpenID Provider (OP) and use it to prove his identity to other users or services that trust the OP. We call this approach OIDC\({}^{\bf 2}\) and compare it to other well-known end-to-end authentication methods. Unlike certificates, OIDC\({}^{\bf 2}\) does not require installation and can be easily used on multiple devices, making it more user-friendly. We outline protocols for implementing OIDC\({}^{\bf 2}\) based on existing standards. We discuss the trust relationship between entities involved in OIDC\({}^{\bf 2}\), propose a classification of OPs' trust level, and propose authentication with multiple ICTs from different OPs. We explain how different applications such as videoconferencing, instant messaging, and email can benefit from ICTs for end-to-end authentication and recommend validity periods for ICTs. To test OIDC\({}^{\bf 2}\), we provide a simple extension to existing OIDC server software and evaluate its performance.
## 1 Introduction
In most communication services, users identify each other through account profiles in which they provide their own identity information. To make these profiles more trustworthy, social network operators such as Meta and Twitter offer identity verification services for an additional fee that can only be used within their ecosystem. However, identity verification is often a cumbersome process that users may not want to repeat for each of their service platforms.
In addition, users must still trust the service provider to sufficiently verify identities and not impersonate them. End-to-end user authentication mechanisms attempt to solve this problem, but they often lack adoption due to poor usability. Therefore, reusing an account for a verified identity would be desirable. With modern single sign-on (SSO) services, users can reuse their existing accounts to log in to other services. The OpenID Connect (OIDC) protocol, which is based on the OAuth 2.0 authorization framework, is widely used for this purpose. However, OIDC is designed for user-to-service authentication and does not address the purpose of end-to-end user authentication.
In this paper, we define a new Identity Certification Token (ICT) for OIDC. It is similar to the ID Token which holds identity claims about a user, but also contains a verified public key of the user's client. As such, it can be thought of as a JSON-based, short-lived user certificate without the need for a revocation mechanism. The use of an ICT differs significantly from the use of an ID Token. A user requests an ICT from his OpenID Provider (OP) and presents it to another user's client to authenticate himself. If the other user trusts the issuing OP, his client verifies the integrity and validity of the ICT and authenticates the user using his client's public key contained in the ICT. As the OP certifies the identity of the user, we call this concept Open Identity Certification with OpenID Connect (OIDC\({}^{\mathbf{2}}\)). It facilitates mutual authentication of users if they trust each other's OP.
While most OPs have a rather superficial identity verification process for their accounts, some practice a more thorough verification. In particular, new players such as banks and government institutions that perform rigorous identity verification for their accounts are becoming OPs. With OIDC\({}^{\mathbf{2}}\), unknown users can be reliably authenticated if they have an OIDC account at a trusted OP. Some services already provide strong user authentication, but these methods are difficult to use. Many instant messaging services support the exchange of public keys between users when they meet in person. Public key infrastructures (PKIs) require certificate management by users and reliable revocation list checking. PGP or S/MIME have long been proposed for email authentication, but are rarely used [28]. Self-Sovereign Identity (SSI) technology is currently emerging and solves this problem with device-specific long-term keys in a wallet app. However, this requires not only revocation mechanisms, but also recovery mechanisms in case the phone with the wallet app is lost or stolen. OIDC\({}^{\mathbf{2}}\) provides a more user-friendly alternative for end-to-end authentication. The ICT is short-lived, eliminating the need for cumbersome key revocation mechanisms, which improves security. OIDC\({}^{\mathbf{2}}\) avoids complex key management across devices by simply requesting a new ICT from the OP whenever needed. Using trusted OPs that verify the identity of their users also eliminates the need for face-to-face key exchange.
Thus, a trusted OP can be compared with a trusted certification authority in a PKI or a trusted issuer in the SSI context. However, OIDC\({}^{2}\) is only a lightweight extension for end-to-end authentication with existing OIDC accounts. It is not intended to replace PKIs or SSIs.
The paper is structured as follows. In Section 2, we revisit basics of OAuth 2.0 and OIDC, and in Section 3, we review related authentication technologies. Section 4 introduces the concept of OIDC\({}^{2}\) and proposes the extension to the OIDC protocol. Trust relationships in OIDC\({}^{2}\), a classification of OPs, authentication with multiple ICTs, and validity periods of ICTs are discussed in Section 5. In Section 6, we explain how OIDC\({}^{2}\) can be applied to video conferencing, instant messaging, and email. To test OIDC\({}^{2}\), we provide a simple extension to the OIDC server software in Section 7, which we evaluate in Section 8. Section 9 concludes our findings.
## 2 Introduction to OAuth 2.0 and OIDC
We introduce basics of OAuth 2.0 and OIDC, as they are the underlying technologies for OIDC\({}^{2}\). We discuss their trust relationship and explain how they facilitate single sign-on (SSO). In the following, we capitalize technical terms from the official OAuth 2.0 and OIDC terminology. For ease of understanding, we omit non-essential steps in the protocols and refer to the authoritative standards for details.
### OAuth 2.0
The OAuth 2.0 authorization framework, as defined in RFC 6749 [11], is based on HTTP (RFC 7231 [9]) and the JavaScript Object Notation (JSON) RFC 8259 [3]. It allows a user to grant his Client scoped access to his resources on a server, e.g., to only read emails. A Client can be a web application, or a native email client application. In OAuth 2.0, this server is called Resource Server (RS) because it protects the user's Protected Resources (PR); the user is called the Resource Owner (RO).
Without OAuth 2.0, the RO would leave his credentials with his Client to log in directly to the RS. With OAuth 2.0, the RO logs in to an Authorization Server (AS) and tells the AS to authorize the Client to access a scoped set of PRs. To do this, the AS issues an Access Token (AT) to the Client. This AT allows the Client to access the PRs on the RS. In this way, OAuth 2.0 improves the security by granting Clients only a scoped access to the user's account without exposing the user's credentials to any of the Clients.
Figure 1a shows a simplified authorization request where the RO authorizes his Client to read email.
First, the Client requests access to the Scope read_email, which authorizes read-only access to the RO's emails (1). Then, the AS authenticates the RO (2) and the RO authorizes the Client for the requested Scope (3). Finally, the AS issues the AT and optionally a Refresh Token (RT) (4). This AT contains the authorized Scopes with a short validity period. It is signed with the AS's private key \(K^{-}_{AS}\). The RT is like a revocable session ID that authorizes the Client to refresh an expired AT without further user interaction.
Figure 1b describes a Resource Request where the Client uses the AT to access PRs on the RS. First, the Client requests the PRs and provides the AT to prove authorization (1). Then, the RS verifies the AT for a sufficient Scope, its expiration date, and the validity of its signature with the AS's public key \(K^{+}_{AS}\) (2). Finally, the RS responds with the PRs (3).
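As an illustration of step (2), the following sketch shows how an RS could verify a JWT-encoded AT with PyJWT; it assumes the AS issues ATs as signed JWTs with a space-separated scope claim, which is common practice but not mandated by OAuth 2.0.

```python
import jwt  # PyJWT

def verify_access_token(access_token, as_public_key, required_scope="read_email"):
    """Check the AT's signature and expiration, then check the granted scope."""
    claims = jwt.decode(access_token, as_public_key, algorithms=["RS256"])
    granted = claims.get("scope", "").split()
    if required_scope not in granted:
        raise PermissionError(f"missing scope: {required_scope}")
    return claims  # the RS can now serve the protected resource
```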
### OpenID Connect (OIDC)
OpenID Connect (OIDC) [24] is an authentication framework that allows users to be authenticated with an SSO identity through a third-party service, such as an email service. It extends OAuth 2.0 for this purpose. Unlike the example in Section 2.1, the SSO identity has no relationship to the third-party service.
In OIDC, an End-User (EU) is authenticated by an OpenID Provider (OP). The EU grants OIDC-specific Scopes to the EU's intended service, e.g., to his email client, which is called the Relying Party (RP). This communication flow is supported by OAuth 2.0, where the EU corresponds to a RO, the OP to an AS, and the RP to a Client. The OP issues an ID Token (IDT) to the RP. This IDT contains claims about the authentication event which typically includes information about the EU, such as his name or address. Since this Authorization Request is for authentication, it is called an Authentication Request in OIDC. With this mechanism, an EU can be authenticated with his SSO identity by different services without providing his credentials. Instead, the OP passes profile information about the EU to the RP as identity claims in the IDT, but the EU controls which information is passed.
Figure 1: OAuth 2.0 protocol flows.
Figure 1(a) describes a simplified Authentication Request where the EU is authenticated by the RP via the OP. First, the RP requests access to the profile Scope. If the EU grants this Scope, the RP is authorized to access the EU's profile information (1). Then, the EU is authenticated by the OP (2) and authorizes the RP for the requested Scope (3). Finally, the OP issues an IDT in addition to the AT and an optional RT (4). This IDT contains the identity claims related to the authorized profile Scope, such as the EU's name, profile picture, or date of birth, and is signed with the OP's private key \(K_{OP}^{-}\). The RP can verify the signature of the identity claims with the OP's public key \(K_{OP}^{+}\).
### Trust Relationship
The following describes the resulting trust relationship between entities in OAuth 2.0 and OIDC as shown in Figure 1(b).
The Client/RP never sees any credentials from the RO/EU because the authentication process is performed solely by the AS/OP. Therefore, the Client/RP must rely on the AS/OP to correctly verify the identity of the RO/EU and that the AS/OP will issue the correct AT/IDT of the authenticated RO/EU. Once the Client/RP of the RO/EU receives the tokens, the
Figure 2: OpenID Connect Authentication Flow and trust relationship.
RO/EU may not be able to verify what it is doing with them. The Client/RP may even leak the tokens, so the RO/EU must trust that it is working as intended. To minimize this risk, the RO/EU restricts the Client/RP's access to only the necessary PRs and identity claims. The RO/EU must also trust the AS/OP to protect his identity. This includes a secure login process and secure credential storage, but also that the AS/OP will not impersonate his account. Such impersonation would not even require any credentials since the AS/OP needs only its private key \(K^{-}_{AS}/K^{-}_{OP}\) to sign an AT/IDT.
### Single Sign-On with OAuth 2.0 and OIDC
Today, many services require dedicated accounts, forcing users to remember multiple service-specific credentials. With SSO systems, users only need to remember the credentials for one account. They can use this SSO identity to log in to multiple service accounts. Logging in to a service account with this SSO identity is typically solved with a combination of OAuth 2.0 and OIDC, as depicted simplified in Figure 3.
First, the Client initiates an OAuth Authorization Request to the service-specific AS (1). Instead of using service account credentials, the RO chooses to log in with his SSO identity via OIDC. To do this, the AS acts as a RP and initiates an OIDC Authentication Request to the OP (2). The EU is then authenticated by the OP with his credentials (3) and consents to the OP providing the RP with access to his profile information (4). Technically, this
Figure 3: Simplified authentication to an AS with OIDC and authorization of a Client with OAuth 2.0.
consent is an authorization in the sense of OAuth 2.0. The OP then responds with an IDT to the RP (5), which authenticates the EU to the AS and completes the OIDC-based authentication process. Now the authenticated RO authorizes the requested Scopes of the Client (6). Finally, the AS issues an AT and an optional RT to the Client (7).
In an SSO environment, the trust relationship changes slightly. While the user has to trust the AS not to impersonate his service account, he also has to trust the OP not to impersonate any of his service accounts. This makes the OP very powerful because it could impersonate any of the user's service accounts. Therefore, EUs should only choose trusted OPs.
## 3 Related Technologies
We review related technologies for end-to-end authentication and compare them to OIDC\({}^{\mathbf{2}}\).
### Identity Providers and Certificates
In a Public Key Infrastructure (PKI) [30], a Certificate Authority (CA) verifies that an entity's real-world identity and long-term public key \(K_{E}^{+}\) belong together, records them in a document, signs it, and issues it in the common X.509 certificate format (RFC 5280 [6]). Such X.509 certificates are used e.g. in the Secure / Multipurpose Internet Mail Extensions (S/MIME) standard (RFC 8551 [26]) to authenticate and encrypt email. However, due to the cumbersome identity verification and certificate installation process, only 2.50% of over 81 million emails examined in a study [28] were signed with S/MIME.
To simplify this process, Cisco proposed an expired Internet draft [2] where an Identity Provider (IdP) issues X.509 certificates to its users. According to their white paper [5], Cisco uses these certificates for end-to-end user authentication in the Webex videoconferencing service. If the session partner trusts the issuing IdP, the partner can authenticate the holder of this certificate, e.g., with a challenge/response method [15]. The draft [2] is designed for the SAML2 authentication standard [4], but OIDC performs better for mobile devices and cloud computing [20]. This may be one reason why the design has not been adopted by other applications and IdPs.
Conceptually, the presented approach is similar to OIDC\({}^{\mathbf{2}}\); we continue with the differences. X.509 is a binary format limited to a small set of standardized identity-related fields [26]. OIDC\({}^{\mathbf{2}}\) instead uses JSON Web Tokens (JWT) in RFC 7519 [14] to represent claims about the user. JWTs
are more flexible because the IdP can provide any claims in a JSON object, many of which are already standardized in the OIDC core specification [24] and eKYC [16]. JSON is also easier to parse in web applications than X.509 certificates. Also, long-lived user certificates require more attention than short-lived ICTs. In particular, certificate revocation lists must be managed and verified. In addition, certificates may need to be installed on different devices, which adds overhead and can create security issues.
### Self-Sovereign Identity (SSI)
In Self-Sovereign Identity (SSI) [19], participating entities generate their own asymmetric key pairs \(K^{\pm}\). Entities are identified by their decentralized identifier \(DID\), which is linked to at least one public key \(K^{+}\). Entities store their private key \(K^{-}\) in their digital wallet, e.g. an app on their smartphone. This can be used for end-to-end authentication with the key pair \(K^{\pm}\).
SSI describes three entities: the Issuer (I), the Holder (H), and the Verifier (V). The issuer knows or verifies the credentials of the holder and issues them to the holder as a Verifiable Credential (VC). This VC is signed by the issuer with his private key \(K^{-}_{I}\); it contains the issuer's \(DID_{I}\) and the credentials and \(DID_{H}\) of the holder. The holder holds this VC in his wallet and presents it to a verifier as a Verifiable Presentation (VP). This VP is signed by the holder with his private key \(K^{-}_{H}\); it contains the VC and the verifier's \(DID_{V}\). The verifier verifies this VP by checking the issuer's signature on the VP and the issuer's signature on the VC. If the verifier accepts the issuer as a trusted authority for issuing the holder's credentials, then the verifier trusts that these credentials belong to the holder.
Early implementations of SSI made use of blockchain technology [8] and used a public distributed ledger [7] to store the mapping of a \(DID\) to its associated public keys. Modern approaches are based on OAuth 2.0 and OpenID Connect, such as the mobile driving license in the United States standardized in ISO/IEC 18013-5:2021 [13]. This approach implements the Self-Issued OpenID Provider v2 (SIOPv2) [31] draft in the wallet app for key management. Driving license offices provide OAuth 2.0 based interfaces defined in the OpenID for Verifiable Credential Issuance (OpenID4VCI) draft [17] to issue driving licenses as VCs in the W3C format [27]. Drivers present these VCs as VPs to police officers using OAuth 2.0 based interfaces between smartphones defined in the OpenID for Verifiable Presentations (OpenID4VP) draft [29]. Another OIDC draft describes the issuance of identity claims of the ID Token as a VC [1]. This is similar to our approach, but requires the full OpenID4VC infrastructure to be deployed, which is currently rare.
Although SSI is now being rolled out for government use cases, there are
still open issues regarding usability [25][32] and identity recovery [33]. Since the private key is a long-term key that could be leaked during its lifetime, the system requires a key revocation list. But as argued by Ronald L. Rivest more than two decades ago [23], revocation lists should be avoided for security reasons. Modern technologies such as Hardware Security Modules (HSM) or Trusted Platform Modules (TPM) address this problem by protecting the private key inside the hardware. Here, the private key cannot be exported and can only be used for signing after the platform integrity has been verified and the user has been authenticated. This creates problems when a user wants to use VCs from other devices. In addition, if the device is lost or broken, the user needs a recovery method for the private key and DID that must be configured in advance.
OIDC\({}^{\mathbf{2}}\) does not have these problems. It uses short-lived ephemeral key pairs and ICTs, requires no specific hardware or software platform, and leverages existing account recovery capabilities. Compared to SSI approaches, it does not require frameworks that are currently rarely deployed, such as installed wallet apps, issued VCs, and a large number of newly implemented standards. Instead, OIDC\({}^{\mathbf{2}}\) requires only a small extension of OPs to use existing OIDC accounts. However, the ICT may also contain claims for which the issuing OP is not a trusted source, as discussed in Section 5.3.
### OpenPubkey
BastionZero has developed OpenPubkey [12] which is very similar to OIDC\({}^{\mathbf{2}}\). The RP of an EU can create a Client Instance Claim (CIC) that contains, among others, the RP's public key \(K_{C}^{+}\). When requesting an ID Token (see Figure 2a), the RP can optionally provide a nonce in the Authentication Request (1), which we omitted in Section 2.2. The OP will then insert this nonce into the ID Token before issuing it (4). With OpenPubkey, the RP offers its hashed CIC as a nonce to be inserted into the ID Token. After receiving the ID Token, the RP appends the CIC and signs it with its private key \(K_{C}^{-}\), resulting in a PubKey (PK) Token. The RP can use this PK Token to authenticate as the EU.
However, from our point of view, this approach makes the whole OIDC ecosystem insecure. In an SSO context, the RP is often a login service (see the AS in Figure 3) that the EU usually authorizes to access his profile information. If the service is malicious, the RP can request a PK Token with its own public key \(K_{C}^{+}\) to impersonate the EU without his knowledge. The authors' solution to this problem is to have the authenticating user only accept e2e authentications from a trusted RP, identified by its Client ID contained in the PK Token. First, this leaves a high burden on the user,
which is unacceptable since it is difficult for the user to identify trusted RPs. Second, the EU's trust in a service, such as an online store, may be sufficient to be authenticated by that store, but it may not be sufficient to allow the store to impersonate him. Third, in open communication systems such as email, there are many clients, and it is unlikely that all of them are trusted. This limits the use of OpenPubkey to a small set of explicitly trusted services and clients. We believe that these three problems are unacceptable. In contrast, with OIDC\({}^{\mathbf{2}}\), the EU does not risk being impersonated when logging in to a malicious service.
OIDC\({}^{\mathbf{2}}\) solves this problem by introducing a new ID Certification Token (ICT) that can only be requested by an RP with sufficient scope for e2e authentication. This means that an EU can control whether to issue only an ID Token or also an ICT.
## 4 OIDC\({}^{\mathbf{2}}\): Open Identity Certification with OIDC
This section describes the OIDC\({}^{\mathbf{2}}\) concept in more detail and proposes a simple OIDC protocol extension to support it.
### Concept of OIDC\({}^{\mathbf{2}}\)
We define new terminology, introduce the Identity Certification Token (ICT), and explain how to use it.
#### 4.1.1 Terminology
Consider a user of one application authenticating to a user of another application. The user authenticating himself is called the End-User (EU), his application is called the Client. The other user is called the Authenticating User (AU), and his application is called the Authenticating Party (AP). We also assume that the EU has an SSO identity provided by an OpenID Provider (OP) trusted by the AU. The terminology used for the EU, Client, and OP is consistent with the combined OAuth 2.0 and OIDC scenario described in Section 2.4. However, OIDC\({}^{\mathbf{2}}\) does not require this scenario.
#### 4.1.2 Identity Certification Token (ICT)
We introduce the ICT, which addresses the end-to-end authentication use case. The ICT contains the Client's verified public key \(K_{C}^{+}\), an application
specific Scope, an expiration date, and a unique identifier of the EU's SSO identity. It may also contain other claims about the user which are not necessarily verified by the OP.
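For illustration, the payload of an ICT could look like the following; apart from the standard JWT/OIDC claims (iss, sub, iat, exp, name, email), the claim names, in particular how the Client's public key and the e2e Scope are embedded, are assumptions rather than a normative format.

```python
# Hypothetical ICT payload (claim names partly assumed for illustration).
example_ict_claims = {
    "iss": "https://op.example.com",   # issuing OpenID Provider
    "sub": "248289761001",             # unique identifier of the EU's SSO identity
    "name": "Jane Doe",                # further identity claims released by the EU
    "email": "[email protected]",
    "scope": "e2e_auth_email",         # application-specific e2e Scope
    "cnf": {                           # the Client's verified public key K_C^+
        "pem": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"
    },
    "iat": 1690000000,                 # issued at
    "exp": 1690000300,                 # short validity period
}
```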
### ICT Request
The Client uniquely requests an ICT from the OP for each end-to-end (e2e) authentication process. Figure 4a shows a simplified ICT Request.
First, the Client performs an OAuth 2.0 Authorization Request as described in Section 2.1 (1-4) to obtain an Access Token (AT) for the ICT Request. For this purpose, the AT requires a Scope sufficient to access the EU's profile information, e.g., profile, and an e2e Scope, e.g., e2e_auth_email. The Client then uses the AT to authorize an OAuth 2.0 Resource Request for an ICT (5) from the OP, called an ICT Request. For this purpose, the Client uniquely generates a new public key \(K_{C}^{+}\) and presents it to the OP. The Client also presents a Proof of Possession (PoP) of the corresponding private key \(K_{C}^{-}\), e.g., by signing a unique nonce. The OP verifies the validity of the AT and the PoP (6). If valid, the OP signs the ICT with its private key \(K_{OP}^{-}\) corresponding to its published public key \(K_{OP}^{+}\) and responds with the ICT (7). When the ICT expires and a new ICT is required, the Client repeats steps (5) to (7) to request a new ICT for a new key pair.
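A possible Client-side sketch of steps (5)-(7) is shown below; the endpoint path, the request fields, and the use of an Ed25519 key are illustrative assumptions and not part of a standardized interface.

```python
import base64
import secrets
import requests
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def request_ict(ict_endpoint, access_token):
    """Generate a fresh key pair, prove possession of the private key by
    signing a nonce, and ask the OP to issue an ICT for the public key."""
    private_key = Ed25519PrivateKey.generate()   # new ephemeral key pair
    nonce = secrets.token_urlsafe(16)
    pop = base64.urlsafe_b64encode(private_key.sign(nonce.encode())).decode()
    public_pem = private_key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    ).decode()

    response = requests.post(
        ict_endpoint,
        headers={"Authorization": f"Bearer {access_token}"},  # AT with e2e scope
        json={"public_key": public_pem, "nonce": nonce, "proof_of_possession": pop},
        timeout=10,
    )
    response.raise_for_status()
    return private_key, response.json()["ict"]
```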
Figure 4: Protocol extension of OIDC\({}^{2}\).
### E2E Authentication with ICT
The Client uses the ICT to authenticate its EU to the AP's AU as shown in Figure 4b. First, the Client passes the ICT containing its public key \(K_{C}^{+}\) to the AP and provides a PoP for the corresponding private key \(K_{C}^{-}\) (1). To do this, the Client signs either a unique nonce provided by the AP or a unique session-specific identifier. Alternatively, the Client can prove the possession by establishing a secure channel based on the private key \(K_{C}^{-}\). In Section 6, we show and explain use cases that take advantage of these three options. The AP then verifies the Client's PoP (2) using the public key \(K_{C}^{+}\) from the ICT and verifies the AU's trust relationship with the OP (3). This may require user interaction or the use of whitelists, discussed further in Section 5.2. If the AU trusts the OP, the AP checks the expiration date and verifies the signature of the ICT using the OP's public key \(K_{OP}^{+}\) (4). If successful, the EU has proven his SSO identity to the AU (5).
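On the authenticating side, steps (2)-(4) could be implemented roughly as follows, assuming the Client uses an Ed25519 key as in the previous sketch and the ICT embeds that key as a PEM string; the claim layout and the whitelist-based trust decision are illustrative assumptions.

```python
import jwt  # PyJWT
from cryptography.hazmat.primitives.serialization import load_pem_public_key

TRUSTED_OPS = {"https://op.example.com"}  # AU-maintained whitelist of trusted OPs

def authenticate_peer(ict, op_public_key, nonce, pop_signature):
    """Verify an incoming ICT and the PoP over the nonce we provided (bytes)."""
    # (4) check the ICT's signature and expiration with the OP's public key
    claims = jwt.decode(ict, op_public_key, algorithms=["ES256"])
    # (3) check the trust relationship with the issuing OP
    if claims["iss"] not in TRUSTED_OPS:
        raise PermissionError("ICT issued by an untrusted OpenID Provider")
    # (2) check the PoP with the Client's public key embedded in the ICT
    client_key = load_pem_public_key(claims["cnf"]["pem"].encode())
    client_key.verify(pop_signature, nonce)  # raises InvalidSignature on failure
    # (5) the EU behind `sub` is authenticated
    return claims["sub"], claims
```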
## 5 Security Considerations
First, we discuss how OIDC\({}^{\bf 2}\) shifts the burden of thorough authentication from service providers to identity providers. Then, we analyze the trust relationship between OIDC\({}^{\bf 2}\) entities and propose a trust classification for OPs. Finally, we propose authentication with multiple ICTs and discuss the correlation between the validity of an ICT and its corresponding key pair.
### 5.1 Service Provider vs. OpenID Provider
In most communication services, users must rely on the identity claims of their communication partners provided by the service provider, with no way to verify them. OIDC\({}^{\bf 2}\) allows users to verify each other's identities without having to trust the service provider. This only requires the Client to implement OIDC\({}^{\bf 2}\) and the protocol to provide a way to exchange the ICTs. The service provider does not need to implement OAuth 2.0 for the Client or provide an OP. This improves the overall security of the service and prevents privacy issues by eliminating the need for the service provider to collect sensitive information about its users.
### 5.2 Trust Relationship
Figure 5 shows an overview of the trust relationship between the entities of the OIDC\({}^{\bf 2}\) protocol.
On the proving side, the End-User (EU) trusts his OP to protect his identity from impersonation attacks and not to impersonate him. This includes that the OP will only issue authorized ICTs. Furthermore, the EU trusts that his Client will operate as intended. This means that the Client will protect its private key \(K_{C}^{-}\) from third parties and use the ICT only for the intended authentication processes. To limit potential misuse by the Client, the ICT is scoped to a specific context. For example, this prevents an email client from misusing the ICT to sign contracts on behalf of the EU.
On the authentication side, the Authenticating User (AU) trusts the OP to protect the EU's identity and to sufficiently verify the Client's possession of its private key \(K_{C}^{-}\). The AU also trusts the OP to certify sufficiently trustworthy identity claims with the issued ICT, which we will discuss in more detail in Section 5.3. To ensure that the authentication process is intended by the EU, the AU trusts the OP to issue only EU-authorized ICTs. The AU must also trust his Authenticating Party (AP) to correctly verify the received ICT and PoP.
The AU needs a basis for trusting the OP. We offer two solutions that can be combined. First, the AP relies on a trusted identity federation such as the Global Assured Identity Network (GAIN) [10], which consists of international OPs such as banks, insurance companies, or government institutions, all of which manage fully verified real-world identities. Second, the AU maintains his own whitelist of OPs, such as social media platforms or his business partners. Not every OP has the same level of trustworthiness, so we classify them in the next section.
### 5.3 Classification of OpenID Providers
When working with OIDC\({}^{2}\), we suggest three classes of OPs to consider.
#### 5.3.1 Insecure OpenID Providers
OPs can be considered insecure for a variety of reasons. They may not be able to sufficiently protect their users' credentials, or they may be untrustworthy
Figure 5: Trust relationship between OIDC\({}^{2}\)’s entities.
for political or economic reasons. For example, they may certify potentially false or insufficiently verified claims. If an AU considers an OP insecure, his AP will not accept any ICTs issued by that OP.
#### 5.3.2 Authoritative OpenID Providers (AOP)
We classify an OP as an Authoritative OP (AOP) for specific claims, if the AU accepts the OP as an authority for those claims and trusts the OP to protect managed identities. For example, an email server's OP is authoritative for email addresses within its own domain. Because an OP issues a unique subject identifier for each SSO identity by specification, an OP is always authoritative for its associated sub claim.
This makes AOPs sufficient for scenarios where an EU wants to be authenticated with specific claims. For example, if the AU knows the EU's email address, the EU uses an ICT issued by his email provider's OP to authenticate on a social media platform. However, AOPs are only sufficient to certify identity claims for which they are an authority. To certify real-world identity claims such as names or addresses, the AOP must typically be the OP of a trusted government organization.
#### 5.3.3 Verifying OpenID Providers (VOP)
There is not always an AOP for every claim the EU wants to be authenticated with. Instead, the EU can use a third-party service that the AU trusts to sufficiently verify his identity claims and protect his account. We call the OP of this third-party service a Verifying OpenID Provider (VOP). This VOP could check the EU's passport to verify his name, or send him a verification code via SMS to verify his phone number.
There are already OpenID Providers such as banks or insurance companies that are required by law to verify their customers' claims. However, such verification processes are often costly, which is why VOPs often do not verify all claims or offer verification only as an optional service; examples are the social media platforms Facebook and Twitter. A VOP can be an AOP at the same time. For example, banks are VOPs for the name of an EU, but also AOPs for bank account numbers.
### 5.4 Authentication with Multiple ICTs
The classification of an OP is up to the AU, i.e., the AU may not accept ICTs from certain OPs. Since an EU may not know the AU's classification
in advance, the EU can present ICTs from different OPs and the corresponding PoPs to increase the likelihood of successful authentication by the AU. However, this requires more work for the EU as he has to log in to all these OPs to receive ICTs. If the AP receives multiple ICTs, it presents them to the AU, which then selects the most trusted issuer or rejects them all. Furthermore, the EU must be aware that presenting multiple ICTs also exposes all his presented accounts to the AU.
### 5.5 Validity of ICTs and Client Key Pairs
An ICT contains the public key \(K_{C}^{+}\) of the Client. By issuing this ICT, the OP certifies that the corresponding EU authorized the Client for e2e authentication with the contained identity claims. An attacker trying to impersonate the EU needs the private key corresponding to the ICT. We limit the potential for such misuse through a leaked private key by making the ICT short-lived and limited in scope. Since a few minutes are sufficient for most use cases (see Section 6), we recommend setting the ICT validity period to no more than 1 hour.
We propose that an ephemeral and unique key pair \(K_{C}^{\pm}\) expires along with its associated ICT, eliminating the need for key revocation mechanisms. However, Sections 6.2 and 6.3 show that long-term key pairs are useful in some cases. Therefore, we further propose that an ICT may also contain a long-term public key, which must be indicated by including the URL of the key's revocation server. Such a key is valid until revoked and is associated with the claims in the ICT. Some services control the lifetime of public keys by associating them with user profiles. An example of this approach is the Signal protocol (see Section 6.2). In such applications, a user can be authenticated with a public key received from an ICT as long as the public key contained in it is associated with the profile. In any case, an active session can remain valid even after the underlying key pair \(K_{C}^{\pm}\) expires (see Section 6.1).
## 6 Use of OIDC\({}^{2}\) in Applications
We explain how end-to-end authentication is currently implemented in the communication applications video conferencing, instant messaging, and email, and how it can be improved by OIDC\({}^{2}\). In addition, we recommend validity periods for ICTs depending on these applications.
### 6.1 Video Conferences
In many videoconferencing systems, users must rely on the identities of their communication partners provided by the service's IdP. As an incident in 2022 demonstrated [21], new technologies such as deep fakes mean that identifying a communication partner by the video stream alone does not suffice. We explain how video conferencing services use OAuth 2.0 and OpenID Connect and how they can benefit from OIDC\({}^{2}\).
#### 6.1.1 End-to-End Authentication in Video Conferences
In video conferencing (VC), users log in to the service provider's OAuth 2.0 Authorization Server (AS) either directly with their credentials or through the OP with their OIDC identity, as explained in Section 2.4. After authentication, the VC service provider's AS gets an ID Token from the OP. After authorization, the Client gets an Access Token (AT) from the AS. The Client uses the AT to prove its authorization to the VC server. The AT contains the EU's VC account ID, which the VC server uses to identify the EU's profile that the VC server provides to the communication partner.
A communication partner, aka the AU, identifies the EU using the identity claims and the EU's public key \(K^{+}\) provided by the VC server. Based on this public key, the client of the AU, aka the AP, establishes an end-to-end encrypted communication channel with the Client of the EU. If the AU does not trust the VC server, he cannot trust the identity of the EU and therefore cannot be sure with whom he is establishing a secure channel.
#### 6.1.2 End-to-End Authentication with OIDC\({}^{2}\)
We propose that the EU authenticates to the AU using an ICT obtained directly from the EU's OP. After a mutual ICT exchange, the Client and the AP use the contained verified public keys to establish a secure channel, as shown in Figure 6.
First, Client A generates an ephemeral key pair \(K_{A}^{\pm}\) and contacts the OP of the EU's choice to obtain an ICT containing its public signing key \(K_{A}^{+}\) (1). The Client signs this ICT and a unique session identifier with its private key \(K_{A}^{-}\) and sends it to the AP via the VC server (2). The session identifier is required to prevent relaying attacks. If the AU trusts the EU's OP, Client B generates its own ephemeral key pair \(K_{B}^{\pm}\) (3) and requests an ICT containing its public signing key \(K_{B}^{+}\) from the AU's OP (4). The AP signs its ICT and the session identifier with its private signing key \(K_{B}^{-}\) and responds to the Client via the VC server (5). If the EU trusts the AU's OP (6), then the Client and AP have successfully performed mutual authentication, enabling them to establish a secure e2e encrypted and authenticated channel (7).
#### 6.1.3 Discussion
We explained how OIDC\({}^{\mathbf{2}}\) can be used in the context of OAuth 2.0 and OpenID Connect. OIDC\({}^{\mathbf{2}}\) can also be used to establish a secure channel based on an untrusted communication channel. The PoP is the signature over a unique session identifier and the ICT. This session identifier must not be reused and should therefore contain unique components such as a timestamp and an identifier of the communication partner. Since starting a videoconference takes only a few seconds, the validity of an ICT can be very short, e.g., 5 minutes, to avoid time synchronization problems. When the ICT expires, an active secure channel remains valid until it is closed.
### 6.2 Instant Messaging
We suggest how the instant messaging (IM) service Signal [22] could benefit from OIDC\({}^{\mathbf{2}}\).
#### 6.2.1 End-to-End Authentication in Signal
In the Signal protocol [18], users are identified by their phone number and public key \(K^{+}\), both of which are verified and published by the service provider. Two users establish an end-to-end encrypted communication channel and authenticate using digital signatures with their respective private key \(K^{-}\), which remain on their primary devices. To prove his identity to a communication partner and detect man-in-the-middle attacks, a user must
Figure 6: End-to-end authentication with OIDC\({}^{\mathbf{2}}\) for video conferences.
present his public key via QR code. The partner's client then verifies that the scanned public key \(K^{+}\) matches the public key \(K^{\prime+}\) used to authenticate the channel. This is a strong but cumbersome verification mechanism that requires either a face-to-face meeting or a secure side channel.
#### 6.2.2 End-to-End Authentication with OIDC\({}^{\bf 2}\)
We propose an end-to-end authentication method for instant messaging based on OIDC\({}^{\bf 2}\), illustrated in Figure 7.
Assume that two IM clients have already established a secure channel and know each other's public key \(K^{\prime+}\) provided by the service provider. Using OIDC\({}^{\bf 2}\), the IM Client requests an ICT from its EU's OP for this public key \(K^{+}\) and sends the ICT over the secure channel to the AP. If the AU trusts the EU's OP, the AP verifies the received ICT and compares the contained public key \(K^{+}\) with the assumed public key \(K^{\prime+}\).
#### 6.2.3 Discussion
This example shows that an ICT can also be used to authenticate an established secure channel. Therefore, the ICT must be issued for the public key \(K^{+}\) used to authenticate the channel. Being able to send messages through this secure channel serves as implicit PoP for the corresponding private key \(K^{-}\).
The ICT signed by an OP trusted by the AU thus replaces the need for a face-to-face meeting or a secure side channel. This requires that the ICT is valid when the AP receives it; the AP then adds a timestamp to it. After that, the AP can verify the ICT at any later time, so the AU does not need to confirm trust in the OP immediately. Since IM services deliver their messages to the receiving client very quickly, we recommend a validity period of 5 minutes for ICTs in this context. If the ICT is transmitted while the AP is offline, the verification process must be repeated.
This approach does not use any Signal-specific features and can therefore be applied to any other IM service. It shows that existing key management systems like Signal's can be extended with OIDC\({}^{\bf 2}\) as an authentication layer.
Figure 7: Unilateral E2E IM authentication with OIDC\({}^{\bf 2}\).
It also shows that OIDC\({}^{\mathbf{2}}\) can be used without any OAuth 2.0 Authorization Server involved in the communication protocol.
### 6.3 Email
For the past three decades, S/MIME and PGP have been state-of-the-art standards for secure end-to-end authenticated and optionally encrypted email communication. But with only 2.8% of emails signed and 0.06% encrypted [28], neither of these technologies has taken off, probably due to their complex key generation and management. We briefly describe email authentication with PGP and S/MIME, propose a variant using OIDC\({}^{\mathbf{2}}\), and explain its advantages.
#### 6.3.1 End-to-End Authentication with PGP and S/MIME
The user generates a long-term PGP key pair \(K_{PGP}^{\pm}\) and imports it into his email client. When sending an email, the client attaches the public PGP key \(K_{PGP}^{+}\) and signs the whole email with the private PGP key \(K_{PGP}^{-}\). The recipient of the email then verifies the signature using the provided public key \(K_{PGP}^{+}\). To authenticate the sender, the receiver must know whether the public key \(K_{PGP}^{+}\) belongs to the sender. This requires a cumbersome Web of Trust-based approach in which people must often meet in person to sign each other's key pairs or exchange public key fingerprints. Email authentication with S/MIME works similarly, but with a PKI-based approach using S/MIME certificates instead of the Web of Trust approach. The drawbacks have been discussed in Section 3.1.
#### 6.3.2 End-to-End Authentication with OIDC\({}^{\mathbf{2}}\)
While S/MIME benefits from the trust layer of a PKI, PGP lacks such a layer. This is where OIDC\({}^{\mathbf{2}}\) can help, as shown in Figure 8. For each email, the
Figure 8: E2E email authentication with PGP and OIDC\({}^{\mathbf{2}}\).
EU's Client requests a unique ICT for a uniquely generated ephemeral public PGP key \(K^{+}_{PGP}\). This requires the EU to log in to his OP and authorize the issuance of the ICT for the email context. The Client then attaches the ICT and PGP-related attachments to the email, e.g. the public PGP key \(K^{+}_{PGP}\) for normal PGP compatibility. Before sending, the Client signs the entire email with its private PGP key \(K^{-}_{PGP}\).
The receiving Client uses the attached PGP public key \(K^{+}_{PGP}\) to verify the signature as usual in PGP. To authenticate the sender, the receiving Client, aka Authenticating Party (AP), verifies the ICT using the OP's public key \(K^{+}_{OP}\) and compares its contained public key to the attached public PGP key \(K^{+}_{PGP}\). To verify the trust level of the OP, the AP can use preconfigured policies or ask his user, aka the Authenticating User (AU). If the integrity of the ICT is verified and the OP is trusted, the AU can rely on the identity claims that identify the EU.
#### 6.3.3 Discussion
The email, including its attachments and timestamps, is considered unique. Thus, the signed email serves as Proof of Possession (PoP) of the corresponding private key \(K^{-}_{PGP}\). If an attacker modifies the email or replaces the ICT, this will be detected when the signature is verified.
The AU can rely on the inbox timestamp added to the email by his trusted email server when verifying the PoP and ICT. Therefore, a validity period of 1 hour is sufficient as most emails are delivered to the server within this period. However, if the AU does not trust his email server, his trusted email client must receive the email within this period. Otherwise the ICT will expire and the Client cannot trust the key pair and therefore cannot trust the email.
This shows the limitations of short-term keys in OIDC\({}^{2}\). However, as described in Section 5.5, the EU could choose a long-term key such as a normal PGP key instead of the ephemeral key. The OP must then add the URL of the key server that publishes the PGP key revocation to the ICT. When verifying the ICT, the AP must also verify that the PGP key has not been revoked.
## 7 Implementation
We present a simple extension for any OIDC server to handle ICT Requests including a PoP for the verification of the Client's public key. The implementation is available on GitHub1. However, we recommend it only for testing purposes.
Footnote 1: [https://github.com/oidc2/op-extension](https://github.com/oidc2/op-extension)
To request a token, a Client sends a Token Request to the so-called /token Endpoint of the OpenID Provider. That is a special path in the URL of the OIDC server. Moreover, there is also a /userinfo Endpoint that returns information about the user upon a Userinfo Request.
Many services are not directly reachable on the Internet but via a reverse proxy. A reverse proxy is an HTTP server that resides in the same network as the server, terminates the TLS connection between client and server, and relays data between the client and the application server.
We propose the generic extension to an OIDC server in Figure 9 so that the OIDC server can handle ICT Requests.
We define a novel /ict Endpoint which runs as a microservice separately from the OIDC server. The /ict Endpoint and the OIDC server operate behind a reverse proxy. The reverse proxy forwards any conventional OIDC requests to the OIDC server and ICT Requests to the /ict Endpoint.
The /ict Endpoint expects an AT with Scopes for identity claims, e.g., profile for name and birth date, and a scoped context for end-to-end authentication, e.g., e2e_auth_email for the email context, in the ICT Request. It extracts the AT and includes it in a Userinfo Request to the OIDC server. After reception of the user information, the /ict Endpoint checks whether the EU possesses the private key \(K_{C}^{-}\) for the public key \(K_{C}^{+}\) contained in the ICT request, which is explained later. If the check is successful, the /ict Endpoint issues an ICT with appropriate information and signs it with the private key \(K_{OP}^{-}\) of the OP. Thus, \(K_{OP}^{-}\) must be available to the /ict Endpoint. This is a reason why we recommend this simple prototype only for testing purposes but not for production. Finally, the /ict Endpoint returns the ICT to the Client.
To save communication overhead between the /ict Endpoint and the Client, we propose the following PoP. The Client chooses a nonce, concatenates it with a timestamp, signs the concatenation with its private key \(K_{C}^{-}\), and includes concatenation and signature in the ICT Request. The /ict
Figure 9: Simple extension to an OIDC server to handle ICT Requests.
Endpoint verifies the signature with the public key \(K_{C}^{+}\) and caches the nonce for 30 seconds. To counter replay attacks, the /ict Endpoint accepts only ICT Requests with timestamps in the concatenation that deviate at most 15 seconds from its own time and whose nonce is not in the cache.
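A possible sketch of this replay protection, using the 15-second clock-skew bound and the 30-second nonce cache described above; the data structure below is our own simplification and not the actual code of the prototype.

```go
// Sketch of the /ict Endpoint's replay protection: timestamps may deviate
// at most 15 s from local time and nonces are cached for 30 s.
package main

import (
	"fmt"
	"sync"
	"time"
)

const (
	maxSkew  = 15 * time.Second
	cacheTTL = 30 * time.Second
)

type nonceCache struct {
	mu   sync.Mutex
	seen map[string]time.Time
}

// accept reports whether an ICT Request with the given nonce and timestamp
// should be processed.
func (c *nonceCache) accept(nonce string, ts time.Time) bool {
	now := time.Now()
	if ts.Before(now.Add(-maxSkew)) || ts.After(now.Add(maxSkew)) {
		return false // timestamp deviates too much from the endpoint's time
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	// Drop cache entries older than 30 s before checking for replays.
	for n, t := range c.seen {
		if now.Sub(t) > cacheTTL {
			delete(c.seen, n)
		}
	}
	if _, replayed := c.seen[nonce]; replayed {
		return false
	}
	c.seen[nonce] = now
	return true
}

func main() {
	c := &nonceCache{seen: map[string]time.Time{}}
	fmt.Println(c.accept("abc", time.Now())) // true
	fmt.Println(c.accept("abc", time.Now())) // false: replayed nonce
}
```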
## 8 Evaluation
We evaluate the performance of the provided /ict Endpoint, written in Go, compared to the /token Endpoint of the Keycloak 22.0.1 and Authentik 2023.6.1 OIDC server software. They are written in Java and Go, respectively.
We conduct the following two experiments. (A) A Client sends a Refresh Token to the /token Endpoint of the OIDC server and obtains an ID Token, an RT, and an AT. (B) A Client generates a PoP, sends an AT to the new /ict Endpoint, and obtains an ICT. Both experiments are conducted over one minute, i.e., a token is requested, returned, and then the next request is sent. We ran each experiment 20 times and computed mean requests per minute including confidence intervals with a confidence level of 95% (\(CI_{0.95}\)) using the Student's t-distribution. We automate this process with the help of a web application2.
Footnote 2: The application is programmed in Angular 15 and its code is available on GitHub [https://github.com/oidc2/benchmark](https://github.com/oidc2/benchmark)
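For reference, the reported means and confidence intervals can be computed as sketched below; the sample values in main are made up for illustration, and 2.093 is the two-sided 95% Student's t critical value for 19 degrees of freedom (20 runs).

```go
// Sketch of the reported statistic: mean requests per minute over 20 runs
// with a 95% confidence interval based on the Student's t-distribution.
package main

import (
	"fmt"
	"math"
)

// meanCI95 assumes len(samples) == 20, i.e. 19 degrees of freedom.
func meanCI95(samples []float64) (mean, lo, hi float64) {
	n := float64(len(samples))
	for _, x := range samples {
		mean += x
	}
	mean /= n

	var ss float64
	for _, x := range samples {
		ss += (x - mean) * (x - mean)
	}
	stdErr := math.Sqrt(ss/(n-1)) / math.Sqrt(n)

	const t95 = 2.093 // two-sided 95% critical value for 19 d.o.f.
	return mean, mean - t95*stdErr, mean + t95*stdErr
}

func main() {
	// Hypothetical per-minute request counts from 20 benchmark runs.
	runs := []float64{993, 994, 995, 992, 996, 994, 993, 995, 994, 993,
		995, 994, 992, 996, 994, 993, 995, 994, 994, 993}
	m, lo, hi := meanCI95(runs)
	fmt.Printf("mean %.2f, CI_0.95 [%.2f; %.2f]\n", m, lo, hi)
}
```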
The OIDC server, its user database based on PostgreSQL 15.2, and the new /ict Endpoint run in separate Docker containers3. The host is a Lenovo ThinkPad T14s with a 2.1 GHz AMD Ryzen 5 PRO 4650U processor, 16 GB RAM, and a 512 GB SSD with Windows 11 22H2 x64, and running the Docker engine4 24.0.2 in WSL 25. While Authentik can import and export any private keys, Keycloak cannot export private keys and it can import only RSA keys. Therefore, we chose RS256 for signatures, i.e., a 2048 bit RSA key with the SHA-256 hashing algorithm to make experiments with different server software comparable.
Footnote 3: [https://github.com/oidc2/op-extension/blob/main/docker-compose.yaml](https://github.com/oidc2/op-extension/blob/main/docker-compose.yaml)
Footnote 4: [https://www.docker.com/](https://www.docker.com/)
Footnote 5: [https://learn.microsoft.com/en-us/windows/wsl/](https://learn.microsoft.com/en-us/windows/wsl/)
With Keycloak, a mean request rate of 994.00 IDTs (A) (\(CI_{0.95}\): [992.97; 995.03]) and 988.20 ICTs (B) (\(CI_{0.95}\): [986.72; 989.68]) could be served per minute6. In contrast, with Authentik, 190.95 IDTs (A) (\(CI_{0.95}\): [190.35; 191.35]) and 891.65 ICTs (B) (\(CI_{0.95}\): [886.04; 897.26]) could be served per minute. Thus, the tested version of Keycloak is more efficient than the tested
version of Authentik. Moreover, the provided /ict Endpoint is as efficient as the built-in /token Endpoint or even more efficient.
We compare the work done by the /token Endpoint and the /ict Endpoint. (A) The /token Endpoint validates the RT, creates an IDT, and signs the AT and the IDT with its private key. The integrity of the RT is secured differently7. (B) The /ict Endpoint validates the PoP for the Client's public key, and requests user information using an AT from the /userinfo Endpoint, which validates the AT. Then the /ict Endpoint creates and signs the ICT.
Footnote 7: Authentik uses a nonce for the RT stored in the database while Keycloak secures the RT with an HMAC.
The effort for creating and signing an IDT in (A) and an ICT in (B) is likely similar, as both require RT/AT validation, a database request, and a token signature. Thus, creating the RT and AT and signing the AT in (A) appears to be at least as time-consuming as creating the PoP at the Client and validating the PoP at the /ict Endpoint in (B).
## 9 Conclusion and Future Work
This paper introduced Open Identity Certification with OpenID Connect (OIDC\({}^{\mathbf{2}}\)), which allows End-Users (EUs) to request a verifiable Identity Certification Token (ICT) from an OpenID Provider (OP). An ICT contains claims about an EU and a public key chosen by the EU. Authenticating Users can authenticate EUs with an ICT if they trust the issuing OP. We compared OIDC\({}^{\mathbf{2}}\) to existing end-to-end authentication methods and found that OIDC\({}^{\mathbf{2}}\) is easier to use and improves security by eliminating the need for revocation lists. We suggested how OIDC\({}^{\mathbf{2}}\) can be implemented based on the OIDC framework. We discussed security considerations for and general improvements with OIDC\({}^{\mathbf{2}}\): the trust relationship among its entities, a classification of OPs and their utilization with OIDC\({}^{\mathbf{2}}\), authentication with multiple ICTs to increase the likelihood of successful authentication, as well as appropriate (short) validity periods for ICTs. Furthermore, we proposed how OIDC\({}^{\mathbf{2}}\) can be used for simple and user-friendly end-to-end authentication for video conferences, email, and instant messaging. Finally, we provided a simple, open-source extension for OIDC server software to support OIDC\({}^{\mathbf{2}}\) for testing purposes. We demonstrated its compatibility with Authentik and Keycloak and showed that the performance of the new /ict Endpoint is comparable to or better than that of the existing /token Endpoints.
To demonstrate the feasibility of OIDC\({}^{\mathbf{2}}\) for end-to-end authentication, we plan to integrate OIDC\({}^{\mathbf{2}}\) for video conferences based on the open WebRTC
protocol, for instant messaging based on the open Matrix protocol, and for email communication based on the PGP standard.
|
2309.10902 | VALID: A perceptually validated Virtual Avatar Library for Inclusion and
Diversity | As consumer adoption of immersive technologies grows, virtual avatars will
play a prominent role in the future of social computing. However, as people
begin to interact more frequently through virtual avatars, it is important to
ensure that the research community has validated tools to evaluate the effects
and consequences of such technologies. We present the first iteration of a new,
freely available 3D avatar library called the Virtual Avatar Library for
Inclusion and Diversity (VALID), which includes 210 fully rigged avatars with a
focus on advancing racial diversity and inclusion. We present a detailed
process for creating, iterating, and validating avatars of diversity. Through a
large online study (n=132) with participants from 33 countries, we provide
statistically validated labels for each avatar's perceived race and gender.
Through our validation study, we also advance knowledge pertaining to the
perception of an avatar's race. In particular, we found that avatars of some
races were more accurately identified by participants of the same race. | Tiffany D. Do, Steve Zelenty, Mar Gonzalez-Franco, Ryan P. McMahan | 2023-09-19T19:57:03Z | http://arxiv.org/abs/2309.10902v2 | # VALID: A perceptually validated Virtual Avatar Library for Inclusion and Diversity
###### Abstract.
As consumer adoption of immersive technologies grows, virtual avatars will play a prominent role in the future of social computing. However, as people begin to interact more frequently through virtual avatars, it is important to ensure that the research community has validated tools to evaluate the effects and consequences of such technologies. We present the first iteration of a new, freely available 3D avatar library called the _Virtual Avatar Library for Inclusion and Diversity_ (_VALID_), which includes 210 fully rigged avatars with a focus on advancing racial diversity and inclusion. We present a detailed process for creating, iterating, and validating avatars of diversity. Through a large online study (\(n=132\)) with participants from 33 countries, we provide statistically validated labels for each avatar's perceived race and gender. Through our validation study, we also advance knowledge pertaining to the perception of an avatar's race. In particular, we found that avatars of some races were more accurately identified by participants of the same race.
Regarding racial and ethnic categorization, VALID includes seven races as recommended by the U.S. Census Bureau report (Census, 2020) (which differs from the 2020 U.S. Census): American Indian or Native Alaskan (AIAN)3, Asian, Black or African American (Black), Hispanic, Latino, or Spanish (Hispanic), Middle Eastern or North African (MENA), Native Hawaiian or Pacific Islander (NHPI), and White.
Footnote 3: We use racial abbreviations as defined in the U.S. Census Bureau report.
VALID includes 210 fully rigged virtual avatars designed to advance diversity and inclusion. We iteratively created 42 base avatars (7 target races \(\times\) 2 genders \(\times\) 3 individuals) using a process that combined data-driven average facial features with extensive collaboration with representative stakeholders from each racial group. To address the longstanding issue of the lack of diversity among virtual avatar designers and to empower diverse voices (Brockett et al., 2019), we adopted a participatory design method. This approach involved actively engaging individuals (\(n=22\)) from diverse backgrounds, particularly different racial and ethnic identities, in the design process. By including these individuals as active participants, we aimed to ensure that their perspectives, experiences, and needs were considered and incorporated into the design of the avatars.
Once the avatars were created, we sought to evaluate their perception on a global scale. We then conducted a large online study (\(n=132\)) with participants from 33 countries, self-identifying as one of the seven represented races, to determine whether the race and gender of each avatar are recognizable, and therefore validated. We found that all Asian, Black, and White avatars were universally identified as their modeled race by all participants, while our AIAN, Hispanic, and MENA avatars were typically only identified by participants of the same race, indicating that participant race can bias perceptions of a virtual avatar's race. We have since modeled the 42 base avatars in five different outfits (casual, business, medical, military, and utility), yielding a total of 210 fully rigged avatars.
To foster diversity and inclusivity in virtual avatar research, we are making all of the avatars in our library freely available to the community as open source models. In addition to the avatars, we are also providing statistically validated labels for the race and gender of all 42 base avatars. Our models are available in FBX format, are compatible with previous libraries like Rocketbox (Rockett et al., 2019), and can be easily integrated into most game engines such as Unity and Unreal. Additionally, the avatars come equipped with facial blend shapes to enable researchers and developers to easily create dynamic facial expressions and lip-sync animations. All avatars, labels, and metadata can be found at our GitHub repository: [https://github.com/xrtlab/Validated-Avatar-Library-for-Inclusion-and-Diversity-VALID](https://github.com/xrtlab/Validated-Avatar-Library-for-Inclusion-and-Diversity-VALID).
This paper makes three primary contributions:
1. We provide 210 openly available, fully rigged, and perceptually validated avatars for the research community, with a focus on advancing diversity and inclusion.
2. Our diversity-represented user study sheds new light on the ways in which people's own racial identity can affect their perceptions of a virtual avatar's race. In our repository, we also include the agreement rates of all avatars, disaggregated by every participant race, which offers valuable insights into how individuals from different racial backgrounds perceive our avatars.
3. We describe a comprehensive process for creating, iterating, and validating a library of diverse virtual avatars. Our approach involved close collaboration with stakeholders and a commitment to transparency and rigor. This could serve as a model for other researchers seeking to create more inclusive and representative virtual experiences.
## 2. Related Work
In this section, we describe how virtual avatars are used within current research in order to highlight the need for diverse avatars. We conclude the section with a discussion on currently available resources used for virtual avatars and virtual agents.
### Effect of Avatar Race
Virtual avatars are widely used in research simulations such as training, education, and social psychology. The race of a virtual avatar is a crucial factor that can affect the outcomes of these studies. For example, research has shown that underrepresented students often prefer virtual instructors who share their ethnicity (Brockett et al., 2019; Brockett et al., 2019). Similarly, studies have suggested that designing a virtual teacher of the same race as inner-city youth can have a positive influence on them (Brockett et al., 2019), while a culturally relevant virtual instructor, such as an African-American instructor for African-American children, can improve academic achievement (Kirch et al., 2019).
The design of virtual avatars is especially important for minority or marginalized participants. Kim and Lim (Kim and Lim, 2019) reported that minority students who feel unsupported in traditional classrooms develop more positive attitudes towards avatar-based learning. In addition, children with autism spectrum disorder treat virtual avatars as real social partners (Brockett et al., 2019; Brockett et al., 2019). Therefore, to better meet the needs of all individuals participating in such studies, it is important for researchers to have access to diverse avatars that participants can comfortably interact with. Diversity in virtual avatars is important not only for improving representation, but also for enhancing the effectiveness of simulations. Halan et al. (Halan et al., 2019) found that medical students who trained with virtual patients of a particular race demonstrated increased empathy towards real patients of that race. Similarly, Bickmore et al. (Bickmore et al., 2019) showed that interacting with a minority virtual avatar reduced racial biases in job hiring simulations.
These findings highlight the importance of diverse and inclusive virtual avatars in research simulations and emphasize the need for more comprehensive representation of different races. Access to a wide range of validated avatars through VALID will help to create more inclusive and representative simulations, and enable researchers to investigate the impact of avatar race or gender on participants' experiences. This will help improve the inclusivity of simulations and contribute towards addressing issues of bias.
### Implicit Racial Bias and Virtual Avatars
Avatars are becoming increasingly important in immersive applications, particularly in the realm of VR, where they are becoming ubiquitous. Studies
have demonstrated that embodying a darker-skinned avatar in front of a virtual mirror can reduce implicit racial biases (Safra et al., 2016; Safra et al., 2017; Safra et al., 2018; Safra et al., 2019), which are unconscious biases that can lead to discriminatory behavior (Safra et al., 2018). For instance, Salmanowitz et al. (Salmanowitz et al., 2019) found that a VR participant's implicit racial bias affects their willingness to convict a darker-skinned suspect based on inconclusive evidence. Similarly, Peck et al. (Peck et al., 2019) found that each participant's implicit racial bias was related to their nuanced head and hand motions in a firearm simulation. These foundational studies provide compelling evidence that embodying an avatar of a different race can affect implicit biases and further emphasize the need for diverse avatar resources.
Our study examines how participants perceive the race of diverse virtual avatars. While prior work has shown that a virtual avatar's race affects user interactions (e.g., (Brockett, 2017; Safra et al., 2018; Safra et al., 2019)), little research has been conducted on how individuals _actively perceive_ the race of virtual avatars. Setoh et al. (Setoh et al., 2019) note that racial identification can predict implicit racial bias, making it crucial to understand how people perceive the race of virtual avatars to further investigate these effects.
### Own-Race Bias
Own-race bias, also known as the "other-race effect," refers to the phenomenon in which individuals process the faces of their own race differently from those of other races (Safra et al., 2016; Safra et al., 2018; Safra et al., 2019; Safra et al., 2019). Studies have suggested that this bias can influence the way individuals categorize race. For example, MacLin and Malpass (MacLin and Malpass, 2018) found that Hispanic participants were more likely to categorize Hispanic faces as fitting their racial category than Black faces, and Blascovich et al. (Blascovich et al., 2018) observed that participants who strongly identify with their in-group are more accurate in identifying in-group members.
Although own-race bias has not yet been studied in the context of 3D virtual avatars, Saneyoshi et al. (Sanevoshi et al., 2019) recently discovered that it extends to the uncanny valley effect (Safra et al., 2019) for 2D computer-generated faces. Specifically, they found that Asian and European participants rated distorted faces of their own race as more unpleasant than those of other races. Building on this research, we extended the study of own-race bias to 3D virtual avatars and focused on race categorization rather than perceived pleasantness. Our study included avatars and participants from seven different races, providing insights into how a diverse user population may interact within equally diverse virtual worlds.
### Virtual Avatar Resources
There are numerous resources for creating virtual avatars. Artists can use 3D modeling tools, such as Autodesk 3ds Max4, Autodesk Maya5, Blender6, or Zbrush7 to manually model, texture, and rig virtual avatars. However, such work requires expertise in 3D modeling and character design, and is often a tedious process (Blender, 2017). On the other hand, parametric models, including freely available tools like MakeHuman8 and Autodesk Character Generator9, as well as commercially available ones such as Daz3D10, Poser11, and Reallusion Character Creator12, enable users to generate virtual avatars from predefined parameters, thereby significantly expediting the avatar generation process. Nonetheless, using these tools still requires learning a new program and time to customize each model, despite the absence of the artistic expertise needed for manual tools.
Footnote 1: [https://www.autodesk.com/products/3ds-max/overview](https://www.autodesk.com/products/3ds-max/overview)
Footnote 2: [https://www.autodesk.com/products/maya/overview](https://www.autodesk.com/products/maya/overview)
Footnote 3: [https://www.blender.org](https://www.blender.org)
Footnote 4: [https://www.maxon.net/en/zhrush](https://www.maxon.net/en/zhrush)
Another alternative to traditional modeling is to use scanning technologies, which can capture 3D models of real people. For instance, Shapiro et al. (Shapiro et al., 2018) and Waltemate et al. (Waltemate et al., 2018) used 3D cameras and photogrammetry, respectively, to capture 3D models of their users. Singular Inversions FaceGen Modeller13 has also been employed to generate 3D faces from user photos and then apply them to a general 3D avatar body (Brockett, 2017; Safra et al., 2018). However, scanning approaches require the ability to physically scan the user, limiting their use for certain applications, particularly remote ones.
Footnote 13: [http://www.makhumancommunity.org/](http://www.makhumancommunity.org/)
Most closely related to our goal of providing a free and open library of ready-to-use avatars is the Microsoft Rocketbox library (Blender, 2017) and its accompanying HeadBox (Safra et al., 2018) and MoveBox (Brockett, 2017) toolkits. Rocketbox provides a free set of 111 fully rigged adult avatars of various races and outfits. However, it falls short in terms of representation by not including any avatars of AIAN or NHPI descent. Additionally, the library offers only a limited number of Asian, Hispanic, and MENA avatars, excluding minority representations for some professions (e.g., Rocketbox does not include any Asian medical avatars). Furthermore, none of the available avatar libraries have been validated by user perception studies to ensure their efficacy and inclusivity. Therefore, our VALID project aims to fill this gap by providing a free and validated library of diverse avatars.
## 3. Avatar Creation Procedure
This section outlines our iterative process for developing the VALID library, which includes 42 base avatars. We began by using data-driven averaged facial features to create our initial models. We then conducted interviews with representative volunteers to iteratively refine and modify the avatars based on their feedback.
### Initial Modeling
To ensure a broad diversity of people were represented in our library, we initially created 42 base avatars (7 target races \(\times\) 2 genders \(\times\) 3 individuals ) modeled after the seven racial groups recommended by the 2015 National Content Test Race and Ethnicity Analysis Report (Safra et al., 2018): AIAN, Asian, Black, Hispanic, MENA, NHPI, and White. We created 3 male and 3 female individuals for each race, resulting in a total of 6 individuals per race.
Preliminary models were based on averaged facial features of multiple photos selected from the 10k US Adult Faces Database (Brockett, 2017) and stock photos from Google for races missing from the database (e.g., AIAN, MENA, and NHPI). These photos were used as input to a face-averaging algorithm (Safra et al., 2018), which extracted average facial features for each race and gender pair. Using these averages as
a reference, a 3D artist recreated the average faces for each race and gender pair using Autodesk Character Generator (due to its generous licensing and right to freely edit and distribute generated models14) and Blender to make modifications not supported by Autodesk Character Generator (see Figure 1).
Footnote 14: [https://knowledge.autodesk.com/support/character-generator/learn-explore/cas/sidcarticles/sidcarticles/Character-Generator-Legal-Restrictions-and-Allowances-when-using-Character-Generator.html](https://knowledge.autodesk.com/support/character-generator/learn-explore/cas/sidcarticles/sidcarticles/Character-Generator-Legal-Restrictions-and-Allowances-when-using-Character-Generator.html)
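As a highly simplified illustration of the face-averaging step, corresponding facial landmarks can be averaged across the selected photos; this sketch is not WebMorph's actual algorithm, which uses many more landmarks and also warps and blends image textures.

```go
// Simplified illustration of face averaging: each face is a list of 2D
// landmark positions (e.g., eye corners, nose tip), and the "average face"
// is the per-landmark mean across all input faces.
package main

import "fmt"

type point struct{ x, y float64 }

func averageFace(faces [][]point) []point {
	n := len(faces)
	avg := make([]point, len(faces[0]))
	for _, face := range faces {
		for i, p := range face {
			avg[i].x += p.x / float64(n)
			avg[i].y += p.y / float64(n)
		}
	}
	return avg
}

func main() {
	// Toy example: two faces with three landmarks each.
	faces := [][]point{
		{{100, 120}, {160, 118}, {130, 170}},
		{{104, 126}, {156, 122}, {128, 176}},
	}
	fmt.Println(averageFace(faces))
}
```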
### Iterative Improvements through Representative Interviews
After the preliminary avatars were created based on the facial averages, we worked closely with 2 to 4 volunteers of each represented race (see Table 1) to adjust the avatars through a series of Zoom meetings. This process ensured that all avatars were respectful and reduced the likelihood of harmful or stereotypical representations. Volunteers self identified their race and were recruited from university cultural clubs (e.g., Asian Student Association, Latinx Student Association), community organizations (e.g., Pacific Islanders Center), and email lists. We iteratively asked these volunteers for feedback on all avatars representing their race, showing them the model from three perspectives (see Figure 2). Volunteers were specifically asked to identify accurate features and suggest changes to be made. Once the changes were completed based on the feedback, we presented the updated avatars to the volunteers. This process was repeated until they approved the appearance of the avatars. For example, volunteers requested changes to facial features, such as:
* _"Many Native women I know have a softer face and jawline [than this avatar]."_ -AIAN Volunteer 3
* _"The nose bridge is too high and looks too European. Asians mostly have low nose bridges."_ -Asian Volunteer 2
* _"Middle Eastern women usually have wider, almond-shaped eyes."_ -MENA Volunteer 2
* _"The nose [for this avatar] should be a little bit thicker, less pointy, and more round."_ -NHPI Volunteer 1
Additionally, we modified hairstyles according to feedback:
* _[These hairstyles] look straighter and more Eurocentric. So I would choose [these facial features] and then do a natural [hair] texture."_ -Black Volunteer 1
* _"Usually the men have curly hair or their hair is cut short on the sides with the top showing."_ -NHPI Volunteer 1
Once the avatars were approved by their corresponding volunteer representatives, we conducted an online study to validate the race and gender of each avatar based on user perceptions.
## 4. Avatar Validation Study
We conducted an online, worldwide user study to determine whether the target race and gender of each avatar is recognizable and, therefore, validated. Participants were recruited from the online Prolific marketplace15, which is similar to Amazon Mechanical Turk. Prior research shows that Prolific has a pool of more diverse and honest participants (Sundhi et al., 2018) and has more transparency than Mechanical Turk (Sundhi et al., 2018). Since diversity was a core theme of our research, we chose Prolific to ensure that our participants would be diverse.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Race** & **Gender** & **Country** \\ \hline
**AIAN** & 2M, 1F & USA (Native American) (3) \\
**Asian** & 2M, 2F & China (1), USA (Chinese-American) (1), USA (Vietnamese-American) (2) \\
**Black** & 1M, 2F & USA (African-American) (3) \\
**Hispanic** & 2M, 1F & Mexico (1), USA (Mexican-American) (2) \\
**MENA** & 1M, 2F & Iran (2), Saudi Arabia (1) \\
**NHPI** & 1M, 2F & Samoa (2), USA (Native Hawaiian) (1) \\
**White** & 2M, 1F & USA (3) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Breakdown of our volunteer representatives by race, gender (male, female, or non-binary), and country.
Figure 1. An example of the creation of a 3D avatar using our methodology. 1) We select 4-7 faces from a database (Ashman et al., 2018) or stock photos. 2) We calculate the average face using WebMorph (Krishnan et al., 2018). 3) A 3D artist recreates the average face using modeling software. 4) The models are improved iteratively through recurrent consultation with representative volunteers.
### Procedure
The following procedure was reviewed and approved by our university Institutional Review Board (IRB). The study consisted of one online Qualtrics survey that lasted an average of 14 minutes. Each participant first completed a background survey that captured their self-identified demographics, including race, gender, and education. Afterwards, they were asked to familiarize themselves with the racial terms as defined by the U.S. Census Bureau research (Stenbury et al., 2015). Participants were then asked to categorize the 42 avatars by their perceived race and gender. Participants were shown only one avatar at a time and the order was randomized.
For each of the avatars, participants were shown three perspectives: a 45\({}^{\circ}\) left headshot, a direct or 0\({}^{\circ}\) headshot, and a 45\({}^{\circ}\) right headshot (see Figure 2). Avatars were shown from the shoulders up and were dressed in a plain gray shirt. The images were rendered in Unity using the standard diffuse shader and renderer. The avatars were illuminated by a soft white (#FFFF5) directional light with an intensity of 1.0, and light gray (#7F7FF) was used for the background. Participants were asked to select all races that each avatar could represent: "American Indian or Alaskan Native", "Asian", "Black or African American", "Hispanic, Latino, or Spanish", "Middle Eastern or North African", "Native Hawaiian or Pacific Islander", "White", or "Other". "Other" included an optional text box if a participant wanted to be specific. We allowed participants to select multiple categories according to the U.S. Census Bureau's recommendations for surveying race (Stenbury et al., 2015). For gender, participants were able to select "Male", "Female", or "Non-binary". Participants were paid $5.00 via Prolific for completing the study.
### Participants
A total of 132 participants (65 male, 63 female, 4 non-binary) from 33 different countries were recruited to take part in the study. We aimed to ensure a diverse representation of perspectives by balancing participants by race and gender. Table 2 provides a breakdown of our participants by race, gender, and country. Despite multiple recruitment attempts, including targeted solicitations via Prolific, we had difficulty recruiting NHPI participants. It is important to note that volunteers who had previously assisted with modeling the avatars were excluded from the validation study so that their prior involvement could not bias the results.
### Data Analysis and Labeling Approach
To validate the racial identification of our virtual avatars, we used Cochran's Q test (Cochran, 2016), which allowed us to analyze any significant differences among the selected race categories. This approach was necessary since our survey format allowed participants to select more than one race category for each avatar, following the U.S. Census Bureau's research recommendations (Stenbury et al., 2015). Since the Chi-squared goodness of fit test requires mutually exclusive categories, we were unable to use it in our analysis. Furthermore, since our data was dichotomous, a repeated-measures analysis of variance (ANOVA) was not appropriate. Therefore, Cochran's Q test was the most appropriate statistical analysis method for our survey data.
We used a rigorous statistical approach to assign race and gender labels to each avatar. First, we conducted the Cochran's Q test across all participants (\(n=132\)) at a 95% confidence level to identify significant differences in the participants' responses. If the test indicated significant differences, we performed pairwise comparisons between each race using Dunn's test to determine which races were significantly different.
For each avatar, we assigned a race label if the race was selected by the majority of participants (i.e., over 50% of participants selected it), was selected significantly more often than the other race choices, and was not selected significantly less often than any other race. This approach resulted in a single race label for most avatars, but some avatars were assigned multiple race labels when more than one race was selected significantly more often than all the others. If no race met these criteria, we categorized the avatar as "Ambiguous". We followed a similar procedure for assigning gender labels.
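To make the labeling rule concrete, the following is a minimal sketch of how it could be applied to a single avatar. It assumes the responses are stored as a participant-by-race matrix of 0/1 selections (a layout introduced here only for illustration), uses Cochran's Q from statsmodels for the omnibus test, and substitutes pairwise McNemar tests for the Dunn's test used in our analysis, since the former is readily available for paired dichotomous data.

```python
# Sketch of the label-assignment rule described above (not the authors' code).
# `votes` is a 0/1 matrix of shape (n_participants, n_races); votes[p, r] = 1
# if participant p selected race r for this avatar.
import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q, mcnemar

RACES = ["AIAN", "Asian", "Black", "Hispanic", "MENA", "NHPI", "White"]

def label_avatar(votes: np.ndarray, alpha: float = 0.05) -> list:
    rates = votes.mean(axis=0)

    # Omnibus test: do selection rates differ across the race categories?
    if cochrans_q(votes, return_object=True).pvalue >= alpha:
        return ["Ambiguous"]

    def selected_more(i, j):
        # Paired comparison of two dichotomous columns (stand-in for Dunn's test).
        table = np.zeros((2, 2), dtype=int)
        for a, b in zip(votes[:, i], votes[:, j]):
            table[int(a), int(b)] += 1
        return rates[i] > rates[j] and mcnemar(table, exact=True).pvalue < alpha

    labels = []
    for i, race in enumerate(RACES):
        others = [j for j in range(len(RACES)) if j != i]
        majority = rates[i] > 0.5                                  # selected by >50%
        dominates_some = any(selected_more(i, j) for j in others)  # significantly more than others
        not_dominated = not any(selected_more(j, i) for j in others)
        if majority and dominates_some and not_dominated:
            labels.append(race)
    return labels if labels else ["Ambiguous"]
```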
To account for the possibility that the race of the participant might influence their perception of virtual race, we also assigned labels based on same-race participants. This involved using the same procedure for assigning labels as described above, except based only on the selections of participants who identified as the same race as the avatar. This also allows future researchers to have the flexibility to use the labels from all study participants for studies focused on individuals from diverse racial backgrounds or to use the labels from participants of the same race for studies targeting specific racial groups.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Race** & **Gender** & **Country** \\ \hline
**AIAN** & 9M, 10F, 1NB & U.S. (14), Canada (3), Chile (1), Mexico (1), Cambodia (1) \\
**Asian** & 10M, 10F & U.K. (5), Canada (3), South Africa (3), India (2), Indonesia (2), China (1), France (1), Germany (1), Italy (1), Malaysia (1) \\
**Black** & 10M, 9F, 1NB & South Africa (16), Nigeria (2), Swaziland (1), U.K. (1) \\ \hline
**Hispanic** & 10M, 9F, 1NB & Mexico (15), Chile (3), Portugal (2) \\
**MENA** & 10M, 10F & Israel (9), Lebanon (3), Jordan (2), Bahrain (1), Egypt (1), Iran (1), Iraq (1) \\
**NHPI** & 7M, 5F & New Zealand (Maori) (8), Samoa (2), Fiji (1), U.S. (1) \\
**White** & 9M, 10F, 1NB & Portugal (5), Italy (5), Poland (3), Mexico (2), Belgium (1), Greece (1), Ireland (1), U.S. (1) \\ \hline \hline \end{tabular}
\end{table}
Table 2. Breakdown of our validation study’s participants by race, gender (male, female, or non-binary), and country.
Figure 2. An example of how each avatar was presented to participants during our validation study.
## 5. Results
### Validated Avatar Labels
Table 3 summarizes our results and labels for all 42 base avatars across all participants and for same-race participants.
#### 5.1.1. Race and Gender Labels
Asian, Black, and White avatars were correctly identified as their intended race across all participants, while most of the remaining avatars were accurately identified by same-race participants (see Table 3 for all and same-race agreement rates). Thus, identification rates differed based on the race of the participants, highlighting the potential impact of own-race bias on the perception of virtual avatars. Notably, there were no significant differences in gender identification rates based on participant race, indicating that all avatars were correctly perceived as their intended gender by all participants, regardless of their racial background.
#### 5.1.2. Naming Convention
If an avatar was identified as its intended race by corresponding same-race participants, we named it after that race. For instance, the avatar Hispanic_M_2 was labeled as White by all participants. However, our Hispanic participants perceived it as solely Hispanic. Hence, we left the original name. However, if an avatar was labeled as "Ambiguous" or as a different race by same-race participants, we added an X at the beginning of its name to indicate that it was not validated. Avatars were also labeled by their identified gender ("M" or "F").
### Other-Race vs. Same-Race Perception
To further examine how participant race affected perception of virtual avatar race, we additionally analyzed the data by separating same-race and other-race agreement rates. In effect, we separated the selections of the participants who were the same race as the avatar modeled and those who were not.
#### 5.2.1. Difference in Agreement Rates
Figure 3 displays the difference in agreement rates between same-race and other-race participants, and shows that several avatars were strongly identified by both groups. In particular, all Asian, Black, and White avatars were perceived as their intended race with high agreement rates by both same-race and other-race participants (over 90% agreement for all but one). However, some avatars were only identified by participants of the same race as the avatar. Our analysis of the agreement rates for different racial groups revealed clear trends: non-Hispanic participants had an average agreement rate of 54.5% for Hispanic avatars, while Hispanic participants had a much higher average agreement of 75.0%. Similar patterns were observed for AIAN (57.8% other-race, 75.0% same-race) and MENA (40.4% other-race, 68.0% same-race) avatars.
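As a concrete illustration of this split, the following is a minimal sketch of the same-race versus other-race agreement computation. It assumes a long-format response table with illustrative column names that are not taken from our study materials.

```python
# Sketch of the same-race vs. other-race agreement-rate computation.
# Assumed columns: participant_race, avatar_race (the race the avatar was
# modeled for), and selected (1 if the participant chose the avatar's
# intended race, else 0). These names are placeholders for illustration.
import pandas as pd

def agreement_rates(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["group"] = (df["participant_race"] == df["avatar_race"]).map(
        {True: "same-race", False: "other-race"}
    )
    # The mean of the 0/1 indicator is the agreement rate per avatar race and group.
    return (
        df.groupby(["avatar_race", "group"])["selected"]
        .mean()
        .unstack("group")
    )
```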
#### 5.2.2. Perceived Race Clusters
To gain deeper insights into how participants perceived the avatars' races, we employed Principal Component Analysis (PCA) to reduce the agreement rates of each of the 42 base avatars down to two dimensions. Next, we performed K-means clustering (Srivastava et al., 2017) on the resulting two-dimensional data to group the avatars based on their perceived race. We optimized the number of clusters using the elbow method and distortion scores (Brandt et al., 2016). We applied this technique to both other-race and same-race agreement rates to determine whether there were any differences in the clustering based on participant race. By visualizing the clusters, we aimed to better understand the differences in how participants perceived the avatars' races.
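The following is a minimal sketch of this clustering step, assuming `rates` is a 42-by-7 array of per-avatar agreement rates; the candidate cluster counts, final cluster count, and random seed are illustrative assumptions rather than the values used in our analysis.

```python
# Sketch of the perceived-race clustering: PCA to 2D, then K-means, with the
# cluster count chosen by inspecting the distortion (inertia) curve.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_avatars(rates: np.ndarray, k_candidates=range(2, 10), k_final=4):
    coords = PCA(n_components=2).fit_transform(rates)             # 42 x 2 embedding
    # Elbow curve: inertia (distortion) for each candidate number of clusters.
    inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0)
                   .fit(coords).inertia_ for k in k_candidates}
    # k_final is an assumed choice; in practice it is read off the elbow curve.
    labels = KMeans(n_clusters=k_final, n_init=10, random_state=0).fit_predict(coords)
    return coords, inertias, labels
```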
Figure 4 shows that Asian, Black, and White avatars were perceived consistently by all participants, with clearly defined clusters. However, there was more confusion in perceiving AIAN, Hispanic, MENA, and NHPI avatars, which clustered closer together. Same-race participants perceived these avatars more accurately, with less overlap and more separation between the clusters. For example, the Hispanic and MENA avatars were in separate clusters for same-race participants, except for one avatar (Hispanic_F_2). On the other hand, the Hispanic and MENA avatars were entirely clustered together for other-race participants.
## 6. Discussion
In this section, we discuss the validation of our avatars. Specifically, we examine the extent to which each avatar was correctly identified as its intended race and the variability in identification across different participant groups. Additionally, we discuss the implications of our results for virtual avatar research, highlighting the importance of considering the potential impact of own-race bias on avatar race perception. Finally, we describe the potential future impact of our avatar library in the community, including how it can be used to promote diversity and inclusion.
### Race Identification
#### 6.1.1. Universally Identified Avatars
We found that our Asian, Black, and White avatars were recognized by all participants with high agreement rates. This suggests that these avatars can be a valuable tool for researchers seeking to create virtual humans that can be easily identified by individuals from different racial backgrounds.
Our results may be due to perceptual expertise or familiarity with other-race faces, as proposed by Civile et al. (Civile et al., 2017). We hypothesize that this familiarity could be explained by the prevalence of these racial groups in global media and pop culture. For example, White cast members were the most represented in popular Hollywood movies over the last decade, followed by Black cast members (Civile et al., 2017). Since Hollywood movies have a dominant share in the global film industry (Civile et al., 2017), people may be more familiar with characters that are prevalent in these films. Additionally, East Asian media culture has become widely popular worldwide over the past few decades (Srivastava et al., 2017; Srivastava et al., 2017). Phenomena like "The Hallyu Wave" and "Cool Japan" (Civile et al., 2017) have enabled East Asian films, dramas, and pop music to gain a global following. As people may often encounter these racial groups in media, this familiarity may have facilitated their recognition of these avatars.
#### 6.1.2. Same-Race Identified Avatars
As expected, some avatars were only identified by participants of the same race as the avatar, consistent with the own-race bias effect. For example, as seen in Table 3, the Hispanic avatars received mixed ratings of White and Hispanic across all participants, but most were perceived as solely Hispanic by Hispanic-only participants. Similarly, only one MENA avatar was perceived as MENA by all participants, while five were perceived as MENA by MENA-only participants. These results suggest that participants' own-race bias, a well-known phenomenon in psychology, may also affect their perception of virtual avatars. The
findings point to the importance of considering participants' race when using virtual avatars in research or applications that require accurate representation of different racial groups.
#### 6.1.3. Ambiguous Avatars
Several avatars in our library were perceived ambiguously by all participants and only same-race participants, and therefore labeled as such (see Table 3 for details). Identifying the reason for these avatars' lack of clear identification is not straightforward, and multiple factors could be at play. For instance, the two ambiguous AIAN avatars were the only ones with short hairstyles, which may have impacted their identification as AIAN. Long hair carries cultural and spiritual significance in many AIAN tribes (Krishnan et al., 2017), and some participants may have perceived the avatars as non-AIAN as a result, even among AIAN participants.
The validation of our NHPI avatars was limited, possibly due to the low number of NHPI participants (\(n=12\)) in our study, despite our targeted recruitment efforts. As a consequence, most of the NHPI avatars were not validated by NHPI participants, including the lack of validation for any female NHPI avatars. Another potential reason for this lack of validation is that the majority of our NHPI
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline
**Avatar** & **Race (All)** & **Agreement** & **Race (Same-Race)** & **Agreement** & **Gender (All)** & **Agreement** \\ \hline
[MISSING_PAGE_POST]
**Hispanic\_F\_1** & Hispanic & 0.64 & Hispanic & 0.80 & Female & 0.95 \\
**Hispanic\_F\_2** & White & 0.70 & Hispanic/White & 0.55/0.60 & Female & 0.95 \\
**Hispanic\_F\_3** & Hispanic/White & 0.59/0.52 & Hispanic & 0.75 & Female & 0.93 \\
**MENA\_M\_1** & White & 0.56 & MENA & 0.70 & Male & 0.99 \\
**MENA\_M\_2** & White & 0.64 & MENA/White & 0.65/0.60 & Male & 1.00 \\
**MENA\_M\_3** & MENA/White & 0.55/0.65 & MENA/White & 0.75/0.55 & Male & 0.98 \\
**MENA\_F\_1** & White & 0.58 & MENA/White & 0.70/0.60 & Female & 0.98 \\
**MENA\_F\_2** & White & 0.60 & MENA & 0.60 & Female & 0.98 \\
**NHPI\_M\_1** & NHPI & 0.52 & NHPI & 0.58 & Male & 0.98 \\
**NHPI\_M\_2** & Hispanic & 0.65 & NHPI & 0.67 & Male & 1.00 \\
**White\_M\_1** & White & 0.96 & White & 0.95 & Male & 0.99 \\
**White\_M\_2** & White & 0.98 & White & 0.95 & Male & 1.00 \\
**White\_M\_3** & White & 0.93 & White & 0.90 & Male & 0.99 \\
**White\_F\_1** & White & 0.94 & White & 0.95 & Female & 0.97 \\
**White\_F\_2** & White & 0.96 & White & 0.95 & Female & 0.98 \\
**White\_F\_3** & White & 0.94 & White & 0.95 & Female & 0.99 \\
**X\_AIAN\_M\_1** & Hispanic & 0.64 & Hispanic & 0.75 & Male & 0.99 \\
**X\_AIAN\_F\_1** & Hispanic & 0.54 & Hispanic & 0.45 & Female & 0.86 \\
**X\_MENA\_F\_1** & Ambiguous & N/A & Ambiguous & N/A & Female & 0.98 \\
**X\_NHPI\_M\_1** & Hispanic & 0.55 & Asian & 0.58 & Male & 0.98 \\
**X\_NHPI\_F\_1** & Hispanic & 0.55 & Ambiguous & N/A & Female & 0.92 \\
**X\_NHPI\_F\_2** & Hispanic & 0.58 & Ambiguous & N/A & Female & 0.95 \\
**X\_NHPI\_F\_3** & NHPI & 0.52 & Ambiguous & N/A & Female & 0.92 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Assigned labels for all 42 base avatars. “All” indicates that the label was identified by all 132 participants, while “Same-Race” only includes the data of participants who identify as the race that the avatar was modeled for. Agreement labels were calculated as the percentage of participants who perceived an avatar to represent a race or gender.
participants identified themselves as New Zealand Maori, whereas our avatars were developed with the help of Samoan and Native Hawaiian volunteer representatives. Therefore, it is possible that our NHPI avatars are representative of some NHPI cultures, but not New Zealand Maori. In future studies, expanding recruitment efforts for both interview volunteers and study participants will be crucial, despite the challenges involved in doing so. For example, future studies may need to compensate NHPI participants more than participants of other races.
### Implications for Virtual Avatars
Our study provides valuable insights for virtual avatar applications and research. Our findings indicate that human behavior in race categorization can apply to virtual avatars, which has notable implications for interactions in virtual experiences. Kawakami et al. (Kawakami et al., 2018) suggest that in-group and out-group categorization can lead to stereotyping, social judgments, and group-based evaluations. Therefore, designers and developers should be aware of this and take necessary steps to mitigate unintended consequences in virtual experiences. For example, regulating codes of conduct (Kawakami et al., 2018) can help to improve interracial interactions in VR.
Figure 3. Confusion matrix heatmap of agreement rates for the 42 base avatars, separated by other-race participants and same-race participants (i.e., participants of a different or the same race as the avatar). Agreement rates were calculated as the percentage of participants who perceived an avatar to represent a race or gender.
Interestingly, our study also replicated a nuanced finding from more recent psychology research on the perception of ambiguous avatars (Steintein et al., 2017). As seen in Table 3, most of the misidentified avatars were identified as Hispanic by all participants. Similarly, Nicolas et al. (Nicolas et al., 2017) recently found that participants classify racially ambiguous photos as Hispanic or MENA, regardless of their parent ethnicities. We believe that this effect extended to our virtual avatars.
### An Open Library of Validated Avatars
As a contribution to the research community, we are providing open access to our virtual avatar library, which includes all 210 fully rigged avatars, along with validated labels for each avatar's race and gender. Our library features avatars of seven different races, providing a diverse selection for researchers to use in their studies. The validated labels can facilitate research on the impact of avatar race, and researchers can choose to use the labels for studies aimed at individuals from different racial backgrounds or same-race labels for specific study populations.
The _Virtual Avatar Library for Inclusion and Diversity_ (_VALID_) provides researchers and developers with a diverse set of fully rigged avatars suitable for various scenarios such as casual, business, medical, military, and utility. Each avatar comes with 65 facial blend shapes, enabling dynamic facial expressions (see Figure 5). The library is readily available for download and can be used in popular game engines like Unity or Unreal. Although this is the first iteration of the library, we plan to update it by adding more professions and outfits soon. In addition, the library can be used for a wide range of research purposes, including social psychology simulations and educational applications.
Figure 4. Clustered scatterplots of each avatar's relation to one another based on Principal Component Analysis and K-means clustering for other-race and same-race participant identifications. The Voronoi analysis shows the borders of the clusters where each category was assigned. Each avatar is color coded by its validated label.
Figure 5. Images of the skeleton and facial blend shapes included with our avatars.
### Limitations and Future Work
We recognize that our VALID avatar library is only a small step towards achieving greater diversity and inclusion in avatar resources. We acknowledge that the representation of each demographic is limited and plan to expand the diversity within each group by creating new avatars. For example, our Asian avatars are modeled after East Asians, but we plan to expand VALID to include South Asian and Southeast Asian avatars as well. Our Hispanic representatives have pointed out the need for more diverse Hispanic avatars, including varying skin tones to represent different Latin American populations, such as Mexican and Cuban. Additionally, our NHPI representatives have suggested that the inclusion of tattoos, which hold cultural significance for some NHPI communities, could improve the identifiability of our NHPI avatars, in addition to improving our NHPI recruitment methods. Any future updates to the library will undergo the same rigorous creation, iteration, and validation process as the current avatars.
While our first iteration of the library focused on diversity in terms of race, we realize that the avatars mostly represent young and fit adults, which does not reflect all types of people. In the future, we plan to update the library with a diversity of body types that include different body mass index (BMI) representations and ages. Including avatars with different BMI representations is not only more inclusive, but can also be useful for studies targeting physical activity, food regulation, and therapy (Han et al., 2017). Likewise, we plan to include shaders and bump maps (Han et al., 2017) that can age any given avatar by creating realistic wrinkles and skin folds, further improving the diversity and inclusivity of VALID
Another limitation of the current work is that our library includes only male and female representations. In future updates, we plan to include non-binary and androgynous avatars. Currently, there are not many androgynous models that are freely available, yet they represent an important area of study. For example, previous studies found that androgynous avatars reduce gender bias and stereotypical assumptions in virtual agents (Han et al., 2017) and improve student attitudes (Han et al., 2017). Thus, we plan to include these avatars in a future update by following Nag et al.'s (Han et al., 2017) guidelines for creating androgynous virtual humans.
Our study, while diverse in terms of race and country, is not representative of everyone. We recruited participants through the online platform Prolific, which is known for its increased diversity compared to other crowdsourcing platforms such as Mechanical Turk. However, due to the online nature of the platform, we primarily recruited younger adults. It is possible that perceptions of our avatars may differ among other age groups, such as children or older adults. Therefore, it is important to broaden recruitment efforts by exploring alternative platforms and recruitment strategies that may be more effective in reaching a wider range of participants. Future studies could also consider conducting in-person studies or focus groups to gather additional insights into avatar perception.
## 7. Conclusion
We have introduced a new virtual avatar library comprised of 210 fully rigged avatars with diverse professions and outfits, available for free. Our library aims to promote diversity and inclusion by creating equitable representation of seven races across various professions. We designed 42 base avatars using data-driven facial averages and collaborated with volunteer representatives of each ethnicity. A large validation study involving participants from around the world was conducted to obtain validated labels and metadata for the perceived race and gender of each avatar. Additionally, we offer a comprehensive process for creating, iterating, and validating diverse avatars to aid other researchers in creating similarly validated avatars.
Our validation study revealed that the majority of avatars were accurately perceived as the race they were modeled for. However, we observed that some avatars, such as the Hispanic and MENA avatars, were only validated as such by participants who identified as Hispanic or MENA, respectively. This finding suggests that the perception of virtual avatars may be influenced by own-race bias or the other-race effect, as described in the psychology literature. Moving forward, we plan to expand the library to include additional races, professions, body types, age ranges, and gender representations to further improve diversity and inclusion.
|
2308.00186 | Learning Complex Motion Plans using Neural ODEs with Safety and
Stability Guarantees | We propose a Dynamical System (DS) approach to learn complex, possibly
periodic motion plans from kinesthetic demonstrations using Neural Ordinary
Differential Equations (NODE). To ensure reactivity and robustness to
disturbances, we propose a novel approach that selects a target point at each
time step for the robot to follow, by combining tools from control theory and
the target trajectory generated by the learned NODE. A correction term to the
NODE model is computed online by solving a quadratic program that guarantees
stability and safety using control Lyapunov functions and control barrier
functions, respectively. Our approach outperforms baseline DS learning
techniques on the LASA handwriting dataset and complex periodic trajectories.
It is also validated on the Franka Emika robot arm to produce stable motions
for wiping and stirring tasks that do not have a single attractor, while being
robust to perturbations and safe around humans and obstacles. | Farhad Nawaz, Tianyu Li, Nikolai Matni, Nadia Figueroa | 2023-07-31T22:50:14Z | http://arxiv.org/abs/2308.00186v3 | # Learning Complex Motion Plans using Neural ODEs with Safety and Stability Guarantees
###### Abstract
We propose a Dynamical System (DS) approach to learn complex, possibly periodic motion plans from kinesthetic demonstrations using Neural Ordinary Differential Equations (NODE). To ensure reactivity and robustness to disturbances, we propose a novel approach that selects a target point at each time step for the robot to follow, by combining tools from control theory and the target trajectory generated by the learned NODE. A correction term to the NODE model is computed online by solving a quadratic program that guarantees stability and safety using control Lyapunov functions and control barrier functions, respectively. Our approach outperforms baseline DS learning techniques on the LASA handwriting dataset and complex periodic trajectories. It is also validated on the Franka Emika robot arm to produce stable motions for wiping and stirring tasks that do not have a single attractor, while being robust to perturbations and safe around humans and obstacles.
## 1 Introduction
Learning from Demonstrations (LfD) is a framework that enables transfer of skills to robots from observations of desired tasks (Khansari-Zadeh and Billard, 2011; Ijspeert et al., 2013; Yang et al., 2022). Typically, the observations are robot trajectories that are demonstrated through kinesthetic teaching, passively guiding the robot through the nominal motion to avoid the correspondence problem (Akgun and Subramanian, 2011). In such a framework, it is essential to learn motion plans from as few demonstrations as possible, while still providing required robustness, safety, and reactivity in a dynamic environment. While there are multiple approaches to represent the motion, we focus on Dynamical Systems (DS) based formulation (Billard et al., 2022). DS based approaches have been shown to be particularly useful in Human-Robot Interaction (HRI) scenarios (Khansari-Zadeh and Billard, 2011; Figueroa and Billard, 2022; Khansari-Zadeh and Billard, 2014), where the robot inherently adapts to changes in the environment and can be compliant to human interactions, instead of following a stiff time-dependent reference motion that encodes the task.
**Problem Formulation:** A DS based motion plan for a robotic manipulator is defined in terms of the robot state variable \(x\in\mathbb{R}^{d}\), where \(x\) could be the robot's end-effector Cartesian state or joint state. The motion planning layer is formulated as an autonomous DS
\[\dot{x}=f(x), \tag{1}\]
where \(f(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) is a nonlinear continuous function. The training data is \(D\) demonstrations from kinesthetic teaching: \(\mathcal{D}:=\{x_{i}(t_{1}),x_{i}(t_{2}),\ldots,x_{i}(t_{T})\}_{i=1}^{D}\), where, \(x_{i}(t_{k})\) is the state of the robot at time \(t_{k}\), for the \(i^{th}\) demonstration. The discrete points in each demonstration are sampled uniformly at time \(\{t_{1},t_{2},\ldots,t_{T}\}\). We assume that the training data trajectories \(\mathcal{D}\) approximate an _unknown nominal target trajectory_\(z^{*}(t)\) that encodes the task of the robot such as wiping, stirring, scooping, etc. Our aim is to design a vector field \(f(\cdot)\)
using the demonstrations \(\mathcal{D}\) such that \(x(t)\) follows the target trajectory \(z^{*}(t)\). Previous work (Khansari-Zadeh and Billard, 2011; Figueroa and Billard, 2018) in the DS-based motion planning framework has considered convergence only to a single target. In this paper, we consider convergence to a trajectory \(z^{*}(t)\) that can represent more complex, e.g., highly-nonlinear and periodic, motions. Under nominal circumstances, i.e., in the absence of disturbances or obstacles, the target trajectory \(z^{*}(t)\) should be viewed as the reference for the low level controller to track. However, during deployment, the robot might not always follow the target trajectory because of tracking errors, disturbances, obstacles, etc. For example, there might be unanticipated perturbations during task execution which are generally not present while teaching a task to the robot. Consider the scenario in Fig. 1(a), where the target trajectory encodes a wiping task and the vector field is an unconstrained DS model (1) learned from demonstrations \(\mathcal{D}\). As shown in Fig. 1(a), if the robot is perturbed by a disturbance during deployment to a region where there is no training data, the learned model commands the robot to a spurious attractor. However, the desired behaviour is to continue tracking the target trajectory so that the robot wipes the desired space as shown in Fig. 1(b). Ensuring robustness to perturbations is critical for deploying robots in human-centric environments, as disturbances can arise due to obstacles unseen in demonstrations and intentional or adversarial disturbances caused by humans (Billard et al., 2022; Wang et al., 2022). This leads to our formal problem statement.
**Problem 1**: _Given a set of training data \(\mathcal{D}:=\{x_{i}(t_{1}),x_{i}(t_{2}),\ldots,x_{i}(t_{T})\}_{i=1}^{D}\), design a vector field \(f(\cdot)\) for the dynamical system (1), such that it generates safe and stable motion plans at deployment for scenarios possibly not seen in the demonstrations, while ensuring that the robot's trajectory \(x(t)\) converges to the target trajectory \(z^{*}(t)\)._
**Related work:** LfD is a widely used framework to learn the underlying motion policy of a task (Argall et al., 2009). Inverse Reinforcement Learning (IRL) and Behavior Cloning (BC) are popular methodologies that have been used to imitate motion from human demonstrations (Abbeel and Ng, 2004; Priess et al., 2014; Osa et al., 2018). In IRL, the underlying objective function of a task is learned and an optimization problem is solved to generate the robot motion. BC learns the state-action distribution of the task dynamics from demonstrations. IRL and BC typically require the demonstrator to explore the task space for learning the policy. Algorithms such as DAGGER (Ross et al., 2011) rely on online data collection for exploration. Exploration of the state space using large amounts of data may not be feasible for HRI
Figure 1: An illustrative example of a spurious attractor when the robot’s path is guided by a DS-based motion plan in the presence of a disturbance is shown in (a) using NODE to encode the motion plan and (b) using the corrected CLF-NODE approach proposed in this work.
applications, especially when the demonstrator is a human. We base our approach on DS-based LfD that has been shown to model stable motion plans from very few demonstrations (Khansari-Zadeh and Billard, 2011; Figueroa and Billard, 2022). One of the earliest works in DS-based LfD is called Dynamic Movement Primitives (Schaal, 2006), which estimates a nonlinear autonomous DS for various robotic applications such as walking, pouring, and grasping (Nakanishi et al., 2004; Ude et al., 2010). Stable Estimator of Dynamical Systems (SEDS) (Khansari-Zadeh and Billard, 2011) is an LfD method that learns globally stable dynamical systems with respect to a goal point using Gaussian Mixture Models (GMMs) and quadratic Lyapunov functions. An important limitation of SEDS is that it can only model trajectories whose distance to the target decreases monotonically in time. A method based on SEDS is presented in (Figueroa and Billard, 2018) via a Linear Parameter Varying (LPV) re-formulation of the model that learns more complex trajectories than SEDS. In (Ravichandar et al., 2017), contraction analysis is used to derive stability conditions on the learned trajectories. Recurrent Neural Networks (RNNs) have been used to model discrete motions (Reinhart and Steil, 2011; Lukosevicius and Jaeger, 2009). In (Chen et al., 2020), a Neural Network (NN) is used to learn the vector field \(f(\cdot)\) in (1) and Lyapunov stability is imposed by a sampling algorithm that identifies unstable regions in a predefined workspace. In this work, we leverage the rich model class of NNs to capture the invariant features of the target trajectory, but use Neural Ordinary Differential Equations (Chen et al., 2019) instead of the standard regression model (Chen et al., 2020).
Figure 2: The modular control flow of our proposed pipeline using the CLF-CBF NODE approach. The state of the robot is \(x\), the low level control inputs are the joint torques \(\tau\), and the desired velocity in state space is \(\dot{x}_{ref}\).
### Proposed Approach
We parameterize the vector field (1) of the motion plan as
\[\dot{x}=\hat{f}(x)+u(x), \tag{2}\]
where, \(\hat{f}(x)\) is used to encode the nominal system behavior, and \(u(x)\) is used to enforce safety and disturbance rejection properties. We learn the nominal system \(\hat{f}(\cdot)\) from demonstrations \(\mathcal{D}\), and compute a correction term \(u(x)\) based on control theoretic tools so that the goals of stability and safety are met in a composable, modular, and data efficient way. Similar to our objectives, the method proposed in (Figueroa and Billard, 2022) generates motion plans that not only converge to a global goal, but also have locally stiff behaviours in regions around the target trajectory. Yet, it still falls short in representing complex motions with high curvature and partial divergence. We use a modular approach similar to (Khansari-Zadeh and Billard, 2014), but we define a CLF with respect to a time-varying target trajectory rather than a single goal point as assumed in (Khansari-Zadeh and Billard, 2014). Several DS-based obstacle avoidance methods (Hoffmann et al., 2009; Khansari-Zadeh and Billard, 2012) have been proposed that modulate the nominal dynamics of the motion by introducing a factor in the motion equation. Control Barrier Functions (CBFs) (Robey et al., 2020; Ames et al., 2019) are widely used to enforce safety of dynamical systems, and we adopt them in this work to generate safe motion plans.
A schematic of the proposed control flow is presented in Fig. 2. The blue blocks represent the offline learning component and the green blocks are the online computation modules. We use a neural network parameterized model \(\hat{f}\) that we learn from demonstrations \(\mathcal{D}\), but any other model class could be used within our proposed framework. Starting from an initial condition, we integrate \(\hat{f}(\cdot)\) for the same time span of the task given in demonstrations to generate a target trajectory \(x^{*}(t)\) that approximates the unknown nominal target trajectory \(z^{*}(t)\). At deployment, the actual states of the robot \(x\) are observed by our motion plan at every time \(t\). Given the current state \(x\), our architecture then chooses the target point \(\pi(x)\) that the robot should follow at time \(t\) using the pre-computed target trajectory \(x^{*}(t)\). We estimate the nominal desired velocity \(\hat{f}(x)\) using our learnt model. However, as illustrated in Fig. 8, the generated motion plan from \(\hat{f}\) is neither guaranteed to be stable nor safe. Hence, we compute a _virtual control input_\(u(x)\) as an additive correction term that generates the reference motion plan \(\dot{x}\) using (2) so that the trajectory generated by \(\dot{x}\) converges to the target trajectory even in the presence of disturbances and unsafe regions such as obstacles. We denote the reference velocity for the low level controller as \(\dot{x}_{ref}\), which in general may be different from the real velocity \(\dot{x}\) of the robot. The reference velocity \(\dot{x}_{ref}\) is given as input to the impedance controller (Kronander and Billard, 2016) that computes the low level control input \(\tau\) (joint torques) for the physical robotic system. We emphasize that the virtual control input \(u(x)\) is different from the low-level control inputs \(\tau\) given in Fig. 2, and is a component of the motion planning DS (2).
**Contributions** Our contributions are given below.
1. We propose a Neural ODE (NODE) based offline learning methodology that captures the invariant features of complex nonlinear and periodic nominal motion plans, such as wiping and stirring, using only a few demonstrations from kinesthetic teaching.
2. We generate a DS-based reactive motion policy for HRI scenarios by solving an efficient Quadratic Program (QP) online at high frequency that integrates CLFs and CBFs as constraints on the nominal NODE-based motion plan to guarantee stability and safety, respectively.
3. We define a novel look-ahead strategy that chooses a target point at every time step for the robot to follow, enabling tracking of a time-varying target trajectory instead of a single target point.
4. We show significant performance improvements over existing methods on the LASA handwriting data set, and validate that our approach enables complex nonlinear and periodic motions for compliant robot manipulation on the Franka Emika robot arm.
## 2 Learning Nominal Motion Plans using Neural ODEs
We propose a neural network parameterized function class to learn the dynamics of the motion offline from demonstrations. Although existing work (Figueroa and Billard, 2022) has provided guarantees on stable learning of dynamical systems using mixture of linear models that converge to a single target, we aim to learn more complex trajectories that also generalize across the task space and scale to higher dimensions. Since neural networks have demonstrated high capability for accurate time-series modeling (Chen et al., 2020), we base our approach on Neural ODEs (Chen et al., 2019), which are the continuous limit (in depth) of ResNets (Haber and Ruthotto, 2017). We parameterize our models of nominal target trajectories as:
\[\frac{d\hat{x}(t)}{dt}=f_{\theta}(\hat{x}(t)), \tag{3}\]
where \(f_{\theta}(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) is a neural network with parameters \(\theta\), and \(\hat{x}(t)\in\mathbb{R}^{d}\) is the state variable predicted by \(f_{\theta}(\cdot)\) at time \(t\). In the forward pass, given the integration time interval \([a,b]\) and an initial point \(\hat{x}(a)\), the model outputs the trajectory \(\hat{x}(t)\) for \(t\in[a,b]\). The trajectory is obtained by solving the differential equation in (3) using a general-purpose differential equation solver based on fifth-order Runge-Kutta (Butcher, 1996). We set \(f_{\theta}(\cdot)\) to be a Multi-Layer Perceptron (MLP), where the inputs and outputs are in \(\mathbb{R}^{d}\) so that the trajectory predictions evolve in the relevant state space.
We consider the supervised learning setup with training data \(\mathcal{D}\). The predictions of the state \(\hat{x}_{i}(t_{k})\) by the model \(f_{\theta}\) are obtained via integration:
\[\hat{x}_{i}(t_{k+1})=\hat{x}_{i}(t_{k})+\int_{t_{k}}^{t_{k+1}}f_{ \theta}(\hat{x}_{i}(s))\ ds,\forall\ k\in\{1,2,\ldots,T-1\} \tag{4}\]
where we set \(\hat{x}_{i}(t_{1})=x_{i}(t_{1}),\ \forall\ i\in\{1,2,\ldots,D\}\). We apply empirical risk minimization with loss
\[\min_{\theta}\frac{1}{DT}\sum_{i=1}^{D}\sum_{k=1}^{T}\Bigl{\|}x_{i }(t_{k})-\hat{x}_{i}(t_{k})\Bigr{\|}_{2}^{2}, \tag{5}\]
to learn the parameters \(\theta\), where the predictions \(\hat{x}_{i}(t_{k})\) are generated as in (4).1 In contrast to previous work (Figueroa and Billard, 2022; Khansari-Zadeh and Billard, 2014) which learns a map \(\hat{f}(x(t))\) using labeled data \(\{x(t),\dot{x}(t)\}\), we do not assume access to velocity measurements as they are often not easily collected and/or noisy (Purwar et al., 2008; Xiao et al., 2020). Further, noisy velocity measurements might cause the map to overfit and lead to aggressive trajectories at inference that are not desirable for the low-level controller. From our results presented in Fig. 8 and Section 4, we observe that the NODE model generates smooth trajectories utilizing only state variables \(x(t)\) to learn \(f_{\theta}\) and not their derivatives \(\dot{x}(t)\). While such a NODE-based vector field will behave reliably on and near the training data, if there are unanticipated disturbances or obstacles during deployment, the robot might deviate to regions of the state-space where the learned vector field is unreliable. Next, we present a methodology that computes a correction term to ensure that the robot robustly and safely tracks the learned target trajectory.
Footnote 1: A binomial checkpoint method is used during training for solving (5) as implemented in (Kidger, 2022).
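To make the learning step concrete, the following is a minimal sketch of the NODE parameterization (3) and the empirical risk minimization (5). It is written with the torchdiffeq package rather than the solver implementation cited in the footnote, and the network width, optimizer settings, and tensor layout are illustrative assumptions.

```python
# Minimal sketch of the NODE model (3) and training objective (5); not the
# authors' released code.
import torch
import torch.nn as nn
from torchdiffeq import odeint

class VectorField(nn.Module):
    """MLP vector field f_theta: R^d -> R^d."""
    def __init__(self, d: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, d),
        )

    def forward(self, t, x):  # autonomous dynamics: t is ignored
        return self.net(x)

def train_node(f_theta, demos, ts, n_steps=2000, lr=1e-3):
    # demos: (D, T, d) tensor of demonstrations sampled at times ts of shape (T,)
    opt = torch.optim.Adam(f_theta.parameters(), lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        x0 = demos[:, 0, :]                          # start from the first sample, eq. (4)
        pred = odeint(f_theta, x0, ts)               # (T, D, d) rollout of the ODE
        loss = ((pred.permute(1, 0, 2) - demos) ** 2).mean()  # squared error, eq. (5)
        loss.backward()
        opt.step()
    return f_theta
```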
## 3 Enforcing Stability and Safety via Virtual Control Inputs
We begin with a review of control theoretic tools that provide sufficient conditions for stability and safety of dynamical systems. We consider a nonlinear control affine dynamical system
\[\dot{x}=g(x)+h(x)u, \tag{6}\]
where, \(x\in\mathcal{X}\subset\mathbb{R}^{d}\) and \(u\in\mathcal{U}\subset\mathbb{R}^{m}\) are the set of allowable states and control inputs, respectively. The DS-based motion plan (2) is a nonlinear control affine DS with \(g(x)=\hat{f}(x)\) and \(h(x)=1\).
### Control Lyapunov Functions
We consider the objective of asymptotically stabilizing the DS (6). Without loss of generality, we focus on stabilizing the system to the origin: \(x^{*}=0\). If we can find a control law \(u\) that decreases a positive definite function \(V(\cdot):\mathbb{R}^{d}\to\mathbb{R}_{\geq 0}\) to zero for the dynamics (6), then, asymptotic stability is guaranteed. Such a function \(V(\cdot)\) is termed as a CLF and the formal definition is given below (Ames et al., 2019).
**Definition 1**: _A continuously differentiable function \(V(\cdot):\mathbb{R}^{d}\to\mathbb{R}_{\geq 0}\) is a Control Lyapunov Function (CLF) for the dynamical system (6) if it is positive definite and there exists a class \(\mathcal{K}_{\infty}\) function \(\alpha(\cdot):\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) that satisfies_
\[\inf_{u\in\mathcal{U}}\nabla_{x}V(x)^{\top}\left(g(x)+h(x)u\right)\leq-\alpha( V(x)),\ \forall\ x\in\mathcal{X}. \tag{7}\]
A function \(\alpha(\cdot)\) belongs to class \(\mathcal{K}_{\infty}\) if \(\alpha(0)=0\), \(\alpha(\cdot)\) is strictly increasing, and \(\alpha(r)\rightarrow\infty\) as \(r\rightarrow\infty\). The set of all controllers that satisfy the condition in (7) for each \(x\in\mathcal{X}\) is
\[K_{clf}(x):=\{u\in\mathcal{U}:\nabla_{x}V(x)^{\top}\left(g(x)+h(x)u\right)\leq -\alpha(V(x))\}. \tag{8}\]
The following result on asymptotic stability follows from (Ames et al., 2019).
**Theorem 1**: _If there exists a Control Lyapunov Function (CLF) as given in Definition 1 for a nonlinear control affine system (6), then, any Lipschitz continuous feedback control law \(u(x)\in K_{clf}(x)\) asymptotically stabilizes the dynamical system (6) to the origin \(x^{*}=0\)._
### Control Barrier Functions
We define safety with respect to a safe set \(\mathcal{C}\subseteq\mathcal{X}\) for the system (6). The safe set \(\mathcal{C}\) is defined as the super-level set of a function \(B(\cdot):\mathbb{R}^{d}\to\mathbb{R}\), that results in three important sets:
\[\mathcal{C}=\{x\in\mathcal{X}:B(x)\geq 0\},\ \partial\mathcal{C}=\{x\in \mathcal{X}:B(x)=0\},\ \mathrm{Int}(\mathcal{C})=\{x\in\mathcal{X}:B(x)>0\}, \tag{9}\]
where \(\partial\mathcal{C}\) is the boundary for safety and \(\mathrm{Int}(\mathcal{C})\) is the interior of the safe set \(\mathcal{C}\). Our safety objective is to find a control input \(u\) such that the states \(x\) that evolve according to the dynamics (6) always stay inside the safe set \(\mathcal{C}\). Such an objective is formalized using _forward invariance_ of the safe set \(\mathcal{C}\). Let \(u(x)\) be a feedback control law such that closed loop dynamical system
\[\dot{x}=g(x)+h(x)u(x) \tag{10}\]
is locally Lipschitz. The locally Lipschitz condition guarantees the existence of a unique solution \(x(t)\) to (10) for a given initial condition \(x_{0}=x(0)\) and all \(t\in[0,t_{max})\). If \(t_{max}=\infty\), then the system (10) is forward complete (Khalil, 2015). Forward invariance and CBFs are defined as follows (Ames et al., 2019).
**Definition 2**: _The safe set \(\mathcal{C}\) is forward invariant if for every initial point \(x(0)=x_{0}\in\mathcal{C}\), the future states \(x(t)\in\mathcal{C}\) for all \(t\geq 0\)._
**Definition 3**: _Let \(\mathcal{C}\) be the super-level set of a continuously differentiable function \(B(\cdot):\mathbb{R}^{d}\to\mathbb{R}\) as given in (9). Then, \(B\) is a Control Barrier Function (CBF) for the dynamical system (6) and safe set \(\mathcal{C}\) if there exists an extended class \(\mathcal{K}_{\infty}\) function \(\gamma(\cdot)\) that satisfies_
\[\sup_{u\in\mathcal{U}}\nabla_{x}B(x)^{\top}\left(g(x)+h(x)u\right)\geq-\gamma (B(x)),\ \forall\ x\in\mathcal{X}. \tag{11}\]
We aim to render the set \(\mathcal{C}\) forward invariant for the system (10) through an appropriate choice of control input \(u(x)\). The set of all control inputs that satisfy the condition in (11) for each \(x\in\mathcal{X}\) is
\[K_{cbf}(x):=\{u\in\mathcal{U}:\nabla_{x}B(x)^{\top}\left(g(x)+h(x)u\right)\geq- \gamma(B(x))\}. \tag{12}\]
The formal result on safety follows from (Ames et al., 2019).
**Theorem 2**: _Let \(B\) be a Control Barrier Function (CBF) as given in Definition 3 for a safe set \(\mathcal{C}\) and a nonlinear control affine system (6). Let \(u(x)\in K_{cbf}(x)\) be a locally Lipschitz feedback control law. Then, the following holds: \(x(0)\in\mathcal{C}\implies x(t)\in\mathcal{C}\) for all \(t\in[0,t_{max})\). If the set \(\mathcal{C}\) is compact, then, \(\mathcal{C}\) is forward invariant, i.e., \(t_{max}=\infty\), and \(\mathcal{C}\) is asymptotically stable, i.e., \(\lim_{t\to\infty}x(t)\in\mathcal{C}\) for all \(x(0)\in\mathcal{X}\)._
Since inequalities in (8) and (12) are affine in \(u\), they can be included in efficient optimization-based controllers for control affine systems. We present such an optimization-based planner in Section 3.3 that has strong stability and safety guarantees as claimed in Theorems 1 and 2, respectively.
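As a small worked example of these affine conditions, consider a single-integrator motion plan (\(g(x)\) equal to the nominal vector field and \(h(x)=1\), as in (2)) and a spherical obstacle of radius \(r\) centered at \(c\), encoded by \(B(x)=\|x-c\|_{2}^{2}-r^{2}\) with the linear choice \(\gamma(s)=ks\). The sketch below assembles the resulting affine constraint on \(u\); the obstacle parameters and gain are illustrative assumptions.

```python
# Sketch of a CBF for a spherical obstacle and the affine constraint it
# induces on the virtual control input u (a stand-in example, not from the paper).
import numpy as np

def barrier(x, c, r):
    return np.dot(x - c, x - c) - r**2          # B(x) = ||x - c||^2 - r^2

def barrier_grad(x, c, r):
    return 2.0 * (x - c)                         # gradient of B

def cbf_constraint(x, f_nominal, c, r, k=1.0):
    """Return (a, b) such that the condition (11) with h(x)=1 reads a @ u >= b."""
    gradB = barrier_grad(x, c, r)
    # grad B(x)^T (f(x) + u) >= -k B(x)  <=>  gradB @ u >= -k B(x) - gradB @ f(x)
    return gradB, -k * barrier(x, c, r) - gradB @ f_nominal
```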
### Computing the Virtual Control Input
We now show how to integrate CLFs and CBFs into the DS-based motion plan (2). In particular, we use the learned NODE \(f_{\theta}\) to generate nominal motion plans, and compute \(u(x)\) using CLFs and CBFs to enforce stability and safety, resulting in a DS-based motion plan of the form:
\[\dot{x}=f_{\theta}(x)+u(x), \tag{13}\]
where \(x\) is the state of the robot, and \(u(x)\) is the virtual control input.
**Stability using Control Lyapunov Functions.** We utilize CLFs described in Section 3.1 to generate a motion plan that always converges to the target trajectory \(x^{*}(t)\) even in the presence of disturbances. Previous work (Khansari-Zadeh and Billard, 2014) has utilized CLFs only for convergence to a single target point using regression methods. In contrast, we present a framework that integrates Neural ODEs for rich behaviors, CBFs for safety, and CLFs to ensure convergence to a target trajectory \(x^{*}(t)\). To that end, we first define the error \(e(t)\) between the robot state and the target trajectory: \(e(t)=x(t)-x^{*}(t)\). For ease of notation, we drop the explicit dependence on time \(t\), and write \(e\), \(x\), and \(x^{*}\) for the current error, state, and target point at time \(t\), respectively. From (13), the error dynamics are given by
\[\dot{e}=\dot{x}-\dot{x}^{*}\Rightarrow\dot{e}=f_{\theta}(x)-\dot{x}^{*}+u(x) \tag{14}\]
The error dynamics (14) define a nonlinear control affine system (6), where the state of the system is \(e\), and \(u(x)\) is the control input. Hence, by Theorem 1, if there exists a CLF \(V(\cdot)\) for the error dynamics (14), then, any feedback virtual control law \(u(\cdot)\) that satisfies
\[\nabla_{e(t)}V(e)^{\top}\left(f_{\theta}(x)-\dot{x}^{*}+u(x)\right)\leq- \alpha(V(e)),\ \forall\ e\in\mathbb{R}^{d} \tag{15}\]
will drive the error asymptotically to zero. During online motion planning, given the current state of the robot \(x\) and information about the target trajectory \(x^{*}\) that encodes the desired task, we compute the smallest \(u(x)\) that satisfies (15) by setting
\[u(x)=\underset{v}{\text{argmin}}\big{\|}v\big{\|}_{2}^{2}\quad\text{s.t.}\quad \nabla_{e}V(e)^{\top}\left(f_{\theta}(x)-\dot{x}^{*}+v\right)\leq-\alpha(V(e)), \tag{16}\]
where \(\alpha(\cdot)\) is a class \(\mathcal{K}_{\infty}\) function that defines how aggressively the robot tracks the target trajectory. We describe how we choose \(x^{*}\) and \(\dot{x}^{*}\) in detail in Section 3.3. The optimization problem (16) is a Quadratic
Program (QP) with a single affine inequality and has a closed form solution (Khansari-Zadeh and Billard, 2014). The Lyapunov function we use is \(V(e)=\|e\|_{2}^{2}\), but note that any positive definite function is a valid CLF due to the presence of the virtual actuation term \(v\), i.e., optimization problem (16) is always feasible. We refer the reader to Fig. 8 to differentiate between the paths generated by only \(f_{\theta}(\cdot)\), and by (13) using the correction term \(u(\cdot)\). We refer to this approach as the CLF-NODE.
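The closed-form solution of (16) can be written down directly: if the nominal plan already satisfies the CLF inequality, no correction is needed; otherwise the minimum-norm correction is the projection of the origin onto the violated half-space. The following sketch assumes the quadratic Lyapunov function \(V(e)=\|e\|_{2}^{2}\) and the linear choice \(\alpha(s)=ks\); the gain \(k\) is an illustrative assumption.

```python
# Sketch of the closed-form solution of the single-constraint CLF-QP (16).
import numpy as np

def clf_correction(x, x_star, xdot_star, f_theta, k=1.0):
    e = x - x_star
    gradV = 2.0 * e                        # gradient of V(e) = ||e||^2
    drift = f_theta(x) - xdot_star         # drift of the error dynamics, eq. (14)
    # Constraint in (16): gradV @ (drift + v) <= -k V(e)  <=>  gradV @ v <= b
    b = -k * np.dot(e, e) - gradV @ drift
    if b >= 0.0:                           # nominal plan already decreases V fast enough
        return np.zeros_like(x)
    # Minimum-norm point of the half-space {v : gradV @ v <= b}.
    return (b / (gradV @ gradV + 1e-12)) * gradV
```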
**Safety using Control Barrier Functions.** We build on the framework presented in Section 3.2 by integrating CBFs into the virtual control input computation to guarantee safety for the generated motion plan. We define safety with respect to a safe set \(\mathcal{C}\subseteq\mathcal{X}\) as described in Section 3.2 for the system (13). From Theorem 2, if there exists a CBF \(B(\cdot)\) for the dynamics (13), then any feedback control law \(u(\cdot)\) that satisfies
\[\nabla_{x}B(x)^{\top}\left(f_{\theta}(x)+u(x)\right)\geq-\gamma(B(x)),\ \forall\ x\in \mathcal{X} \tag{17}\]
will render the system (13) safe, where, \(\gamma(\cdot)\) is an extended class \(\mathcal{K}_{\infty}\) function. At inference, the DS-based motion plan is still given by (13), but the virtual control input \(u(x)\) is computed such that it satisfies the CBF condition in (17) for the dynamics \(\dot{x}\) and a given CBF \(B(\cdot)\) for the safe set \(\mathcal{C}\).
In cases where an obstacle obstructs the robot moving along the nominal trajectory, the robot should automatically avoid the obstacle without human intervention, but converge back to complete the desired task when possible. However, this may lead to a conflict between preserving safety and stability: during the obstacle avoidance phase, the CLF constraint in (16) may be violated as the robot takes safety preserving actions that increase tracking error. We prioritize safety and obstacle avoidance and adapt the approach proposed in (Ames et al., 2019) for balancing these competing objectives to our setting, and solve an optimization problem with the CBF condition (17) as a hard constraint and the CLF condition (15) as a soft constraint. Given the current state of the robot \(x\) and the target point \(x^{*}\), the optimization problem that guarantees a safe motion plan is
\[\begin{split}(u(x),\_)=\operatorname*{argmin}_{\{v,\epsilon\}}\quad&\left\|v\right\|_{2}^{2}+\lambda\epsilon^{2}\\ \text{s.t.}\quad&\nabla_{x}B(x)^{\top}\left(f_{\theta}(x)+v\right)\geq-\gamma(B(x))\\ &\nabla_{e}V(e)^{\top}\left(f_{\theta}(x)-\dot{x}^{*}+v\right)\leq-\alpha(V(e))+\epsilon\end{split} \tag{18}\]
where \(\epsilon\) is a relaxation variable to ensure feasibility of (18) and is penalized by \(\lambda>0\). The problem in (18) is a parametric program, where the parameters of interest are \(\{x,x^{*},\dot{x}^{*}\}\). We abuse notation and denote the optimal virtual control input \(u(x)\) for (18) as \(u(x,x^{*},\dot{x}^{*})\), which will be used in the next section. Problem (18) is a QP that can be solved efficiently in real-time (Mattingley and Boyd, 2012). Multiple CBFs and CLFs can be composed in a way analogous to problem (18) to represent multiple obstacles. We refer to this approach as the CLF-CBF-NODE.
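A minimal sketch of solving (18) online is given below. It uses cvxpy as an off-the-shelf QP solver, whereas a real-time implementation would use a code-generated solver as cited above; the linear class-\(\mathcal{K}_{\infty}\) choices and the gains and penalty weight are illustrative assumptions.

```python
# Sketch of the CLF-CBF QP (18) with a relaxed CLF constraint.
import cvxpy as cp
import numpy as np

def clf_cbf_correction(x, x_star, xdot_star, f_theta, barrier, barrier_grad,
                       k_clf=1.0, k_cbf=1.0, lam=100.0):
    fx = f_theta(x)
    e = x - x_star
    gradV, V = 2.0 * e, float(e @ e)       # V(e) = ||e||^2
    gradB, B = barrier_grad(x), barrier(x)

    v = cp.Variable(x.shape[0])
    eps = cp.Variable()
    constraints = [
        gradB @ (fx + v) >= -k_cbf * B,                    # CBF condition (17), hard
        gradV @ (fx - xdot_star + v) <= -k_clf * V + eps,  # CLF condition (15), soft
    ]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(v) + lam * cp.square(eps)),
                      constraints)
    prob.solve()
    return v.value
```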
```
Input: \(\mathcal{T}:=\{x^{*}(t_{k})\}_{k=1}^{T}\), \(f_{\theta}(\cdot)\), \(x\), \(N\)
Output: \(\pi(x)\)
\(m\leftarrow\operatorname*{argmin}_{k}\left\|x-x^{*}(t_{k})\right\|_{2}\);
\(\mathcal{T}_{N}:=\{x^{*}(t_{m}),x^{*}(t_{m+1}),\dots,x^{*}(t_{m+N-1})\}\);
Solve (18) with parameters \(\{x,y,f_{\theta}(y)\}\) for each \(y\in\mathcal{T}_{N}\);
\(\pi(x)\leftarrow\operatorname*{argmin}_{y\in\mathcal{T}_{N}}\left\|u(x,y,f_{\theta}(y))\right\|_{2}^{2}\);
```
**Algorithm 1** Choose target point
**Choosing a Target Point.** As shown in Fig. 2, we first integrate the learnt model \(f_{\theta}(\cdot)\) offline to generate the _target array_ \(\mathcal{T}:=\{x^{*}(t_{k})\}_{k=1}^{T}\) from a given initial condition \(x^{*}(t_{1})\). Given an observation of the current state of the robot \(x\) at time \(t\), and the target array \(\mathcal{T}\), we select the next target point \(x^{*}\) for the robot to follow using the map \(\pi(x)\) defined in Algorithm 1. We remove the direct dependence of the target point \(x^{*}\) on time \(t\), which leads to a more reactive motion plan that adapts to both the time delays that are often present during online deployment, and to unforeseen perturbations of the robot away from the nominal plan, e.g., due to human interaction or obstacle
avoidance. The look-ahead horizon length \(N\) is used to construct the look-ahead array \(\mathcal{T}_{N}\), consisting of \(N\) future points starting at the nearest target point \(x^{*}(t_{m})\). We choose the target point \(\pi(x)\) from \(\mathcal{T}_{N}\) that results in the smallest norm of virtual control input when solving (18) among all points in \(\mathcal{T}_{N}\). We use a forward looking horizon \(N\) to ensure the robot moves forward along the target trajectory, and to the best of our knowledge, this is the first time that the norm of the correction input \(u(\cdot)\) is used as a metric for choosing an appropriate nearest neighbor point in motion planning. We use \(\dot{x}^{*}:=f_{\theta}(\pi(x))\), since \(\pi(x)\in\mathcal{T}\) and we obtained the target array \(\mathcal{T}\) by integrating \(f_{\theta}(\cdot)\).
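A minimal sketch of this selection rule is given below, assuming a `correction` routine that returns the virtual control input from (18) for given parameters; the function and variable names are illustrative placeholders rather than our implementation.

```python
# Sketch of the target-point selection in Algorithm 1.
import numpy as np

def choose_target(x, target_array, f_theta, correction, N):
    # Index of the point on the target trajectory closest to the current state.
    m = int(np.argmin(np.linalg.norm(target_array - x, axis=1)))
    lookahead = target_array[m:m + N]
    # Pick the look-ahead point requiring the smallest correction norm.
    costs = [np.linalg.norm(correction(x, y, f_theta(y))) for y in lookahead]
    return lookahead[int(np.argmin(costs))]
```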
## 4 Experimental Validation and Results
**LASA handwriting dataset** We validate our approach on the LASA 2D handwriting data set (Khansari-Zadeh and Billard, 2011) that contains 30 nonlinear motion sets. Each set has 7 demonstrations: we use 4 as the training data set, and the remaining three as the test data set. The performance metric we use is Dynamic Time Warping (DTW) distance (Salvador and Chan, 2007) to compare our NODE model with two existing DS-based learning approaches: SEDS (Khansari-Zadeh and Billard, 2011) and LPV-DS (Figueroa and Billard, 2018). DTW distance measures the dissimilarity between the shapes of the demonstrations and the corresponding reproductions starting from the same initial condition. The DTW distance comparison is given in Figs. 3(a) and 3(b). We note that although SEDS and LPV-DS use velocity data for regression, which our approach does not have access to, the DTW distance for our NODE approach is approximately half that of existing methods (Khansari-Zadeh and Billard, 2011; Figueroa and Billard, 2018). We illustrate disturbance rejection in Fig. 4(a) using CLF-NODE and obstacle avoidance in Fig. 4(b) using CLF-CBF-NODE. Further validation on other nonlinear shapes is given in the appendix.
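For reference, the DTW metric used above can be computed with the standard dynamic-programming recursion sketched below; the cited reference describes the faster FastDTW approximation, so this exact quadratic-time version is a simpler stand-in for illustration.

```python
# Exact DTW distance between a demonstration and its reproduction.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    # a: (Ta, d) demonstration, b: (Tb, d) reproduction
    Ta, Tb = len(a), len(b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[Ta, Tb])
```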
**Periodic trajectories**: We validate our approach on handwritten 2D periodic motions of the letters **I, R, O** and **S** given in (Urain et al., 2020) and 3D periodic trajectories that encode three wiping tasks as given in Figs. 5(e), 6(e), 7(e), and 8(e). We compare our method with Imitation Flow (IFlow) (Urain et al., 2020) and a Gaussian Process (GP) (Jaquier et al., 2019) based approach. IFlow is based on normalizing flows and learns stable motions using prior knowledge on whether the underlying dynamics of the demonstrations have a single attractor or a limit cycle, whereas our approach requires no such prior knowledge. We present the DTW distance in Fig. 5(a), training time in Fig. 5(b), and execution time in Fig. 5(c). The execution time is the computation time for a single forward pass of the model to generate the entire trajectory from a given initial point and the time
Figure 3: Comparison of DTW distance on the (a) Train data set and (b) Test data set from LASA.
span of the demonstration. We also compare the trajectory reproductions for the \(\mathbf{R}\) shape in Fig. 5(d), and for the spiral wiping task by the Franka robot arm in Fig. 5(e). The IFlow approach is not able to learn the complex motion for spiral wiping, but our NODE approach learns it with high accuracy and less computation time. The execution time comparison in Fig. 5(c) is plotted in log scale, and we note that our approach (NODE) has a much lower execution time, which is important for real-time robot experiments. Although the GP based method (Jaquier et al., 2019) is able to learn complex trajectories with accuracy and training time comparable to NODE, its execution time is much smaller, and it relies on time inputs for desired roll-outs with no capability to generate safe and stable motion plans. All computations are performed on Google Colab.
**Robotic experiments** We validate our approach on the Franka Emika robot arm performing two complex nonlinear motions: wiping a human mannequin with a towel as shown in Fig. 6 and wiping a whiteboard with an eraser as given in Fig. 7. We use the passive DS impedance controller given in (Kronander and Billard, 2016). We used \(D=2\) demonstrations for the mannequin tasks, and \(D=3\) demonstrations for the board wiping tasks. Each demonstration had between \(T=300\) and \(T=600\) data samples. The average training time (offline) is \(3-6\) minutes for each task on Google Colab. The obstacle shown in Fig. 7 at \(t=2\) has markers on it that are tracked in real time by an OptiTrack motion capture system. We observe that the robot tracks the desired nominal trajectories while remaining compliant to human interaction, robust to perturbations, and safe with respect to unforeseen and dynamic obstacles. We include the stirring task and other implementation details in the appendix.
Further illustrations of disturbance rejection, obstacle avoidance and trajectory reproductions are presented in [https://sites.google.com/view/lfd-node-clf-cbf/home](https://sites.google.com/view/lfd-node-clf-cbf/home).
## 5 Limitations & Future Work
The main contribution of this work is a NODE DS-based motion plan with an additive CBF and CLF-based correction term computed with respect to a time-varying target trajectory that ensures safety, stability, and, when possible, task completion. A limitation inherent to all CBF and CLF-based approaches is the risk of converging to a local minimum due to conflicting safety and task-completion constraints; this is a broader research question that falls beyond the scope of this paper. Our experiments are limited to using the Cartesian end-effector position \(x\in\mathbb{R}^{3}\): future work will address this limitation by extending our
Figure 4: Illustration of (a) disturbance rejection and (b) obstacle avoidance using our approach.
approach to higher dimensional coordinate frames such as (a) orientation in \(\mathcal{SO}(3)\) and position in \(\mathbb{R}^{3}\) of the end-effector (Figueroa et al., 2020; Ravichandar and Dani, 2019; Zhang et al., 2022), (b) the full pose of the end-effector in \(\mathcal{SE}(3)\) space (Urain et al., 2022), and (c) joint space (Shavit et al., 2018). Another limitation is our simple encoding of obstacles via sublevel sets of CBFs using closed-form expressions: to address this limitation, future work will explore novel (data-driven) representations of obstacles in joint space, e.g., via implicit distance functions (Koptev et al., 2023). Finally, since learning is performed offline, we can use an NN model that is more expensive to train than other DS-based motion planning methods (Khansari-Zadeh and Billard, 2011; Figueroa and Billard, 2022), which increases training time. Exploring different DS-based models that reduce training time without sacrificing expressivity is another important direction for future work.
## Acknowledgements
We thank Rahul Ramesh, Leon Kim, Anusha Srikanthan, Alp Aydinoglu, and Yifan Xue for their valuable inputs and feedback.
Figure 5: Comparison of performance metrics and trajectory reproductions between IFlow, GP and our approach (NODE) on periodic trajectories.
Figure 6: Franka Emika robot arm performing a periodic wiping task for a human mannequin. The blue arrows denote the perturbation.
Figure 7: Franka Emika robot arm performing a periodic wiping task on a whiteboard. The purple spheres show the moving obstacle at different times. |
2310.20593 | FLODCAST: Flow and Depth Forecasting via Multimodal Recurrent
Architectures | Forecasting motion and spatial positions of objects is of fundamental
importance, especially in safety-critical settings such as autonomous driving.
In this work, we address the issue by forecasting two different modalities that
carry complementary information, namely optical flow and depth. To this end we
propose FLODCAST a flow and depth forecasting model that leverages a multitask
recurrent architecture, trained to jointly forecast both modalities at once. We
stress the importance of training using flows and depth maps together,
demonstrating that both tasks improve when the model is informed of the other
modality. We train the proposed model to also perform predictions for several
timesteps in the future. This provides better supervision and leads to more
precise predictions, retaining the capability of the model to yield outputs
autoregressively for any future time horizon. We test our model on the
challenging Cityscapes dataset, obtaining state of the art results for both
flow and depth forecasting. Thanks to the high quality of the generated flows,
we also report benefits on the downstream task of segmentation forecasting,
injecting our predictions in a flow-based mask-warping framework. | Andrea Ciamarra, Federico Becattini, Lorenzo Seidenari, Alberto Del Bimbo | 2023-10-31T16:30:16Z | http://arxiv.org/abs/2310.20593v1 | # FLODCAST: Flow and Depth Forecasting via Multimodal Recurrent Architectures
###### Abstract
Forecasting motion and spatial positions of objects is of fundamental importance, especially in safety-critical settings such as autonomous driving. In this work, we address the issue by forecasting two different modalities that carry complementary information, namely optical flow and depth. To this end we propose FLODCAST a flow and depth forecasting model that leverages a multitask recurrent architecture, trained to jointly forecast both modalities at once. We stress the importance of training using flows and depth maps together, demonstrating that both tasks improve when the model is informed of the other modality. We train the proposed model to also perform predictions for several timesteps in the future. This provides better supervision and leads to more precise predictions, retaining the capability of the model to yield outputs autoregressively for any future time horizon. We test our model on the challenging Cityscapes dataset, obtaining state of the art results for both flow and depth forecasting. Thanks to the high quality of the generated flows, we also report benefits on the downstream task of segmentation forecasting, injecting our predictions in a flow-based mask-warping framework.
keywords: depth forecasting, optical flow forecasting, segmentation
Footnote †: journal: Pattern Recognition
## 1 Introduction
Improving intelligent capabilities, in the context of robot navigation and autonomous agents, is fundamental to allow machines to better understand the observed scene and thus reason about it. These systems exploit sensors such as cameras or LiDARs to extract a visual signal from the environment in order to take action and interact with the world. However, leveraging only the current frame to plan real-time decisions is challenging since dynamic scenes rapidly
change over time. Agents must understand how other objects are moving and must foresee possible dangerous outcomes of their decisions. A prominent direction with potential application in decision-making is to make predictions about future scenarios, which can also be used to detect upcoming events or behaviors in advance. This task is highly challenging in situations where multiple objects, like vehicles or people, can move freely in the environment.
The problem can be addressed from many angles, including understanding where agents will be in the near future, what actions they will take, how they will move, and how far they will be from a given observation point. In practice, this translates into exploiting different features describing the scene or specific objects. For instance, the road layout supports the agent in defining where to drive, while semantic segmentation contains pixel-level annotations of specific categories, e.g. road, buildings, cars or pedestrians, and gives a finer-grained knowledge of the scene. However, predictions may also concern future instance segmentations, allowing a machine to reason about single objects rather than category classes. One way to summarize scene changes is to capture motion properties observed from a camera viewpoint. Optical flow is a dense field of displacement vectors and represents the pixel motion between adjacent frames [1]. Therefore, object motion can be incorporated in terms of 2D displacements using optical flow, even for future unobserved motion. Nonetheless, in order to understand scene dynamics it is also valuable to predict depth maps to better identify objects in 3D space. Such information can be estimated in advance for the near future and incorporated into a decision-making system that assists an autonomous agent in planning the subsequent action early. Future prediction also involves information related to the surrounding environment. Therefore, this task can be accomplished by forecasting semantic segmentations [2; 3; 4], which are connected to specific category classes, but also by predicting future instance segmentations of moving objects [5; 6; 7; 8; 9], even considering optical flow predictions [10; 9].
In summary, one can cast the forecasting problem from a high-level perspective, for instance forecasting semantic masks [5; 11] or agent trajectories [12; 13], as done in prior work. We instead choose to address the problem from a lower level, forecasting finer-grained information such as pixel-level optical flows and depth maps, which can then be leveraged to reason about high-level aspects such as forecasting semantic instances. In this work, we focus on anticipating imminent future urban scenarios, by casting the problem in a multi-modal and multitasking approach, able to forecast both optical flows, which encode pixel displacements in the scene, and depth maps, which represent the estimated distance from the camera to the corresponding point in the image.
Instead of anticipating the future for the next time step [14; 15] or in general for a single specific one [16], we propose to directly forecast multiple time steps ahead at a time, yet maintaining the model autoregressive to avoid the need of training timestep-specific models. Jointly forecasting depth and flow helps to achieve better performance in future predictions, thanks to information sharing across modalities. In addition, training with long-term supervision leads to smaller errors at inference time. As a byproduct, we also leverage the recently proposed MaskNet [9] to improve the downstream task of future instance segmentation in urban scenes with our predictions.
To summarize, our main contributions are the following:
1. We design a novel optical FLOW and Depth foreCASTing network (FLODCAST) that jointly estimates optical flow and depth for future frames autoregressively.
2. Our approach, which involves predicting multiple steps simultaneously, mitigates the accumulation of errors that typically impede the performance of autoregressive models. In this way, we preserve the autoregressive nature of the model, eliminating the need for training separate models for different time horizons.
3. Finally, FLODCAST achieves state-of-the-art performance in both optical flow and depth forecasting tasks, thereby emphasizing the necessity of jointly learning shared features from these two modalities.
## 2 Related work
_Depth Forecasting._ Several works have focused on learning to infer depth from monocular RGB cameras [17; 18; 19]. Nonetheless, relying on depth estimators applied to predicted future RGB frames is hard, due to the high uncertainty in predicting raw pixels [20; 21; 22; 23; 24]. Therefore, other works deal with depth anticipation for future frames, mostly known in the literature as depth forecasting or video depth forecasting. Qi et al. [14] introduce an entire framework for predicting 3D motion (both optical flow and depth map) and synthesizing the RGB with its semantic map for unobserved future frames. To this end, they leverage images, depth maps and semantic segmentations of past frames, but their predictions are limited to the subsequent future frame, i.e. the frame \(t+1\). Also limited to a single future timestep, Hu et al. [15] design a probabilistic model for future video prediction, where scene features are learned from input images and are then used to build spatio-temporal representations, incorporating both local and global contexts. These features are finally fed into a recurrent model with separate decoders, each one forecasting semantic segmentation, depth and dense flow at the next future frame. Nag et al. [16] propose a self-supervised method for depth estimation directly at the k-th frame after the last observed one, i.e. at \(t+k\). By means of a feature forecasting module, they learn to map pyramid features extracted from past sequences of both RGBs and optical flows to future features, exploiting a series of ConvGRUs and ConvLSTMs to capture spatio-temporal relationships in the past. With the same goal, Boulahbal et al. [25] design an end-to-end self-supervised approach using a hybrid CNN-Transformer model that predicts the depth map and ego-motion at \(t+k\) by processing an input sequence of past frames. Differently from prior work, we predict both dense optical flows and depth maps, also leveraging both modalities as inputs. We directly predict several timesteps ahead
simultaneously while retaining autoregressive capabilities, which allows the model to accurately predict far into the future.
_Flow Forecasting._ Optical flow estimation has been largely studied in the past [26; 1]. Consolidated deep learning approaches have addressed this problem with promising results [27; 28; 29], also exploiting transformer-based architectures [30; 31; 32]. However, these methods are designed to estimate the optical flow by accessing adjacent frames, which are available to the network. Different approaches have been introduced that incorporate optical flow features to infer imminent future scenarios from different points of view, such as predicting depth maps [16], semantic segmentations [3; 4] and instance segmentations [9]. Multitasking methods also exist [10; 33; 14].
Many works leverage motion features for future predictions to perform several specific tasks, ranging from semantic segmentation [10; 2; 3; 4] and instance-level segmentation [9] to depth estimation [14; 15; 16]. However, just a few approaches have specifically addressed the task of optical flow forecasting, i.e. the problem of anticipating the optical flow for future scenes. Jin et al. [10] were the first to propose a framework that jointly predicts optical flow and semantic segmentation for the next frame using the past ones. To make predictions for multiple time steps, they iterate a two-step finetuned model to alleviate the propagation error. Ciamarra et al. [9] instead introduced OFNet, a recurrent model able to predict the optical flow for the next time step exploiting spatio-temporal features from a ConvLSTM. Such features are learned to generate a sequence of optical flows shifted one time step ahead with respect to the input sequence. Without finetuning, the recurrent nature of the model allows OFNet to make predictions for any number of time steps ahead. Considering the high uncertainty of the future, all the proposed methods [3; 10; 33; 14; 9] are typically trained to make predictions a single time step ahead, and are then used for farther ones by autoregressively feeding as input the predictions obtained at the previous iterations. We instead address a more general forecasting task, with the purpose of providing future optical flows directly for multiple time steps ahead, by exploiting both past flows and the corresponding depth maps. We also make use of depth maps as input because our framework is designed as a novel multitask and multimodal approach to also generate future depth maps.
To the best of our knowledge, we are the first to jointly forecast optical flows and depth maps for multiple consecutive frames into the future. Besides, we do not require other information (even during training), like camera pose estimation, which is usually needed to deal with monocular depth estimation.
## 3 Method
In this work we introduce FLODCAST, a novel approach for jointly predicting optical flow and depth maps for future unobserved frames from an ego-vehicle perspective, applied to the autonomous driving context.
### Problem Definition
Given a sequence \(\mathbf{S}=\{I_{t}\}\) of frames, let \(\mathbf{D}=\{D_{1},D_{2},\ldots,D_{T}\}\) be the depth map sequence extracted from the last \(T\) frames of \(\mathbf{S}\). Likewise, we define \(\mathbf{OF}=\{OF_{1},OF_{2},\ldots,OF_{T}\}\) as the corresponding optical flows computed between every two consecutive frames in \(\mathbf{S}\), such that \(OF_{t}=\textit{Flow}(I_{t-1},I_{t})\), with \(t\in[1,T]\), encodes the motion of the source frame \(I_{t-1}\) onto the target frame \(I_{t}\). Our purpose is to anticipate flow and depth maps for future frames after \(K\) time instants, i.e. forecasting \(D_{T+K}\) and \(OF_{T+K}\) for the frame \(I_{T+K}\).
The importance of jointly anticipating flow and depth stems from the nature of the two modalities. Optical flow is a two-dimensional projection of the three-dimensional motion of the world onto the image plane [34]. An object in the foreground moving fast produces a large displacement, whereas when it is far from the observer, moving at the same speed, it generates a very small displacement. Therefore, knowledge about the depth of such an object can help to model its future dynamics. Vice versa, observing the motion of an object can provide information about its distance from the camera. Overall, by jointly modeling optical flow and depth we can represent the 3D scene displacement at time \(t\) in terms of the components \((u,v,d,t)\), where \((u,v)\) are the horizontal and vertical components of \(OF_{t}\) and \(d\) is the depth map.
### Flow and Depth Forecasting via Multimodal Recurrent Architectures
We design **FLODCAST**, a novel optical **FLO**w and **D**epth fore**CAST**ing network that anticipates both modalities at each future time step by observing the past ones. An overview of FLODCAST is shown in Fig. 1.
FLODCAST takes a sequence \(X=\{X_{1},X_{2},\ldots,X_{T}\}\) of \(T\) past observations composed of dense optical flows and depth maps. In detail, each \(X_{t}\) encodes the
Figure 1: FLODCAST forecasts both future flows and depth maps from the past ones autoregressively. For each time step, we aggregate flow and depth at the last channel (by the concatenation operator, \(\oplus\)), then 64-channel features are extracted through a UNet [35] backbone. Finally, predictions are obtained from two dedicated fully convolutional heads.
input features for the image \(I_{t}\) in the past, that are obtained by concatenating the optical flow \(OF_{t}\) with the depth map \(D_{t}\). In other words, \(X_{t}=(OF_{t}\oplus D_{t})\). The model generates as output a sequence \(\widehat{X}=\{\widehat{X}_{T+1},\widehat{X}_{T+2},\ldots,\widehat{X}_{T+K}\}\), that is a sequence of \(K\) future optical flows and \(K\) depth maps. We set \(T=3\) and \(K=3\) in all our experiments.
Since optical flows and depth maps encode very different information about the scene, we add two separate heads after extracting features from the input in order to handle multimodal predictions. Therefore, we feed a sequence of concatenated optical flows and depths \(\{X_{1},X_{2},\ldots,X_{T}\}\) to a recurrent ConvLSTM network, in which a UNet backbone is used to extract 64-channel features for each input \(X_{t}\), \(t=1,\ldots,T\), so as to output a tensor of size \((H\times W\times 64)\), where \((H\times W)\) is the input resolution. Our feature extractor is the same UNet architecture as in [9], i.e. a fully convolutional encoder-decoder network with skip connections, consisting of 5 layers with \(\{64,128,256,512,1024\}\) filters, respectively. These 64-channel features capture meaningful spatio-temporal contexts of the input representation. The features are then passed to the two convolutional heads, which are trained end-to-end to simultaneously generate the sequence of future optical flows and depth maps (respectively depicted by the purple and the red blocks on the right side of Fig. 1). Each head is a fully convolutional network made of sequences of Conv2D+ReLUs with \(\{32,16,8\}\) filters. Finally, we append at the end of the optical flow head a convolution operation with \(2\times K\) channels and a \(tanh\) activation function, so as to produce the \((u,v)\) flow field values normalized in \((-1,1)\). Instead, after the depth head, we attach a convolution operation with \(K\) channels and a sigmoid activation in order to get depth maps normalized in \((0,1)\). Instead of outputting one prediction at a time as in prior work [9], we directly generate \(K\) flows and depth maps simultaneously, which makes the model faster compared to autoregressive models that would require looping over future steps.
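As a concrete illustration, a minimal PyTorch sketch of the two output heads described above is shown below; the UNet/ConvLSTM feature extractor is abstracted into a 64-channel feature map, and details not stated in the text (e.g. the \(3\times 3\) kernel size) are assumptions.

```python
import torch
import torch.nn as nn

def head(out_channels):
    # Conv2D+ReLU blocks with 32, 16, 8 filters, followed by a final projection.
    return nn.Sequential(
        nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, out_channels, 3, padding=1),
    )

class FlodcastHeads(nn.Module):
    def __init__(self, K=3):
        super().__init__()
        self.flow_head = head(2 * K)   # (u, v) for each of the K future steps
        self.depth_head = head(K)      # one depth map per future step

    def forward(self, feats):          # feats: (B, 64, H, W) from the backbone
        flow = torch.tanh(self.flow_head(feats))       # values in (-1, 1)
        depth = torch.sigmoid(self.depth_head(feats))  # values in (0, 1)
        return flow, depth
```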
### Loss
To train FLODCAST we compute a linear transformation of the original input values, by rescaling depth map values in \([0,1]\) and optical flows in \([-1,1]\) through a min-max normalization, with minimum and maximum values computed over the training set. Inspired by [36], we use the reverse Huber loss, called _BerHu_ for two main reasons: (i) it has a good balance between the two L1 and L2 norms since it puts high weight towards values with a high residual, while being sensitive for small errors; (ii) it is also proved to be more appropriate in case of heavy-tailed distributions [36], that perfectly suits our depth distribution, as shown in Fig. 2. BerHu minimizes the prediction error, through either the L2 or L1 loss according to a specific threshold \(c\) calculated for each batch during the training stage. Let \(x=\hat{y}-y\) be the difference between the prediction and the corresponding ground truth. This loss \(\mathcal{B}(x)\) is formally defined as:
\[\mathcal{B}(x)=\begin{cases}|x|,&|x|\leq|c|\\ \frac{x^{2}+c^{2}}{2c},&\text{otherwise}\end{cases} \tag{1}\]
Thus, we formulate our compound loss, using a linear combination of the optical flow loss \(\mathcal{L}_{\text{flow}}\) and the depth loss \(\mathcal{L}_{\text{depth}}\) (Eq. 2):
\[\mathcal{L}=\alpha\,\mathcal{L}_{\text{flow}}+\beta\,\mathcal{L}_{\text{depth}} \tag{2}\]
Specifically, we apply the reverse Huber loss to minimize both the optical flow and depth predictions, using the same loss formulation, since the threshold \(c\) is computed for each modality, and that value depends on the current batch data. Therefore, \(\mathcal{L}_{\text{flow}}\) is the loss function for the optical flow computed as:
\[\mathcal{L}_{\text{flow}}=\frac{1}{M}\sum_{j=1}^{M}\mathcal{B}(|OF_{j}-\widehat {OF}_{j}|) \tag{3}\]
where \(M=B\times R\times 2\), since the flow field has \((u,v)\) components over \(R\) image pixels and \(B\) is the batch size, whereas \(OF_{j}\) and \(\widehat{OF}_{j}\) are the optical flows, respectively of the ground truth and the prediction at the pixel \(j\). Likewise, we do the same for the depth loss \(\mathcal{L}_{\text{depth}}\):
\[\mathcal{L}_{\text{depth}}=\frac{1}{P}\sum_{j=1}^{P}\mathcal{B}(|D_{j}- \widehat{D}_{j}|) \tag{4}\]
where \(P=B\times R\), \(D_{j}\) and \(\widehat{D}_{j}\) are the depth maps, respectively of the ground truth and the prediction at the pixel \(j\). We follow [36] and we set \(c=\frac{1}{5}max_{j}(|y_{j}-\hat{y}_{j}|)\), i.e. the 20% of the maximum absolute error between predictions and ground truth in the current batch over all pixels.
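For reference, the loss of Eqs. 1-4 can be written as the following sketch; the per-batch threshold \(c\) follows the 20%-of-maximum rule stated above and is computed independently for each modality.

```python
import torch

def berhu(pred, target):
    """Reverse Huber (BerHu) loss of Eq. 1, averaged over all elements."""
    diff = (pred - target).abs()
    c = 0.2 * diff.max().detach()            # c = 1/5 of the max absolute error in the batch
    l2 = (diff ** 2 + c ** 2) / (2 * c + 1e-12)
    return torch.where(diff <= c, diff, l2).mean()

def flodcast_loss(flow_pred, flow_gt, depth_pred, depth_gt, alpha=10.0, beta=1.0):
    """Compound loss of Eq. 2: L = alpha * L_flow + beta * L_depth."""
    return alpha * berhu(flow_pred, flow_gt) + beta * berhu(depth_pred, depth_gt)
```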
## 4 Results
In this section we report our forecasting results on Cityscapes [37] for the depth and flow forecasting tasks. We first describe the experimental setting and the metrics used to evaluate our approach. Then, we present our results, comparing FLODCAST to state-of-the-art approaches. We also present ablation studies to better highlight the importance of all the modules in the architecture. Besides, in Sec. 5, we show that our approach can be easily applied to downstream tasks such as semantic segmentation and instance segmentation forecasting, demonstrating improvements, especially at farther prediction horizons.
### Dataset
For evaluation, we use Cityscapes [37], which is a large urban dataset with very challenging dynamics, recorded in several German cities. Each sequence consists of 30 frames at a resolution of \(1024\times 2048\). Cityscapes contains 5000 sequences, split in 2975 for train, 500 for validation and 1525 for testing. Different annotations are available. In particular, we leverage precomputed disparity maps for all frames, from which depth maps can be extracted through the camera parameters. There are also both instance and semantic segmentations that are available at the 20-th frame of each sequence.
### Experimental setting
We compute optical flows using FLowNet2 [27] (pretrained FlowNet2-c) and rescale them according to the maximum and minimum values in the training set, so to have normalized values in \((-1,1)\). Depth maps \(D\) are obtained using disparity data \(d\) and camera parameters (focal length \(f\) and baseline \(b\)), i.e. by computing \(D=f\cdot b/d\). Invalid measurements or zero-disparity values are set to \(0\). To normalize depth maps, we observe that most depth values fall within \(150\)m in the training set (Fig. 2). Thus, we cap values at \(150\)m and then normalize them in \((0,1)\). All frames are rescaled at \(128\times 256\) px for both data sources to accelerate learning. We train FLODCAST for \(30\) epochs using Adam and learning rate \(0.0001\). To balance the two losses in Eq. 2, we set \(\alpha=10\) and \(\beta=1\). At inference time we recursively employ the model by feeding as input previous predictions to reach farther time horizons. We provide outputs at a resolution of \(256\times 512\), following [38], by doubling the resolution. FLODCAST has approximately \(31.4\)M trainable parameters. The whole training takes \(58\) hours on a single GPU NVIDIA Titan RTX with \(24\)GB using a batch size of \(12\).
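The preprocessing described above can be sketched as follows; the training-set flow minimum/maximum used for normalization are assumed to be precomputed.

```python
import numpy as np

def disparity_to_depth(disparity, focal, baseline, max_depth=150.0):
    """Convert a disparity map to a normalized depth map in (0, 1)."""
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0                       # invalid / zero-disparity pixels stay at 0
    depth[valid] = focal * baseline / disparity[valid]
    depth = np.clip(depth, 0.0, max_depth)      # cap at 150 m
    return depth / max_depth

def normalize_flow(flow, flow_min, flow_max):
    """Min-max normalize optical flow to (-1, 1) using training-set statistics."""
    return 2.0 * (flow - flow_min) / (flow_max - flow_min) - 1.0
```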
### Evaluation metrics
We quantitatively evaluate depth forecasting using standard metrics as in [39]: (i) absolute relative difference (AbsRel), (ii) squared relative difference (SqRel), (iii) root mean squared error (RMSE) and (iv) logarithmic scale-invariant RMSE (RMSE-Log), defined as follows:
\[\text{AbsRel}=\frac{1}{N}\sum_{i=1}^{N}\frac{|y_{i}-\hat{y}_{i}|}{y_{i}}\quad \text{(5)}\qquad\qquad\text{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}|y_{i}-\hat {y}_{i}|^{2}} \tag{6}\]
\[\text{SqRel}=\frac{1}{N}\sum_{i=1}^{N}\frac{(y_{i}-\hat{y}_{i})^{2}}{y_{i}} \quad\text{(7)}\quad\text{RMSE-Log}=\frac{1}{N}\sum_{i=1}^{N}d_{i}^{2}-\frac {1}{N^{2}}\left(\sum_{i=1}^{N}d_{i}\right)^{2} \tag{8}\]
Figure 2: Distribution of depth values grouped by distance on the Cityscapes training set. Note that depth values below \(3\) meters are not present in the dataset.
where \(y\) and \(\hat{y}\) are the ground truth and the prediction, each with \(N\) pixels indexed by \(i\), while \(d=\log\hat{y}-\log y\) is their difference in logarithmic scale. AbsRel and SqRel are errors that can be also calculated at pixel-level, instead RMSE, RMSE-Log measure mistakes averaged on the whole image. In particular, AbsRel draws attention to the absolute difference between the prediction and the target with respect to the ground truth itself (e.g. an AbsRel of 0.1 means that the error is 10% of the ground truth), which makes it suitable for a fine-grained understanding. The SqRel instead emphasizes large errors since the difference is squared. RMSE is the root of the mean squared errors while RMSE-Log, introduced in [39], is an L2 loss with a negative term used to keep relative depth relations between all image pixels, i.e an imperfect prediction will have lower error when its mistakes are consistent with one another.
We also measure the percentage of inliers with different thresholds [39], i.e. the percentage of predicted values \(\hat{y}_{i}\) for which the ratio \(\delta\) with the ground truth \(y_{i}\) is lower than a threshold \(\tau\):
\[\%\ \text{of}\ \hat{y}\ \ \text{s.t.}\ \ \ \max\left(\frac{y_{i}}{\hat{y}_{i}}, \frac{\hat{y}_{i}}{y_{i}}\right)=\delta<\tau \tag{9}\]
with \(\tau=\{1.25,\,1.25^{2},\,1.25^{3}\}\).
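The depth metrics of Eqs. 5-9 correspond to the following computation, assumed to be evaluated only on valid (masked) pixels:

```python
import numpy as np

def depth_metrics(pred, gt):
    """AbsRel, SqRel, RMSE, RMSE-Log and delta-inlier ratios (Eqs. 5-9)."""
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    d = np.log(pred) - np.log(gt)
    rmse_log = np.mean(d ** 2) - np.mean(d) ** 2          # scale-invariant log error
    ratio = np.maximum(gt / pred, pred / gt)
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]
    return abs_rel, sq_rel, rmse, rmse_log, deltas
```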
We assess the performance of the flow forecasting task, by computing the mean squared error between the prediction and the groundtruth on both the two flow channels, using Eq. 10, and averaging them, as done in [9]:
\[\text{MSE}_{c}=\frac{1}{H\,W}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(f_{c}(i,j)- \widehat{f}_{c}(i,j)\right)^{2} \tag{10}\]
where \(\text{MSE}_{c}\) is the error referred to the channel \(c:=\{u,v\}\) between the ground truth optical flow field \(f_{c}(i,j)\) and the prediction \(\widehat{f}_{c}(i,j)\) at the pixel \((i,j)\), while \(H\) and \(W\) are the height and width, respectively. We also report the average end-point error EPE [40], which measures the per-pixel Euclidean distance between the prediction and the ground truth, averaged over all the image pixels:
\[\text{EPE}=\frac{1}{H\,W}\sum_{i=1}^{H\,W}\sqrt{(\hat{u}_{i}-u_{i})^{2}+(\hat {v}_{i}-v_{i})^{2}} \tag{11}\]
where \((u_{i},v_{i})\) are the horizontal and vertical components of the optical flow ground truth, likewise \((\hat{u_{i}},\hat{v_{i}})\) are the corresponding components of the prediction, at the \(i-th\) pixel.
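Similarly, Eqs. 10-11 translate directly into:

```python
import numpy as np

def flow_errors(pred, gt):
    """Per-channel MSE (Eq. 10, averaged over u and v) and EPE (Eq. 11).

    pred, gt: arrays of shape (H, W, 2) holding the (u, v) flow components.
    """
    mse_u = np.mean((gt[..., 0] - pred[..., 0]) ** 2)
    mse_v = np.mean((gt[..., 1] - pred[..., 1]) ** 2)
    epe = np.mean(np.sqrt(np.sum((gt - pred) ** 2, axis=-1)))
    return 0.5 * (mse_u + mse_v), epe
```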
### Future Depth Estimation
We evaluate our approach for future depth estimation on Cityscapes. As in prior works, e.g. [15], we evaluate our method after \(t+k\) frames, both at short-term (\(k=5\), after 0.29 sec) and at mid-term (\(k=10\), after 0.59 sec).
Since there is no official evaluation protocol for depth forecasting on Cityscapes, and considering the statistics of the training set (see Fig. 2), in which pixel occurrences strongly decrease as the depth increases, we clip values at 80 meters as done in prior work for depth estimation [38, 41].
For our experiments, we evaluate predictions using the same protocol of [38], i.e. by cropping out the bottom 20% of the image to remove the car hood, which is visible in every frame, then we rescale the frames at \(256\times 512\). In addition, we mask out ground truth pixels that are farther than the 80m threshold.
We compare our approach with existing methods [14, 15, 16]. We also consider the depth estimation method of [42], which is adapted to depth forecasting through a multi-scale F2F [5] before the decoder, and the future instance segmentation model [6] adapted to generate future depth estimation of the predicted features, as previously done in [16]. We also report the trivial _Copy last_ baseline [16], as a lower bound. Quantitative results for depth forecasting are reported in Table 1.
We exceed all the previous methods at short-term and mid-term predictions. Specifically, we beat all the existing approaches at short-term by a large margin for all the metrics, also reporting the highest inlier percentage. At mid-term we exceed all the state-of-the-art approaches in terms of AbsRel and SqRel, including the recent DeFNet (-42% and -8%), which employs both RGB frames and optical flows, even considering the camera pose during training. Differently from DeFNet, we exploit depth maps and optical flows as sources of information, since they provide complementary features related to motion and
\begin{table}
\begin{tabular}{c|c c c c|c c c} \hline \hline \multicolumn{8}{c}{Short term \(k=5\)} \\ \hline & \multicolumn{4}{c|}{Lower is better \(\downarrow\)} & \multicolumn{4}{c}{Higher is better \(\uparrow\)} \\ \hline Method & AbsRel & SqRel & RMSE & RMSE-Log & \(\delta<1.25\) & \(\delta<1.25^{2}\) & \(\delta<1.25^{3}\) \\ Copy last & 0.257 & 4.238 & 7.273 & 0.448 & 0.765 & 0.893 & 0.940 \\ \hline Qi et al. [14] & 0.208 & 1.768 & 6.865 & 0.283 & 0.678 & 0.885 & 0.957 \\ Hu et al. [15] & 0.182 & 1.481 & 6.501 & 0.267 & 0.725 & 0.906 & 0.963 \\ Sun et al. [6] & 0.227 & 3.800 & 6.910 & 0.414 & 0.801 & 0.913 & 0.950 \\ Goddard et al. [42] & 0.193 & 1.438 & 5.887 & 0.234 & 0.836 & 0.930 & 0.958 \\ DeFNet [16] & 0.174 & 1.296 & 5.857 & 0.233 & 0.793 & 0.931 & 0.973 \\ \hline FLOODCAST w/o flow & 0.084 & 1.081 & 5.536 & 0.196 & 0.920 & 0.963 & 0.980 \\
**FLOODCAST** & **0.074** & **0.843** & **4.965** & **0.169** & **0.936** & **0.971** & **0.984** \\ \hline \hline \multicolumn{8}{c}{Mid term \(k=10\)} \\ \hline & \multicolumn{4}{c|}{Lower is better \(\downarrow\)} & \multicolumn{4}{c}{Higher is better \(\uparrow\)} \\ \hline Method & AbsRel & SqRel & RMSE & RMSE-Log & \(\delta<1.25\) & \(\delta<1.25^{2}\) & \(\delta<1.25^{3}\) \\ Copy last & 0.304 & 5.006 & 8.319 & 0.517 & 0.511 & 0.781 & 0.802 \\ \hline Qi et al. [14] & 0.224 & 3.015 & 7.661 & 0.394 & 0.718 & 0.857 & 0.881 \\ Hu et al. [15] & 0.195 & 1.712 & **6.375** & 0.299 & 0.735 & 0.896 & 0.928 \\ Sun et al. [6] & 0.259 & 4.115 & 7.842 & 0.428 & 0.695 & 0.817 & 0.842 \\ Goddard et al. [42] & 0.211 & 2.478 & 7.266 & 0.357 & 0.724 & 0.853 & 0.882 \\ DeFNet [16] & 0.192 & 1.719 & 6.388 & 0.298 & 0.742 & 0.900 & 0.927 \\ \hline FLOODCAST w/o flow & 0.130 & 2.103 & 7.525 & 0.320 & 0.863 & 0.931 & 0.959 \\
**FLOODCAST** & **0.112** & **1.593** & 6.638 & **0.231** & **0.891** & **0.947** & **0.969** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative results for depth forecasting after \(t+k\) on Cityscapes test set, both at short-term and mid-term predictions, i.e. at \(k=5\) and \(k=10\) respectively.
geometric structure of the scene by means of a recurrent network. We believe that FLODCAST is capable of detecting such clues by extrapolating features from past sequences, which also implicitly contain the camera motion, without training a pose estimation network conditioned on specific future frames, as in [16], which clearly limits the application to forecasting depths only at the corresponding future time steps. We report a slight drop in terms of RMSE at mid-term compared to [15] and [16]; however, we still achieve concrete improvements in terms of RMSE-Log, reducing the error by 22%. This indicates that the relative depth consistency is much better preserved by our approach than by the competitors.
Using its recurrent nature, FLODCAST is capable of generating a sequence of depth maps in the future without temporal sub-sampling, i.e. by producing all the intermediate forecasting steps (not only the last one, as done in [16]). In dynamic scenarios, like an urban setting, this is particularly useful, since objects can appear and be occluded several times from one frame to another. Such behavior might not emerge from subsampled predictions.
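For clarity, the autoregressive roll-out used to reach farther horizons (predictions fed back as new inputs, \(K\) frames at a time) can be sketched as below; the tensor shapes and the model interface are assumptions consistent with the description in Sec. 3.

```python
import torch

@torch.no_grad()
def rollout(model, flows, depths, n_future, T=3):
    """Autoregressive forecasting: predictions are fed back as new inputs.

    flows  : (B, T, 2, H, W) past optical flows
    depths : (B, T, 1, H, W) past depth maps
    model  : returns (B, K, 2, H, W) flows and (B, K, 1, H, W) depths per call
    """
    out_flows, out_depths, generated = [], [], 0
    while generated < n_future:
        x = torch.cat([flows, depths], dim=2)          # (B, T, 3, H, W) input sequence
        pred_flow, pred_depth = model(x)
        out_flows.append(pred_flow)
        out_depths.append(pred_depth)
        generated += pred_flow.shape[1]
        # Slide the window: the K new predictions become the most recent "past".
        flows = torch.cat([flows, pred_flow], dim=1)[:, -T:]
        depths = torch.cat([depths, pred_depth], dim=1)[:, -T:]
    return (torch.cat(out_flows, dim=1)[:, :n_future],
            torch.cat(out_depths, dim=1)[:, :n_future])
```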
Some qualitative results are shown in Fig. 3 and 4, respectively for short-term and mid-term predictions. FLODCAST learns to locate the region containing the vanishing point by assigning higher depth values. Moreover, we observed that missing depth map values coming from zeroed values in the ground truth frames are mostly predicted correctly. This underlines that FLODCAST is able to anticipate depth maps up to mid-range predictions while being highly accurate, even though some parts of the scene may not have been labeled, due to bad measurements or missing data.
Figure 3: Visualization results of future predictions on Cityscapes test set at short-term. Black pixels in the ground truth (second column) are invalid measurements.
### Future Flow Estimation
We evaluate optical flow forecasting capabilities on Cityscapes, following the protocol of [10]. Therefore, we calculate the average end-point error EPE, according to Eq. 11, for the \(t+10\) frame (i.e. \(0.59\) sec ahead), corresponding to the \(20\)th frame of each val sequence. We carry out experiments at the resolution \(256\times 512\), by doubling the resolution, and we compare our approach with existing works, FAN [10] and OFNet [9], and some baselines from [10], namely (i) warping the flow field using the optical flow at each time step (_Warp Last_) and (ii) simply copying the last one (_Copy Last_).
Since our work is capable of providing optical flows for multiple future scenarios, we also assess our performance for every intermediate frame up to \(t+10\), by following the evaluation protocol in [9]. Thus, we measure the quality of our predictions generated autoregressively for each time step, by computing the mean squared error for the \(u\) and \(v\) components and averaging them, according to Eq. 10. We report our quantitative results in Tab. 2.
\begin{table}
\begin{tabular}{l|c c c c c c c c c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{8}{c}{MSE \(\downarrow\)} & \multicolumn{8}{c}{EPE \(\downarrow\)} \\ \cline{2-13} & t+1 & t+2 & t+3 & t+4 & t+5 & t+6 & t+7 & t+8 & t+9 & t+10 & t+10 \\ \hline Copy Last [10] & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(9.40\) \\ Warp Last [10] & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(9.40\) \\ FAN [10] & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(6.31\) \\ OFNet [9] & \(\mathbf{0.96}\) & \(0.94\) & \(1.30\) & \(1.40\) & \(1.78\) & \(1.88\) & \(2.16\) & \(2.38\) & \(2.88\) & \(2.66\) & \(2.08\) \\ FLODCAST w/o depth & \(0.98\) & \(\mathbf{0.80}\) & \(1.11\) & \(1.20\) & \(1.38\) & \(1.48\) & \(1.72\) & \(1.78\) & \(2.18\) & \(1.92\) & \(1.48\) \\ FLODCAST (Ours) & \(1.06\) & \(0.84\) & \(\mathbf{1.10}\) & \(\mathbf{1.12}\) & \(\mathbf{1.34}\) & \(\mathbf{1.44}\) & \(\mathbf{1.62}\) & \(\mathbf{1.68}\) & \(\mathbf{2.12}\) & \(\mathbf{1.74}\) & \(\mathbf{1.38}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative results for flow forecasting on Cityscapes val set. In bold the lowest error. We denote with the symbol “\(-\)” the cases where the corresponding result is not available or reproducible.
Figure 4: Visualization results of future predictions on Cityscapes test set at mid-term. Black pixels in the ground truth (second column) are invalid measurements.
We mainly found that the FLODCAST error drastically decreases over time. This brings us to some considerations. First of all, FLODCAST combines different modalities, also exploiting spatio-temporal information, and this turns out to be crucial to reduce the accumulation of error through time. Because optical flow and depth maps are complementary to each other, the model can better identify specific patterns, e.g. discriminating object motions at different resolutions in advance (see Fig. 8). This also allows the model to directly generate multiple future optical flows at a time with a shorter input sequence (i.e. \(T=3\) for FLODCAST while \(T=6\) for OFNet). Moreover, we found a substantial reduction of the MSE, up to \(33\%\) at \(t+10\), which also supports our observations. Considering that OFNet has more supervision during training, i.e. it forecasts an output sequence shifted by one step ahead with respect to its input, we believe this is the reason its performance is sometimes better at the earliest steps, while its error then grows faster than that of FLODCAST.
In absence of intermediate results of MSE for other methods (i.e. FAN, for which no source code and models are available, as denoted in Tab. 2), we compare the overall performance by evaluating the EPE error at \(t+10\), also against the Flow Anticipating Network (FAN) proposed in [10], that generates future flows in a recursive way, by using the finetuned version of their model, which is learned to predict the flow for the single future time step given the preceding frames and the corresponding segmentation images.
We found remarkable improvements even at \(t+10\), by reducing the EPE with respect to FAN and OFNet as well. This highlights our choice that using optical flow with depth maps is better for determining future estimates than with the semantic segmentations employed in FAN. Restricting to observing past optical flows to generate a future one, as done in OFNet, does not allow forecasting models to make reliable long-range predictions autoregressively. Further improvements are obtained when multiple frames are predicted at a time, as FLODCAST does. Then, we demonstrate that FLODCAST is more accurate in predicting unobserved motions far into the future, without requiring semantic data, that is typically harder to get labeled with respect to depth maps, which are directly obtained by using commercial devices like LiDARs or stereo rigs. We also observe that excluding the depth map from FLODCAST, flow performance is reduced, since EPE increases by \(6.8\%\). Despite the hard task of anticipating flow motion without seeing future frames, FLODCAST exceeds all the previous works, and it is more robust when depth is stacked into the input data.
### Ablation Study
In order to understand how significant the flow and depth as data sources are for anticipating the future, we exclude one of the two inputs at a time and we evaluate the performance compared with FLODCAST, which instead leverages both data sources.
_Depth Analysis._ To demonstrate the importance of incorporating flow features for depth forecasting, we exclude optical flow from the input and we train FLODCAST using the \(\mathcal{L}_{\text{depth}}\) loss (see Eq. 4) to estimate future depth maps.
From Tab. 1 we observe that generating future depth maps through the past ones without leveraging optical flow as source data, i.e. FLODCAST w/o flow, worsens the predictions under all of the metrics. This points out the relevance of combining features extracted from past scenes, in terms of 2D motion and depth. Nonetheless, predicting only future depth maps using our approach, even discarding the optical flow information, gets improvements compared to prior works such as [15; 16]. At short-term \(t+5\) FLODCAST w/o flow is the second best result overall, by reducing the errors by a large margin (e.g. AbsRel and SqRel respectively -53% and -27% from Hu et al, and -52% and -16% from DeFNet) with also higher percentage of inliers. At mid-term \(t+10\) we reported drops of performance of FLODCAST w/o flow still limiting the AbsRel error and getting higher accuracy of inlier pixels. Overall, removing optical flow from the input data, FLODCAST still works better than all the existing works on forecasting unseen scenarios but then the lack of the information affects the performance for farther frames. In addition, we compute the AbsRel error distribution of FLODCAST, when depth maps are predicted through only optical flows (orange bars) or employing our multimodal approach (blue bars) and we plot a histogram at \(t+10\) as function of the distance (Fig. 5).
We found notable improvements within 10 meters when optical flow is part of the input. This is crucial in terms of safety since objects moving around a self-driving agent can be better defined according to their predicted distances. Indeed, from an ego-vehicle perspective, parts of the scene close to the observer are more likely to change over time. Considering that we are forecasting the depth for the whole image, just a few regions move considerably, corresponding to dynamic objects. The rest of the scene, typically the background, like buildings or vegetation, exhibits instead a static behavior and does not change much
Figure 5: Ablation study on depth forecasting in Cityscapes test set. We report the AbsRel error at \(t+10\) per distance (in meters), both when the input data is composed of optical flows and depth maps (blue) or only depth (orange). Note that depth values below 3 meters are not present in the test set.
depth-wise even in presence of ego-motion. Therefore, the depth estimated for those far-away pixels contains little error and, consequently, the tails of the two plots tend to be quite similar. Considering that the histogram represents depth errors 10 frames after the last observed one, our FLODCAST is robust also at long distances when optical flow is part of the input. This also motivates our design choice of combining the two data sources in a multimodal and multitasking approach.
We further provide some qualitative results in Fig. 6, so to underline how the contribution coming from the flow features is significant in generating very accurate depth maps, especially on moving objects, like pedestrians and vehicles. It is noteworthy that 2D motion displacements in the scene help to correctly predict depth values on different moving objects close to each other, e.g. pedestrians crossing the street, whose estimated depths collapse in a unique blob when optical flow is not taken into account. The same happens for cars at different distances from the camera, where their predicted depths look lumped together. That suggests that the model without flow features is less capable of distinguishing single instances.
_Flow Analysis._ We discard depth maps from the input data and we train the network to predict future optical flows, i.e. by exploiting past flow features, while keeping the same \(\mathcal{L}_{\text{flow}}\) loss (see Eq. 3). We measure the optical flow predictions generated autoregressively for each time step, by computing the mean squared error on both flow channels and averaging them (Eq. 10). From the flow forecasting results reported in Tab. 2, we observe that features extracted from both the optical flows and depth maps contribute to
Figure 6: Qualitative results of predicted depth maps of FLODCAST trained with or without optical flows (4th and 5th row respectively). The first two rows are the last observed frame \(I_{t}\) and the future one, \(I_{t+10}\). The third row contains ground truth depth maps for the three samples. Pixel-wise AbsRel errors between FLODCAST w/o flow and our FLODCAST are depicted as heatmap plots in the 6th row for 3 different sequences in the Cityscapes test set.
reduce the MSE errors on predicted flows, resulting in overall improvements after the first steps up to at \(t+10\), i.e. \(+33\%\) over OFNet and \(+9\%\) over FLODCAST w/o depth, which is significant considering the high uncertainty for farther future scenarios. Compared with OFNet, FLODCAST w/o depth has the FlowHead module (as depicted in Fig. 1), in which specialized weights of convolutional layers are end-to-end trained in order to directly generate multiple optical flows at a time. Despite the notable reduction of the error through time, FLODCAST overcomes its performance when depth maps are included in the source data, which points out the importance of our multimodal approach. Looking at the last prediction, i.e. at \(t+10\), FLODCAST w/o depth still exceeds other approaches, but reports an increase of the EPE error by \(+7\%\) with respect to our multimodal approach. This fact suggests that recurrent architectures can achieve good results for forecasting tasks and they can improve if they are multimodal. In addition, we study the EPE error distribution according to distance. To do that, we collect all the predicted flows upsampled to \(256\times 512\) at \(t+10\) on the test set, and we compute the error (see Eq. 11) for all the pixels falling into the corresponding distance-based bins and we represent their averages in Fig. 7. Here, orange bars are errors reported by only using optical flow in input, while the blue ones incorporate also depth maps, i.e. our proposed FLODCAST model.
As can be seen in Fig. 7, the overall trend of EPE is to decrease as the depth increases. This is due to the fact that, parts of the scene far enough from the camera typically produce similar small motion, like objects moving at the background or static parts that are mainly affected by the camera motion, thus the predicted optical flows for such pixels are likely to be more accurate. Instead, pixels closer to the camera tend to have a more pronounced motion and that affects the predictions, especially of farther frames. We observe that EPE
Figure 7: Ablation study on flow forecasting in the Cityscapes test set. We report the EPE error at \(t+10\) according to the distance (meters) of optical flows predicted by FLODCAST, in case of the input data being both optical flows and depth maps (blue) or only optical flows (orange). Note that depth values below 3 meters are not present in the test set.
errors of FLODCAST are always lower when depth maps are provided as input (blue bars) than only using optical flow as unique data source (orange bars). In particular, we gain more within 15 meters, which is the most relevant part of the scene concerning the safety and the drive planning of autonomous agents in very dynamic scenarios like the urban one. FLODCAST with depth maps has the potential to better disambiguate motions of pixels close to the observer than the far ones and vice versa.
Hence, flow forecasting results are more precise as long as the depth map is included in the input data. Based on this consideration, we reported in Fig. 8 some qualitative results on the Cityscapes test set, where we illustrate the ground truth optical flow in comparison with the optical flows obtained from FLODCAST, both exploiting or not the depth map as an additional input source. Finally, we show the heatmaps in the last row of Fig. 8 of the MSE errors with respect to the ground truth as differences between the predictions generated by FLODCAST without depth map and by FLODCAST using both data sources. Specifically, we report enhancements mostly on moving objects, whose shapes are more correctly defined, as shown in the red parts of the cars and the light blue around their shapes.
### Performance details
To take into account the forecasting problem in terms of anticipation, predictions have to be provided early. We therefore analyse the performance of FLODCAST at inference time. We test our model using a single NVIDIA RTX
Figure 8: Flow forecasting qualitative results on the Cityscapes test set. We use FLODCAST trained with or without depth maps (4th and 5th row respectively). The first two rows depict the last observed frame \(I_{t}\) and the future one, \(I_{t+5}\). The third row shows ground truth flows. In the 6th row we depict the difference of MSE errors wrt the ground truth between the predictions of FLODCAST using only past flows and both past flows and depths.
2080. At runtime, FLODCAST requires 8.8GB of GPU memory and it is able to forecast sequences of \(K=3\) consecutive depth maps and optical flows in 40ms (25FPS). Our predictions are estimated for multiple frames ahead simultaneously, which is more efficient than making predictions for a single one, as done in [10; 14; 15].
## 5 Segmentation Forecasting
We now show how FLODCAST can be employed to address downstream tasks such as forecasting segmentation masks. In fact, flow-based forecasting methods have demonstrated that warping past features onto future frames allows producing competitive semantic segmentations [3; 4; 9]. Since FLODCAST predicts dense optical flows in the future, we use the recent lightweight framework introduced in [9], to explore possible improvements on the segmentation forecasting problem as a downstream task through our predictions, in terms of binary instances and semantic categories. To this end, from the whole framework, which also includes a flow forecasting module, named OFNet, we only take MaskNet, which is a neural network that warps binary instances from the current frame onto the future one. Because MaskNet requires future optical flows to warp instances, we replace OFNet with FLODCAST, by only retaining our flow predictions and discarding depth maps.
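For intuition, the core flow-based warping operation underlying this pipeline can be sketched with bilinear sampling as below; note that this is only an illustration of warping a mask with a dense flow field, whereas MaskNet itself is a learned network and the exact handling of the flow direction follows [9].

```python
import torch
import torch.nn.functional as F

def warp_mask(mask, flow):
    """Warp a binary instance mask with a dense flow field via bilinear sampling.

    mask : (B, 1, H, W) mask at the current frame
    flow : (B, 2, H, W) pixel displacements (u, v) towards the future frame
    """
    B, _, H, W = mask.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys)).float().to(mask.device)   # (2, H, W) pixel coordinates
    coords = base.unsqueeze(0) + flow                       # displaced sampling locations
    # Normalize coordinates to [-1, 1] as expected by grid_sample.
    coords_x = 2.0 * coords[:, 0] / (W - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)        # (B, H, W, 2)
    return F.grid_sample(mask, grid, mode="bilinear", align_corners=True)
```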
In order to generate future predictions, both instance and semantic segmentations, we follow the same protocol training in [9]. We first finetune a MaskNet model pretrained on ground truth masks (the MaskNet-Oracle model from [9]), by feeding future optical flows predicted by FLODCAST. We perform separate trainings to make predictions up to \(T+3\) (short-term) and \(T+9\) frames ahead (mid-term)1. We denote these two models as MaskNet-FC. Second, we study how binary instances predicted by MaskNet can be improved. Because we employ predicted optical flow to estimate future binary masks, motion mistakes may affect some pixels of the object to be warped. We also believe that some drops in the performance of MaskNet are due to misleading pixels, that are badly labeled as background instead of instance and vice versa. This effect is more pronounced when an object appears smaller and its predicted flow is not accurate. Inspired by [43], we address this issue by introducing a Denoising AutoEncoder network (shortened to DAE) to the output of MaskNet, so to make binary masks cleaner and to make them as much aligned as possible to the ground truth. The network, depicted in Fig. 9, has an encoder consisting of Conv-ReLU-MaxPool sequences with 32, 64 and 128 filters, and a decoder where Conv-ReLU-UpSample operations are used with 128, 64 and 32 filters. The output is generated after a convolution operation with a single channel,
\(3\times 3\) kernel filter and a sigmoid activation function. At inference, outputs are binarized using a 0.5 threshold.
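A PyTorch sketch of the DAE consistent with this description is given below; the kernel sizes of the intermediate convolutions and the upsampling mode are assumptions not specified in the text.

```python
import torch
import torch.nn as nn

def down(cin, cout):
    # Conv-ReLU-MaxPool encoder block.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

def up(cin, cout):
    # Conv-ReLU-UpSample decoder block.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False))

class DAE(nn.Module):
    """Denoising autoencoder that refines warped binary instance masks."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(down(1, 32), down(32, 64), down(64, 128))
        self.decoder = nn.Sequential(up(128, 128), up(128, 64), up(64, 32))
        self.out = nn.Conv2d(32, 1, 3, padding=1)   # single-channel 3x3 output convolution

    def forward(self, mask):                        # mask: (B, 1, H, W) warped instance
        x = self.decoder(self.encoder(mask))
        return torch.sigmoid(self.out(x))           # thresholded at 0.5 at inference
```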
Because MaskNet warps object instances based on optical flows, the generated masks have to be fed to the DAE to get refined. Therefore, we train the DAE, by using autoregressive flows and freezing MaskNet pretrained weights. Specifically, we train DAE for 3 epochs with a per-pixel MSE loss function with predicted flows up to 3 frames ahead (i.e. \(T+3\), short-term). We observe that using a Dice loss [44] (already employed to train MaskNet), even in combination with the L2 loss, DAE performs worse than with the MSE function. We believe that is due to the fact that further improvements on instance shapes are not always possible with region-based losses (like Dice loss), instead MSE is more suitable to binarize an instance as a whole image. We continue to finetune the DAE for 3 more epochs using the autoregressive flows predicted up to 9 frames ahead (i.e. \(T+9\), mid-term) to adapt the network to less accurate inputs. Doing so, we are able to provide a single autoencoder trained to refine instances, which are generated by MaskNet through autoregressive flows predicted up to 9 frames ahead. Hence, our overall segmentation forecasting architecture, i.e. MaskNet-FC+DAE, is obtained by appending the DAE to the MaskNet mid-term model. This architecture allows to utilize a unique segmentation model to generate future instance segmentation up to 9 frames ahead.
We conduct experiments on the Cityscapes val set, generating future instance and semantic segmentations of 8 different categories of moving objects, both 3 frames and 9 frames ahead (up to about 0.5 sec later) as done in [5], respectively referred to in the literature as short-term and mid-term. We use the mAP and mAP50 metrics for instance segmentation, and mIoU (mean IoU) for semantic segmentation. We show our quantitative results in Table 3.
We report segmentation results achieved by MaskNet [9], using flows predicted by our FLODCAST, also considering the denoising autoebcoder (DAE), proposed to refine warped masks. We compare our results with the original flow-based approach MaskNet [9]. We also report the oracle reference, where a Mask RCNN [45] is used directly on future frames, as well as MaskNet-Oracle whose model is our upper bound flow-based approach since segmentations are warped using ground truth flows. Moreover, we listed the performances of 4 simple baselines and the commonly used F2F approach [5].
Figure 9: Denoising autoencoder (DAE) used to refine the generated future instance segmentation masks. The model is based on a convolutional encoder-decoder structure, where the encoder compresses the input into the latent space and the decoder gradually upsamples the features back to the original image size.
We found that MaskNet, using flows predicted by FLODCAST, improves at mid-term, getting \(+0.5\%\) and \(+2.9\%\), respectively for instance and semantic segmentations compared to the original formulation of [9]. Meanwhile, we observe a negligible drop at short-term, since FLODCAST generates more accurate flows after the first iteration. Because the segmentation performance typically degrades over the time, we pay attention to the impact of appending our DAE at the end of MaskNet to enhance instance and semantic results mainly at mid-term (i.e. 9 frames ahead, 0.5 sec), which is a more challenging scenario than the short-term one. When the DAE is trained to refine instance masks up to mid-term we report a considerable improvement against the F2F approach with a gain of \(+1.3\%\) in AP50 and \(+8\%\) in IoU. Some qualitative results of future instance and semantic segmentation are shown in Fig. 10.
We additionally provide some qualitative results in terms of instance segmentations predicted, by using FLODCAST and MaskNet-FC+DAE, in comparison with the previous framework, i.e. OFNet and MaskNet. We show enhancements on different objects and shapes predicted both at short-term (Fig. 11) and mid-term (Fig. 12), such as the big shapes (like trams and trucks) as well as some details (like car wheels and pedestrians on the ground).
## 6 Conclusions
In this work, we proposed FLODCAST, a novel multimodal and multitask network able to jointly forecast future optical flows and depth maps using a recurrent architecture. Differently from prior work, we forecast both modalities for multiple future frames at a time, allowing decision-making systems to reason at any time instant and yielding state-of-the-art results up to 10 frames ahead on the challenging Cityscapes dataset. We demonstrated the superiority of exploiting both optical flow and depth as input data against single-modality models, showing that leveraging both modalities in input can improve the forecasting capabilities for both flow and depth maps, especially at farther time horizons.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Method & \multicolumn{3}{c}{Short term (T+3)} & \multicolumn{3}{c}{Mid term (T+9)} \\ & AP & AP50 & IoU & AP & AP50 & IoU \\ \hline Mask RCNN oracle & 34.6 & 57.4 & 73.8 & 34.6 & 57.4 & 73.8 \\ \hline MaskNet-Oracle [9] & 24.8 & 47.2 & 69.6 & 16.5 & 35.2 & 61.4 \\ \hline Copy-last segm. [5] & 10.1 & 24.1 & 45.7 & 1.8 & 6.6 & 29.1 \\ Optical-flow shift [5] & 16.0 & 37.0 & 56.7 & 2.9 & 9.7 & 36.7 \\ Optical-flow warp [5] & 16.5 & 36.8 & 58.8 & 4.1 & 11.1 & 41.4 \\ Mask H2F [5] & 11.8 & 25.5 & 46.2 & 5.1 & 14.2 & 30.5 \\ F2F [5] & 19.4 & 39.9 & 61.2 & **7.7** & 19.4 & 41.2 \\ MaskNet [9] & **19.5** & **40.5** & **65.9** & 6.4 & 18.4 & 45.5 \\ \hline MaskNet-FC & 18.1 & 37.8 & 65.4 & 6.7 & 18.9 & 48.4 \\ MaskNet-FC+DAE (Ours) & 18.3 & 39.0 & 65.7 & 7.1 & **20.7** & **49.2** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Future instance segmentation (AP and AP50) and future semantic segmentation (IoU) of moving objects on the Cityscapes val set. Best results in bold, second best underlined.
We also demonstrated that FLODCAST can be applied on the downstream task of segmentation forecasting, relying on a mask-warping architecture, improved with a refining instance model that boosts mid-range predictions.
Future developments include the use of a transformer architecture to strengthen our multitask model. Other lines of research may include more capable mask-level segmentation models trained end-to-end with a flow forecasting architecture, in order to directly perform the task for multiple frames at a time, in the same spirit in which FLODCAST was designed.
|
2301.13618 | Scheduling Inference Workloads on Distributed Edge Clusters with
Reinforcement Learning | Many real-time applications (e.g., Augmented/Virtual Reality, cognitive
assistance) rely on Deep Neural Networks (DNNs) to process inference tasks.
Edge computing is considered a key infrastructure to deploy such applications,
as moving computation close to the data sources enables us to meet stringent
latency and throughput requirements. However, the constrained nature of edge
networks poses several additional challenges to the management of inference
workloads: edge clusters can not provide unlimited processing power to DNN
models, and often a trade-off between network and processing time should be
considered when it comes to end-to-end delay requirements. In this paper, we
focus on the problem of scheduling inference queries on DNN models in edge
networks at short timescales (i.e., few milliseconds). By means of simulations,
we analyze several policies in the realistic network settings and workloads of
a large ISP, highlighting the need for a dynamic scheduling policy that can
adapt to network conditions and workloads. We therefore design ASET, a
Reinforcement Learning based scheduling algorithm able to adapt its decisions
according to the system conditions. Our results show that ASET effectively
provides the best performance compared to static policies when scheduling over
a distributed pool of edge resources. | Gabriele Castellano, Juan-José Nieto, Jordi Luque, Ferrán Diego, Carlos Segura, Diego Perino, Flavio Esposito, Fulvio Risso, Aravindh Raman | 2023-01-31T13:23:34Z | http://arxiv.org/abs/2301.13618v1 | # Scheduling Inference Workloads on Distributed Edge Clusters with Reinforcement Learning
###### Abstract
Many real-time applications (e.g., Augmented/Virtual Reality, cognitive assistance) rely on Deep Neural Networks (DNNs) to process inference tasks. Edge computing is considered a key infrastructure to deploy such applications, as moving computation close to the data sources enables us to meet stringent latency and throughput requirements. However, the constrained nature of edge networks poses several additional challenges to the management of inference workloads: edge clusters can not provide unlimited processing power to DNN models, and often a trade-off between network and processing time should be considered when it comes to end-to-end delay requirements. In this paper, we focus on the problem of scheduling inference queries on DNN models in edge networks at short timescales (i.e., few milliseconds). By means of simulations, we analyze several policies in the realistic network settings and workloads of a large ISP, highlighting the need for a dynamic scheduling policy that can adapt to network conditions and workloads. We therefore design ASET, a Reinforcement Learning based scheduling algorithm able to adapt its decisions according to the system conditions. Our results show that ASET effectively provides the best performance compared to static policies when scheduling over a distributed pool of edge resources.
## I Introduction
In recent years, we have witnessed the growing popularity of applications leveraging Deep Neural Networks (DNNs), from Augmented/Virtual Reality (AR/VR) to cognitive assistance and video surveillance. The DNN model training process typically does not have strict latency constraints and is performed _offline_ in well-provisioned centralized data centers or in a distributed fashion via, e.g., federated learning [1]. In contrast, the DNN inference task is usually performed _online_ with constraints in terms of accuracy, throughput, and latency, which may differ significantly across applications. For instance, services like cognitive assistance require high accuracy but may tolerate a few hundred milliseconds of latency, while others, like self-driving cars, have more stringent latency needs (i.e., tens of milliseconds).
Providing an inference service requires addressing several challenges to meet this diverse set of application constraints, e.g., the selection of the appropriate variant of the model to be used (programming framework, compiler optimization, batching size, etc.), the processing unit to leverage for the inference (e.g., GPU, CPU, TPU), and the nodes and resources (e.g., memory, computing) to be allocated to every application. This requires management at different timescales. On a short timescale (i.e., milliseconds), a _scheduler_ is in charge of selecting the appropriate computing instance for every new incoming request to meet its application requirements. This includes not only the selection of the computation node but also the appropriate model variant and computation technology. On a longer timescale (i.e., seconds, minutes), an _orchestrator_ selects the proper model variants to deploy, optimizes their placement across the nodes, and allocates the appropriate resources to them. Recent work [2, 3, 4, 5] focused on data centers and proposed DNN inference workload management for such environments. Further, commercial solutions have been deployed in recent years [6, 7, 8] by major cloud providers.
Edge computing is considered a key enabler to deploy DNN-based applications with stringent delay or bandwidth requirements, as it moves computation capabilities closer to end users compared to centralized cloud platforms. This is especially the case for users connected via mobile access (e.g., 5G). However, realizing DNN inference at the edge poses several additional challenges. Edge infrastructures are complex networks composed of several layers with heterogeneous, limited resources and different latencies to end users [9]. Due to the limited availability of resources at the edge, multiple inference models of different capacities should be considered, and end-to-end delay requirements may require trading off network delay against processing time. This differs from centralized cloud platforms, which usually feature large pools of uniform hardware available in a single location, where DNN models can be scaled up almost indefinitely. For these reasons, the optimal selection of inference models while scheduling real-time requests at the edge is still a challenging task. Recent work combined edge computing and deep learning [10], with a focus on scheduling requests to minimize end-to-end delay [11] or maximize accuracy [12]. However, none of the existing work analyzes inference workload optimization taking into account different application constraints in realistic edge network settings.
In this paper, we focus on the problem of _scheduling_ DNN inference requests taking into account not only accuracy (i.e., model selection) but also throughput and latency constraints
under realistic edge deployment settings. First, we model our distributed edge inference system and provide a definition of the scheduling problem (Section III), also proposing several baseline static scheduling policies, both original and from the literature. From evaluating static policies on a realistic network topology, we observe that no single policy always performs best, as different applications may benefit differently from each scheduling strategy. Based on the insights derived from this analysis we propose ASET1 (Adaptive Scheduling of Edge Tasks), an adaptive scheduling algorithm based on Reinforcement Learning (Section IV), which dynamically follows system conditions and application requirements, optimizing its decisions accordingly. We evaluate ASET by simulating three topologies based on the realistic network of a large ISP and using a pool of reference edge applications (Section V). Our findings show that, while some static policies are well suited to optimize workloads on cloud-based topologies, ASET improves performance over any static policy when resources are distributed across the edge network, effectively increasing the percentage of successfully handled queries.
Footnote 1: In ancient Egyptian mythology, Aset was a major goddess said to have power over fate itself.
## II Related Work
The provisioning of on-demand inference services has been investigated in several recent works.
**Inference scheduling in data centers**. Most of the existing solutions address the common scenario where inference queries have to be scheduled over the resources of a Data Center. Some of the main production systems are Tensorflow Serving [6], Azure ML [7], and Cloud ML [8]. Most scientific works focused on proposing algorithms and strategies to improve the performance and ease of use of such cloud inference systems. [2] and [3] address the problem of scheduling Directed Acyclic Graph (DAGs) tasks with the objective of improving the throughput; GrandSLAMm [2] relies on a prediction model that estimates job duration, while [3] proposes an efficient RL approach to select the number of servers to allocate for a given job. Being oriented to a Cloud infrastructure, none of them takes into account network latency between the servers and their heterogeneity. In [13] a Model Master manages the dynamic allocation of DNN models across the servers of a heterogeneous data center based on Azure ML, and proposes a protocol among servers to forward queries to the correct destination. Clipper [4] provides a generalization of TensorFlow Serving [6] to enable the usage of different frameworks. One of the most complete solutions is provided by INFaaS [5], which focuses on ease of use, providing transparent scheduling of incoming queries over available model variants, and autoscaling of deployed models based on load thresholds. However, all the previous works address the scheduling problem only from the boundaries of a data center, considering neither _(i)_ network latency, thus becoming no suitable in scenarios with real-time constraints, nor _(ii)_ resource constrained clusters, thus failing to address situations where workers cannot be indefinitely scaled up/out.
**Inference offloading**. Another related set of works concerns offloading, with a focus on the end-devices. While offloading has been widely studied in the literature [14, 15], the specific use case of DNN workload introduces additional degrees of freedom (e.g., model variant selection and configuration) that can be exploited for improving optimization over the mere selection of the task placement. Some recent works [16, 17, 18] provides intelligent offloading techniques for DNN tasks. DeepDecision [17] addresses the problem in the particular case of a single device running a single application; queries are scheduled among a series of local small models providing different performance/requirements trade-off, and one remote model, which provides the best performance. On the other hand, LinkShare [18] focuses on the orthogonal problem of ordering the offloaded requests from multiple apps on the same device, with the main constraint of network bandwidth. MCDNN [16] proposes a scheduler to handle queries from multiple applications on the same device, deciding _(i)_ the model variant to be used and _(ii)_ whether to offload the inference task or not, seeking average accuracy maximization. Such decisions are taken considering constraints such as latency requirements, device energy, cloud monetary budget.
**Inference and edge computing**. Fewer and more recent are the works that combine DNNs with edge computing [10], with the aim of overcoming the scalability and latency limitations of cloud computing. The use of edge computing brings additional challenges deriving from the high resource requirements of DNN-based tasks on less powerful edge compute resources. Although some issues have been addressed in recent works [11, 12, 19, 20], edge-oriented solutions for inference systems are still largely embryonic compared to data center solutions, with many open challenges. CloudPath [19] focuses on the problem of data distribution on a hierarchical continuum of computing resources between edge and cloud. In [20], the authors propose an approach to schedule DAGs across multiple edge servers, seeking minimization of end-to-end latency. However, the proposed algorithm assumes that new edge servers can be allocated indefinitely when needed, with no geographical restrictions, thus not addressing the problem of constrained resources at the edge. Other works [11, 12] study the problem of processing data streams from scattered devices, exploiting geographically distributed edge/cloud clusters. In particular, VideoEdge [12] assumes a deployment of cameras generating a known set of video streams, on which various DNN tasks should be performed. The proposed approach decides globally the cluster where each stream should be processed, as well as the model variant to employ and its configuration, considering computation and network bandwidth as constraints and seeking accuracy maximization. However, neither processing nor network latencies are taken as constraints, thus making this approach unsuitable for interactive or critical scenarios (e.g., virtual reality, autonomous driving, and more). A similar use case is analyzed in [11], which focuses on minimizing the end-to-end latency of processing data flowing from the edge to the cloud. However, it only considers the problem of task allocation, missing the possibility to optimize by
properly selecting model variants and their configurations.
To the best of our knowledge, none of the existing works on inference serving systems addresses the problem simultaneously considering _(i)_ end-to-end latency, accuracy, and throughput constraints, _(ii)_ edge-cloud computing and multi-cluster deployment, _(iii)_ real-time job dispatching, _(iv)_ optimization on model variant selection.
## III Scheduling in edge-cloud infrastructure
In this section, we formally define the problem of scheduling inference tasks on a distributed edge-cloud infrastructure. Additionally, we describe a set of static scheduling policies (both original and from literature), that we then use in Section IV as a baseline for our dynamic scheduling approach.
### _System modeling_
**Applications and data-streaming sources.** We consider a set of sources (e.g., end users, IoT devices, vehicles) running a variety of applications (e.g., virtual reality, autonomous driving) each relying on one or more DNN inference tasks. Every application generates _queries_ to be processed, i.e., each query represents the request to perform a specific inference task \(j\in J\) (e.g., object detection, speech recognition) on a given input (e.g., a video frame), where \(J\) is the set of inference tasks supported by the system. Since applications often require more than one query to be processed, we treat sequential queries as streams (e.g., all the frames captured by an AR headset). Therefore, each query \(q\) belongs to a stream \(i\in I\), being \(I\) the entire set of streams currently served by the system. Every query of a stream has a set of requirements such as a maximum end-to-end delay \(D^{i}\), and a minimum required accuracy \(A^{i}\). Additionally, every stream \(i\) has a _data rate_\(\rho_{i}\), that is the number of queries submitted each second (e.g., frame rate), and every query of stream \(i\) has an input of _size_\(\zeta_{i}\) (e.g., frame size). Note that all queries of a stream are for the same task \(j\in J\) with the same requirements.
**DNN Models and Variants.** Every inference task \(j\) can be served using a _Deep Neural Network model_\(m\) among the set of \(M^{j}\) models that are trained for task \(j\). Therefore, the system provides a total of \(N_{m}=\sum_{j\in J}|M^{j}|\) DNN models. Take object detection as an example application. A model \(m\) represents a particular Neural Network architecture with pre-trained weights (e.g., yolo-v3, ssd-mobilenet-v1), and features a given accuracy \(A_{m}\) (mean average precision - mAP). A model \(m\) can be deployed and run through different setups and underlying hardware (e.g., SSD Mobilenet v1 on _(i)_ Tensorflow-GPU with batch size 8, or on _(ii)_ OpenCV-CPU batch size 1 and 2 replicas, and more), thus obtaining a set \(V^{m}\) of different _model variants_. A model variant \(v\) features a given _processing delay_\(D_{v}\), throughput _capacity_\(C_{v}\) (i.e., the maximum number of queries it can process per second), and _resource usage_\(\mathbf{r}_{v}\in\mathbb{R}_{+}^{k}\) (e.g., in terms of CPU, system memory and GPU memory). Note that the processing delay may vary based on the size \(\zeta_{i}\in\mathbb{R}_{+}\) of the input data, thus it is a function \(D_{v}\colon\mathbb{R}_{+}\to\mathbb{R}_{+}\); with \(D_{v}\) we refer to the time needed to process the maximum input size supported by the model (analogous considerations hold for the capacity \(C_{v}\)).
**Network topology and computing clusters.** We consider a geographically distributed cloud-edge infrastructure composed by \(N_{\nu}\) computing _clusters_ (e.g., a centralized data center, a telco regional cloud, an eNodeB) typically organized in a hierarchical topology. Each cluster potentially provides different resources. We denote \(\mathbf{c}_{n}\in\mathbb{R}_{+}^{k}\) the overall capacity of cluster \(n\), with \(c_{nk}\) representing the amount of resource \(k\in\mathbb{N}\) available on cluster \(n\). Examples of resources include CPU, system memory and GPU memory.
Model variants are deployed at different computing clusters consuming a different amount of resources. On a long timescale (i.e., seconds, minutes), an _orchestrator_ selects the appropriate set of model variants to deploy, optimizes their placement across the clusters, and allocates the appropriate resources. Finally, stream sources are connected to a small cluster at the lower layer of the hierarchy. This can be either the antenna/eNodeB in case of cellular communication or the home gateway in the fixed access case. Queries need to be _scheduled_ for processing across model variants available at different computing clusters to meet application requirements on a short timescale (i.e., tens of milliseconds). In the following, we provide a definition of the scheduling problem we tackle in this paper.
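Before formalizing the scheduling problem, the notation introduced above can be summarized in a few container types. The following Python sketch is purely illustrative: field names mirror the symbols used in the text (\(\zeta_{i}\), \(\rho_{i}\), \(b_{i}\), \(D^{i}\), \(A^{i}\), \(D_{v}\), \(C_{v}\), \(\mathbf{r}_{v}\), \(\mathbf{c}_{n}\), \(d_{n}^{s}\), \(\sigma_{n}^{s}\)) and are our own choices, not taken from the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Stream:
    task: str            # inference task j (e.g., "object-detection")
    zeta: float          # input size of each query (zeta_i)
    rho: float           # data rate in queries per second (rho_i)
    bitrate: float       # stream bit rate (b_i)
    max_delay: float     # maximum tolerated end-to-end delay D^i (seconds)
    min_accuracy: float  # minimum required accuracy A^i (e.g., mAP)

@dataclass
class ModelVariant:
    task: str                             # task j implemented by the parent model
    accuracy: float                       # accuracy A_m of the parent model
    proc_delay: Callable[[float], float]  # processing delay D_v as a function of input size
    capacity: float                       # throughput capacity C_v (queries per second)
    resources: Dict[str, float]           # resource usage r_v (CPU, memory, GPU memory)

@dataclass
class Cluster:
    name: str
    capacity: Dict[str, float]  # overall resource capacity c_n
    net_delay: float            # average network delay d_n^s from the scheduler
    delay_std: float            # delay deviation sigma_n^s
```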
### _Scheduling problem definition_
We assume a scheduler is located at the nearest compute cluster available to existing stream sources, i.e., the antenna/eNodeB or the home gateway/central office in the fixed access case. It follows that every stream source is served by a _scheduler_ \(s\) among \(N_{s}\) different ones (one for each lower-layer cluster). Each scheduler \(s\) has a given average network delay \(d_{n}^{s}\) towards each cluster \(n\); we also model the associated delay deviation as \(\sigma_{n}^{s}\). Note that an additional access delay from the stream source to the scheduler has to be taken into account (e.g., the radio latency between a device and the nearest 5G antenna). We denote by \(\delta_{i}\) the additional access delay that affects stream \(i\). Every scheduler is aware of each model variant \(v\) currently available on each cluster \(n\), each with its current load \(L_{vn}(t)\) (measured in terms of incoming queries per second2). Based on the current conditions of the available
Fig. 1: The scheduler dispatches streams of queries on available model variants based on their constraints and geographical position of clusters.
model variants, for every stream \(i\) it serves, a scheduler \(s\) decides which model variant \(v\) on which cluster \(n\) should be used to process stream \(i\).
When scheduling a stream \(i\) to the proper model variant/cluster, the scheduler takes into account application requirements. Specifically, it considers the stream data size \(\zeta_{i}\), its data rate \(\rho_{i}\), its bit rate \(b_{i}\), the maximum tolerated end-to-end delay \(D^{i}\) and the minimum required accuracy \(A^{i}\), satisfying the following constraints:
_(i)_ the selected model variant \(v\) is a valid implementation of task \(j\) required by \(i\),
\[v\in V^{m}\wedge m\in M^{j}; \tag{1}\]
_(ii)_ the load capacity of the chosen model variant is not exceeded,
\[L_{vn}(t)+\eta_{v}^{i}\rho_{i}\leq C_{v}, \tag{2}\]
being \(\eta_{v}^{i}\) the fractional load of stream \(i\) for model variant \(v\);
_(iii)_ the sum of expected network delay and processing time does not exceed the maximum tolerated delay,
\[2(\delta_{i}+d_{n}^{s}+2\sigma_{n}^{s})+b_{i}\zeta_{i}+D_{v}(\zeta_{i})\leq D^ {i}, \tag{3}\]
where the first addendum is the round-trip propagation time, the second is the transmission delay for one query and the third is the time needed to process the query;
_(iv)_ the selected model provides an adequate accuracy
\[A_{m}\geq A^{i}. \tag{4}\]
A graphical representation of the scheduling problem is depicted in Figure 1, while a scheduling policy can be formally defined as follows.
**Definition 1**.: _(scheduling policy). Let us consider a stream \(i\) to be processed through a task \(j\) on an edge-cloud infrastructure that features a set of \(V^{m}\) compatible model variants over \(N_{\nu}\) clusters (\(|N|=N_{\nu}\)). A scheduling policy is any function_
\[\beta\colon I\to V^{m},N \tag{5}\]
_that binds stream \(i\) to a feasible model variant \(v\in V^{m}\) deployed on cluster \(n\in N\), so that constraints at Equations (1), (2), (3), and (4) are satisfied._
Note that, as requests are handled in real-time, scheduler decisions should be taken in an amount of time that is negligible compared to the stream latency requirements.
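Constraints (1)-(4) translate directly into a feasibility predicate that any scheduling policy must respect before ranking candidate bindings. The sketch below, built on the illustrative containers introduced earlier plus the current load value \(L_{vn}(t)\), is our own rendering of the constraints rather than the authors' code; the transmission term follows Eq. (3) literally, and the fractional load \(\eta_{v}^{i}\) defaults to 1 as a simplification.

```python
def is_feasible(stream, variant, cluster, load, access_delay, frac_load=1.0):
    """Check constraints (1)-(4) for binding `stream` to (`variant`, `cluster`)."""
    # (1) the variant is a valid implementation of the requested task
    if variant.task != stream.task:
        return False
    # (2) the variant's load capacity is not exceeded
    if load + frac_load * stream.rho > variant.capacity:
        return False
    # (3) round trip + transmission + processing fit within the delay budget
    rtt = 2 * (access_delay + cluster.net_delay + 2 * cluster.delay_std)
    total = rtt + stream.bitrate * stream.zeta + variant.proc_delay(stream.zeta)
    if total > stream.max_delay:
        return False
    # (4) the selected model provides adequate accuracy
    return variant.accuracy >= stream.min_accuracy
```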
**Scheduling performance metrics and objectives.** Based on the scheduling decisions, in a given time instant \(t\) the stream \(i\) will feature a _reject ratio_\(q_{i}^{R}(t)\in[0,1]\), i.e., the fraction of queries from stream \(i\) that have not been processed by the system because of resource unavailability, and a _failure ratio_\(q_{i}^{F}(t)\in[0,1]\), i.e. the fraction of queries that have been served violating one or more application requirements (i.e., delivered out of maximum tolerated delay).
The goal of the scheduler is typically to maximize, over time, the fraction of queries that are served successfully, i.e., to minimize the sum of reject ratio and failure ratio.
### _Static scheduling policies_
Several policies have been proposed for static scheduling of inference tasks on edge clusters [9, 21]. In this work we consider the following ones (both original and from literature):
_1) closest:_ bind stream \(i\) to any feasible model variant \(v^{*}\) located on the cluster \(n^{*}\) that features the lowest network latency to the serving scheduler \(s\), i.e., \(n^{*}=\operatorname*{arg\,min}_{n\in N}\left(d_{n}^{s}+2\sigma_{n}^{s}\right)\). This policy may lead to the early saturation of smaller clusters at the very edge, as they are always preferred [22].
_2) load balancing:_ bind the input stream to model variant \(v^{*}\) on cluster \(n^{*}\) such that \((v^{*},n^{*})=\operatorname*{arg\,min}_{v,n\in V^{m}\times N}L_{vn}(t)\). This policy can bring huge performance gains compared to _closest_[22]; however, it may lead to unfair allocation when latency-sensitive applications are in the minority.
_3) farthest:_ bind stream \(i\) to any feasible model variant \(v^{*}\) located on the cluster \(n^{*}\) with the highest (still feasible) network latency, i.e. \(n^{*}=\operatorname*{arg\,max}_{n\in N}\left(d_{n}^{s}+2\sigma_{n}^{s}\right)\). As opposed to _closest_, this policy preserves smaller clusters at the very edge for those apps that really need them [23]; however, it is highly affected by the unreliability of network delay for long-distance communications.
_4) cheaper:_ bind stream \(i\) to model variant \(v^{*}\) on cluster \(n^{*}\) such that the expected end-to-end delay (round-trip and processing time) is maximized while remaining feasible, i.e., \((v^{*},n^{*})=\operatorname*{arg\,max}_{v,n\in V^{m}\times N}\left(2(d_{n}^{s }+2\sigma_{n}^{s})+D_{v}(\zeta_{i})\right)\). We designed this policy as an improvement over _farthest_, as it additionally tries to preserve the best-performing model variants.
_5) random-proportional latency:_ bind stream \(i\) to model variant \(v\) on cluster \(n\) with probability \(1/(2(d_{n}^{s}+2\sigma_{n}^{s})+D_{v}(\zeta_{i}))\). This guarantees that, on a large enough number of streams, bindings are proportionate to end-to-end delays [21].
_6) random-proportional load:_ bind stream \(i\) to model variant \(v\) on cluster \(n\) with probability \(C_{v}/L_{vn}(t)\). This guarantees that, on a large enough number of streams, bindings are proportional to the capacity of each model variant.
_7) least impedance:_ bind stream \(i\) to model variant \(v^{*}\) on cluster \(n^{*}\) such that end-to-end latency to \(s\) is minimized, i.e., \((v^{*},n^{*})=\operatorname*{arg\,min}_{v,n\in V^{m}\times N}\left(2(d_{n}^{s }+2\sigma_{n}^{s})+D_{v}(\zeta_{i})\right)\)[21]. This greedy policy leads to the best performance when the overall load is low, but may suffer from a high rejection rate once the closest and fastest model variants are saturated.
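To make the baselines concrete, the sketch below implements two of them, _closest_ and _least impedance_, on top of the feasibility predicate above; the remaining policies differ only in the scoring expression used to rank feasible (variant, cluster) pairs. The `load_of` lookup and all other names are illustrative.

```python
def closest(stream, candidates, load_of, access_delay):
    """candidates: (variant, cluster) pairs for the task; load_of(v, n) -> current L_vn(t)."""
    feasible = [(v, n) for v, n in candidates
                if is_feasible(stream, v, n, load_of(v, n), access_delay)]
    if not feasible:
        return None  # the stream is rejected
    # pick the pair on the cluster with the lowest expected network latency
    return min(feasible, key=lambda vn: vn[1].net_delay + 2 * vn[1].delay_std)

def least_impedance(stream, candidates, load_of, access_delay):
    feasible = [(v, n) for v, n in candidates
                if is_feasible(stream, v, n, load_of(v, n), access_delay)]
    if not feasible:
        return None
    # minimize round-trip time plus processing time
    return min(feasible, key=lambda vn: 2 * (vn[1].net_delay + 2 * vn[1].delay_std)
               + vn[0].proc_delay(stream.zeta))
```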
Our experiments (Section V) show that, for a heterogeneous pool of applications, a policy that always performs better than the others does not exist: different applications may benefit differently from each scheduling strategy, and the physical topology and the particular stream arrivals can also be decisive. Based on these findings, in the next section we propose ASET, an algorithm for Adaptive Scheduling of Edge Tasks that leverages Reinforcement Learning to optimize its decisions dynamically based on the current system conditions.
## IV ASET Scheduling Algorithm
Our adaptive scheduling approach aims to learn the optimal policy depending on current system conditions, e.g, current applications, network topology, and stream arrivals that vary over time. Due to the lack of labeled data, the optimal
policy learning is formulated as a Reinforcement Learning (RL) problem; hence, an intelligent agent tries to learn the optimal policy selection strategy according to the observed state of the environment. This is accomplished by an RL policy that estimates a probability distribution of each possible action (policy selection) that cumulatively maximizes a reward (typically maximizing the fraction of queries that are served successfully), as shown in Figure 2.
Let us consider a learner and decision-maker called the _agent_, and an _environment_ that is the external world the agent interacts with at discrete time steps \(t\). Given \(S_{t}\in S\), where \(S\) is the set of possible _states_ of the environment, the agent can select an _action_ \(A_{t}\in A(S_{t})\), where \(A(S_{t})\) stands for the set of available actions in state \(S_{t}\). The agent receives an observation of the environment \(S_{t}\) at time \(t\) and, one step later, a numerical _reward_, \(r_{t+1}\in R\subset\mathbb{R}\); based on this observation it determines the action \(A_{t}\) to perform, which, in part, yields the next state \(S_{t+1}\).
**Definition 2**.: _(stochastic reinforcement learning policy). An RL policy \(\pi_{\phi}\), where \(\phi\in\mathbb{R}^{d}\) denotes policy parameters, is any function or algorithm that determines and maps the next action to take by an agent. A stochastic RL policy, additionally, estimates a probability distribution over actions that an agent can take at a given state:_
\[\pi_{\phi}\colon\;A\,x\,S\to[0,1], \tag{6}\]
\[\pi_{\phi}(a|s)\stackrel{{\mathrm{def}}}{{=}}\mathbb{P}(\text{take action $a|$given state $s$}).\]
Overall, the goal of the proposed adaptive scheduling is to learn an optimal sequence of static network scheduling policies that maximizes the percentage of successfully dispatched streams. At a \(T\) seconds rate, the RL-based scheduler samples the environment by collecting a variety of observations from the edge-cloud infrastructure, e.g., responses and loads, building up the current state \(S_{t}\) of the environment. Then, the agent evaluates a discrete set \(A\) of actions and chooses an action \(A_{t}\in A\), where \(A\) stands in this work for the set of available network scheduling policies \(\beta\). Note that the set of actions does not depend on the state itself, thus the sets \(A(S_{t})=A\) are the same (Section III-C). Therefore, every time that the agent takes an action \(A_{t}\), the state of the environment \(S_{t}\) is observed and a reward score \(r_{t+1}\) is used as feedback information to improve the policy selection, see Figure 2. In this work, these rewards are defined as a linear combination of the ratio of "failed" queries and the ratio of queries that have been "rejected" for lack of available resources (Section IV-C).
The particular policy \(\beta_{t}\), selected by the agent at time \(t\), is used to dispatch all incoming streams during the subsequent time window \([t,t+T]\). Therefore, given the corresponding states sequence \(\mathbf{S}=[S_{0},S_{T},S_{2T},...,S_{kT}]\) with \(k\in\mathbb{N}\), the resulting overall scheduling policy \(\beta(\mathbf{S})=[\beta_{0},\beta_{T},\beta_{2T},...,\beta_{kT}]\) dynamically maps, with the corresponding baseline policies \(\beta_{t}\), a stream \(i\) to a model variant \(v\) and its deployment on cluster \(n\). From now on, for the sake of simplicity, we will refer to the policy learned by the ASET agent (Definition 2) as \(\pi\); it leads to a particular static policy sequence \(\beta(\mathbf{S})\) and corresponds to any function employed to estimate the optimal sequence of actions that the agent should perform at each time window \([t,t+T]\) given a state \(S_{t}\), \(\beta(\mathbf{S})=[A_{0},A_{T},A_{2T},...,A_{kT}]\). An intuition of this behavior is provided in Figure 3. Note that each of the static scheduling policies from Section III-C corresponds to a deterministic agent that always returns the same action \(A_{t}\) independently of the system state, whereas the policy \(\pi\) learned by the ASET agent can be seen as a meta-policy (a policy over baseline scheduling strategies) that also satisfies the constraints from Equations (1), (2), (3), and (4).
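Operationally, the agent only decides which baseline policy is active during each window of \(T\) seconds, while the per-query binding is still performed by that baseline. The minimal control loop below sketches this behavior; the `agent` and `env` interfaces, the \(\epsilon\)-greedy choice, and the simplified reward are hypothetical stand-ins for the components of Algorithm 1, not the authors' code.

```python
import random

def aset_control_loop(agent, env, policies, window=25.0, epsilon=0.05):
    """Every `window` seconds, pick the static policy used for all incoming streams."""
    state = env.observe(window)                  # S_t, aggregated over the last window
    while not env.done():
        if random.random() < epsilon:            # epsilon-greedy exploration
            action = random.randrange(len(policies))
        else:
            action = agent.act(state)            # index of the chosen baseline policy
        env.set_active_policy(policies[action])  # used for all streams in [t, t+T]
        env.run_for(window)
        next_state = env.observe(window)
        # simplified reward: penalize failed and rejected query ratios (Section IV-C)
        reward = -(env.failed_ratio() + env.rejected_ratio())
        agent.observe(state, action, reward, next_state)
        state = next_state
```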
### _Deep Q-Learning policy optimization_
Our RL agent has to cope with a discrete set of actions, with \(A\subset\mathbb{N}\). This is often modeled in literature as a stochastic process with no memory, which is a Markov Decision Process [24] (MDP). In this work, our MDP defined by tuples \((S,A,\mathcal{T},\mathcal{R},\gamma)\) represents states comprised of partial observations from the system. Nonetheless, the model parameters of such MDP are unknown, i.e., the transition probabilities \(\mathcal{T}(s^{\prime}|a,s)\) and the rewards \(\mathcal{R}(s^{\prime}|a,s)\) of taking the action \(A_{t}=a\) and moving from state \(S_{t}=s\) to state \(S_{t+1}=s^{\prime}\). Note that the ASET agent should experience each transition among states at least once, or even multiple times to get a reliable estimation of both transition and cumulative rewards.
At each step \(t=kT\), with \(k\in\mathbb{N}\), the RL agent can choose one of several possible scheduling policy-actions, \(\beta_{t}\equiv A_{t}\). The transition probability \(\mathcal{T}(s^{\prime}|a,s)\) depends in part on the chosen action; additionally, every state transition may return a positive or negative reward, whose cumulative weighted sum is named the _return_ of actions. Overall, our objective is to find a strategy, i.e., a policy \(\pi\) mapping to a sequence \(\beta(\mathbf{S})\), that maximizes the expected return \(G(t)\) of rewards over time.
Fig. 3: The ASET RL agent infers the optimal policy sequence based on the system conditions, seeking an optimal binding between workloads and model variants that maximizes the percentage of success queries. Plots show two runs on a cloud-based topology and on an edge-based one (see Section V).
Fig. 2: Algorithm overview. State \(S_{t}\), sampled from the environment, is forwarded through the agent DNN, which outputs action \(A_{t}\); performing \(A_{t}\) on the environment contributes to reward \(r_{t+1}\) obtained at the next step.
Thus, \(G(t)\) is defined in terms of the cumulative weighted rewards along with states and given the corresponding optimal sequence of actions to take in the future:
\[G(t)=\sum_{\tau=0}^{H}\gamma^{\tau}r_{\tau}\qquad\gamma\in[0,1], \tag{7}\]
where \(r_{\tau}=\mathcal{R}(s^{\prime}|a,s)\) is the reward at time step \(\tau\) due to corresponding state transition \((s,s^{\prime})\), \(\gamma\) is a weighting factor that reduces the contribution of long-term rewards, usually known as the discount factor, and time \(H\) is the last time step within a training episode (see Section IV-C for further details). Therefore, the RL agent's target policy is
\[\pi^{*}(a|s)=\operatorname*{arg\,max}_{\pi_{\phi}}\mathbb{E}_{t^{*}\sim\pi_{ \phi}}\left\{G(t)\right\}, \tag{8}\]
which translates the scheduler state into a distribution over actions, see Definition 2. Note that the expectation is computed over the distribution of trajectories \(t^{*}=(s_{0},a_{0},s_{1},...)\).
In Q-Learning, the optimal pair values \((s,a)\), i.e., those yielding to the sequence of optimal actions, are generally called Quality-Values (Q-Values) and noted as \(Q^{*}(s,a)\)[25]. They correspond to the sum of weighted rewards that the RL agent can expect on average after performing action \(a\) on state \(s\). It is also known as the _expected return of actions_,
\[Q(s,a)=\mathbb{E}_{t^{*}\sim\pi_{\phi}}\left\{G_{t}|S_{t}=s,A_{t}=a\right\}. \tag{9}\]
Bellman [24] showed that if an agent's trajectory follows the highest Q-Values, then its policy is optimal and leads to the highest \(G(t)\) as well. Bellman also reported that an accurate estimate of Q-Values can be found recursively by using the _Bellman Optimality Equation_, also known as the Value Iteration algorithm. In fact, Q-Learning is an adaptation of Bellman's value iteration algorithm, where a policy is implicitly, or off-line, learned by following trajectories yielding to the highest Q-Values [25]. It is usually computed by dynamic programming and assumes that the optimal value of state \(S_{t}=s\) is equal to the reward it will get on average, after taking one optimal action \(a\) and adding the expected optimal value of all possible next states along the future path of decisions, that is \(Q(s,a)=\mathbb{E}_{\pi}\left\{r+\gamma\max_{a^{\prime}}Q(s^{\prime},a^{\prime })|s,a\right\}\). Equation (9) turns out into the following iteration algorithm, which converges to the optimal \(Q^{*}(s,a)\),
\[Q_{k+1}(s,a)\leftarrow\sum_{s^{\prime}}\mathcal{T}(s,a,s^{\prime})\big{[}r+ \gamma\max_{a^{\prime}}Q_{k}(s^{\prime},a^{\prime})\big{]}, \tag{10}\]
for all \(s^{\prime}\in S\), \(a^{\prime}\in A\), with \(k\in\mathbb{N}\) the iteration step. For simplicity, we set all elements of the transition probability matrix \(\mathcal{T}\) equal to \(1\), allowing initial transitions among all seen states. Once Q-Values are estimated, the optimal policy \(\pi^{*}\) for the RL agent corresponds to choosing the action that has the highest Q-Value: \(\pi^{*}(a|s)=\operatorname*{arg\,max}_{\pi}Q_{\pi}(s,a)\), for all \(s\in S\) and \(a\in A\), with \(A\) equivalent to the set of static policies \(\beta\) in Section III-C.
However, the previous algorithm does not scale to MDPs with a large number of states. A solution is to approximate the optimal \(Q^{*}(s,a)\) using a Deep Neural Network, named Deep Q-Network (DQN) [26], to get an estimate \(Q(s,a;\phi)\approx Q^{*}(s,a)\), where \(\phi\) stands for the parameters of the DQN model, see line 16 in Algorithm 1. The use of a DQN for approximate Q-Learning is known as Deep Q-Learning.
### _State Encoding_
We model the state in a continuous fashion, representing the environment in a given time \(t\) as a set of some particular features sampled from the system and averaged along a time window of size \(T\). Features are evaluated separately for each available worker \(w\in W\),3 and are as follows: _(i)_ the number \(|I_{w}|\) of streams currently served by worker \(w\), being \(I_{w}=\{i\in I|\ \beta(i)=(v,n)\}\); _(ii)_ the current throughput \(R_{w}(t)\) of worker \(w\), in terms of responses delivered at the time instant \(t\); _(iii)_ the current load \(L_{w}(t)\), measured in terms queries per second normalized on input size (as defined in Section III-B); _(iv)_ number of incoming instant queries grouped by stream characteristics, e.g., queries of all streams that require end-to-end delay within a given range \([\delta^{1},\delta^{2}[\) and features a data rate in the interval \([\rho^{4},+\infty[\), i.e., \(\sum_{i\in I_{1,4}}\rho_{i}\), where \(I_{1,4}=\{i\in I|\ \ D^{i}\in[\delta^{1},\delta^{2}[\wedge\rho_{i}\in[\rho^{4},+\infty[\) ). In particular, we consider a partition \(0=\delta^{0}<\delta^{1}<\delta^{2}<...<\delta^{N_{\delta}-1}\) of \(\mathbb{R}_{+}\) with \(N_{\delta}\) delay intervals, and a second partition \(0=\rho^{0}<\rho^{1}<\rho^{2}<...<\rho^{N_{\rho}-1}\) of \(\mathbb{R}_{+}\) with \(N_{\rho}\) input-rate intervals, evaluating \(N_{\delta}\cdot N_{\rho}\) different sum of instant queries, that is one feature for each combination of the two partitioning sets. The features defined so far constitute a vector as,
Footnote 3: A worker is a model variant instance \(v\) running on a particular cluster \(n\), therefore we can assume index \(w=n\cdot v+v\).
\[\mathbf{S}_{w,t}=\left[|I_{w}|,R_{w}(t),L_{w}(t),\sum_{i\in I_{0,0}}\rho_{i},\ldots,\sum_{i\in I_{N_{\delta}-1,N_{\rho}-1}}\rho_{i}\right] \tag{11}\]
where \(\mathbf{S}_{w,t}\in\mathbb{R}_{+}^{3+N_{\delta}\cdot N_{\rho}}\). Therefore, the complete state \(\mathbf{S}\) is modeled as a three-dimensional vector in \(\mathbb{R}_{+}^{(3+N_{\delta}\cdot N_{\rho})\times|W|\times T}\), that is, each feature in (11) is first evaluated for each available worker (each model variant on each node), and then for each time instant within the considered time window. For instance, vector \(\mathbf{S}_{w}\) stores, for worker \(w\), every features in (11) evaluated at every time instant \(t-T+1\), \(t-T+2\),..., \(t\) within time window \([t-T+1,t]\). From now, we refer to the state vector encoding as simply \(s\) or \(s_{t}\) for a generally speaking state or a state referred to a time window, respectively.
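The state tensor can be assembled as in the NumPy sketch below, which assumes per-worker monitoring records are available as dictionaries; the (delay, rate) bucket boundaries are illustrative placeholders for the partitions \(\delta^{0}<\dots<\delta^{N_{\delta}-1}\) and \(\rho^{0}<\dots<\rho^{N_{\rho}-1}\), and the axis ordering is one possible layout of the three dimensions.

```python
import numpy as np

DELAY_EDGES = [0.0, 0.05, 0.15, 0.60]  # N_delta = 4 delay intervals (seconds), illustrative
RATE_EDGES = [0.0, 5.0, 15.0, 25.0]    # N_rho = 4 input-rate intervals (queries/s), illustrative

def worker_features(sample):
    """sample: per-worker stats for one time instant (hypothetical monitoring record)."""
    feats = [sample["num_streams"], sample["throughput"], sample["load"]]
    for d_lo, d_hi in zip(DELAY_EDGES, DELAY_EDGES[1:] + [np.inf]):
        for r_lo, r_hi in zip(RATE_EDGES, RATE_EDGES[1:] + [np.inf]):
            # sum of instant query rates of streams falling in this (delay, rate) bucket
            feats.append(sum(rho for delay, rho in sample["streams"]
                             if d_lo <= delay < d_hi and r_lo <= rho < r_hi))
    return np.array(feats, dtype=np.float32)

def encode_state(samples):
    """samples[w][t]: monitoring record of worker w at instant t within the window."""
    return np.stack([[worker_features(s) for s in per_worker]
                     for per_worker in samples])  # shape: |W| x T x (3 + N_delta * N_rho)
```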
### _Training_
The proposed RL scheduling agent is trained over a series of episodes that resemble various scenarios. Each episode corresponds to a different workload execution with given parameters, e.g. requirements from tasks, number of clients per minute (\(\lambda\)) or the seed value (\(\zeta\)) for random number generation (RNG), and is concluded when the percentage of success queries, \(q_{t}^{S}\), falls below a given threshold \(\theta\) or when a timeout \(H\) is reached. This allows us to speed up the training by terminating unsuccessful or steady episodes quickly. At every time step \(t\), a reward \(r_{t}\) scores the rate of successful queries, see Algorithm 1 at lines 9-10, where
\(q_{t}^{F}\) is the ratio of "failed" queries, i.e., those delivered violating one or more constraints (e.g., out of the tolerated delay), and \(q_{t}^{R}\) is the ratio of queries "rejected" by the system for lack of resources, both normalized over the corresponding time window. \(\psi\) is a penalty inversely proportional to the episode active time. It ensures that both short and bad action trajectories do not reach higher returns than optimal ones. Note that the DQN network is used to minimize the target loss L, see lines 14-18, with the Adam optimizer and learning rate \(\alpha\). It takes gradient steps on the Bellman error objective L, see factor \(C\) at line 16 and Eq. (10), concurrently with data collection from the replay buffer [27] for efficient off-line learning. This is a common hybrid approach to implement Q-Learning [25, 28, 29]. Additionally, we employ an \(\epsilon\)-greedy exploration policy, see line 5, with the parameter \(\epsilon\) dynamically updated. The architecture of our DQN consists of a stack of convolutional layers that extracts temporal correlations from the state tensor \(\mathbf{S}\). This feature extraction part is composed of three convolutional layers with 4x4 kernels along the time and feature dimensions, each followed by ReLU activation and max-pooling. Finally, two linear layers squeeze the dimension from 256 down to as many outputs as there are static policies \(\beta\).
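A compact PyTorch rendition of the Q-network and of one Bellman-error gradient step (Eq. (10), lines 14-18 of Algorithm 1) is sketched below; layer widths, padding, and the pooling at the end of the feature extractor are indicative choices, and terminal-state handling, target networks, and the replay buffer are reduced to their essentials.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DQN(nn.Module):
    """Convolutional feature extractor over the state tensor, followed by two linear layers."""
    def __init__(self, in_channels, n_actions, hidden=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=4, padding="same"), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=4, padding="same"), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, kernel_size=4, padding="same"), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse the (worker, time) spatial dimensions
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, hidden),
                                  nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, state):  # state: (batch, features, |W|, T)
        return self.head(self.features(state))

def dqn_step(q_net, optimizer, batch, gamma=0.99):
    """One gradient step on the Bellman error over a replay-buffer mini-batch."""
    s, a, r, s_next = batch                      # a: LongTensor of action indices
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():                        # bootstrap target r + gamma * max_a' Q(s', a')
        target = r + gamma * q_net(s_next).max(dim=1).values
    loss = F.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```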
## V Performance evaluation
We evaluate ASET using a prototype implementation of an edge inference system that will be released upon acceptance of the paper. We first use our prototype to run small scale experiments with the aim of profiling some representative models and their variants (results not shown). Then we use such profiling data to run large scale experiments on a simulated setup, comparing the performance of ASET to those of static scheduling policies.
### _Evaluation settings_
**System Prototype.** Our prototype implements the edge inference system functionalities described in Section III. On each cluster, a _Master_ deploys workers and routes streams between them and remote clients; each _Worker_ runs in a Docker container and implements a pipeline that processes queries in FIFO order from different streams, based on the model variant batch size; a _Monitoring_ agent on each cluster collects stats from model variants usage and their performance, used _(i)_ to build a catalog of model variants and _(ii)_ to provide each _Scheduler_ with aggregated observations on the system state. We use such a prototype to profile variants of pre-trained inference models with respect to their resource usage and performance (see below).
**Simulation Setup.** To evaluate our approach on a large scale, we set up a simulated environment where each worker simulates the inference task based on the profiling information available for its model variant. Therefore, empty responses are generated for each batch of queries after simulating a processing delay (based on a normal distribution). Additionally, we simulate network delay between stream sources and destination clusters (see below for considerations on the network topologies), as well as the transmission delay. Apart from simulated workers, other system components are deployed using their prototype implementation. Therefore, the system operates on a realistic timescale.
**Network topology.** We leverage the network topology of a large ISP to assess scheduling performance under realistic settings. Specifically, our simulated environment is a cloud-to-edge topology with clusters of different sizes deployed hierarchically. To preserve ISP confidentiality, we only report a high-level summary of topology, latency, and hardware distribution characteristics. Similarly to the tiered topologies from [30, 31], our topology can provide clusters with computation capabilities at different layers: network access (e.g., antennas, home gateway), central offices (multiple layers), operator data center, and remote cloud (third parties). Specifically, we focus on three scenarios: _(i)_ _dc-cloud_, where resources are deployed at the ISP data center and remote cloud only; _(ii)_ _co-dc-cloud_, where resources are deployed at central offices, the operator data center and the remote cloud; _(iii)_ _full-edge_ topology, where clusters are deployed at all layers previously mentioned. Note that we limit the simulations to the topology serving 1,000 antennas from the full ISP topology, and appropriately scale resources (see below). For the evaluation, we assume a 5G radio access technology with antennas deployed similarly to LTE. Network/transmission delays range from a few milliseconds, to reach the eNodeBs behind the antennas, to the order of ten milliseconds for central offices and ISP data centers, and a few tens of milliseconds for the remote cloud.
**Requests workload.** Requests are generated following a Poisson distribution. Each generator runs on average \(\lambda\) clients per minute querying the scheduler of a given geographical area (antenna). Once spawned, each client requests for processing a stream featuring randomized characteristics in terms of frame rate, required end-to-end latency, required model accuracy, frame sizes, stream duration. To capture realistic queries characteristics, we modeled metrics of generated streams according to the reference edge applications in Table I. In our settings, a generator with \(\lambda\) = 60 brings a load of almost 1000 queries per second on the serving antenna.
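The request workload can be reproduced with a simple Poisson arrival generator, as sketched below; inter-arrival times are exponential with rate \(\lambda\) clients per minute, and the per-stream characteristics are drawn from an abridged, illustrative subset of the profiles in Table I.

```python
import random

APP_PROFILES = {  # (max delay s, fps, duration range s, required mAP) -- abridged from Table I
    "pool":      (0.095, 5, (5, 10), 10),
    "ping-pong": (0.150, 18, (20, 40), 15),
    "gaming":    (0.025, 25, (600, 1800), 35),
    "ar-vr":     (0.040, 25, (30, 60), 35),
}

def generate_clients(lam_per_minute, horizon_s, seed=0):
    """Yield (arrival_time, stream_spec) pairs with Poisson arrivals of rate lambda/minute."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        t += rng.expovariate(lam_per_minute / 60.0)  # exponential inter-arrival time
        if t > horizon_s:
            return
        app = rng.choice(list(APP_PROFILES))
        max_delay, fps, (d_lo, d_hi), acc = APP_PROFILES[app]
        yield t, {"app": app, "max_delay": max_delay, "fps": fps,
                  "duration": rng.uniform(d_lo, d_hi), "min_accuracy": acc}
```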
**Computing clusters and model variant.** We assume a given
reference hardware distribution across clusters, with computing capabilities increasing from the access network to the cloud. Specifically, the access network can be equipped with an 8-16 cores machine, 16 GB of memory and a small TPU, central offices can host in the order of tens of servers (32-64 CPUs, 128-256 GB, and few GPUs), ISP data centers can host hundreds of servers, while for the centralized cloud we assume unlimited resources. In our evaluation, we focus on DNN models for the object detection task, as it is one of the most challenging and computation-intensive inference service [34, 35]. Using our prototype we profile MobileNet-SSD, Yolo-v3, and Tinyolo-v2 models [36, 37], with CPU and GPU variants on different batch sizes, scaling on allocated resources and number of replicas. Such a set of results is not shown for lack of space. We use profiled information to run our simulations on top of the three topologies described above. On each cluster, workers have been scaled on the number of replicas up to resource saturation.
### _Experimental Results_
We compare the performance of the baseline policies described in Section III-C, distinguishing results for the different applications of Table I. As a performance metric we consider the percentage of queries that are successfully processed by the system while satisfying the application QoS requirements. Figure 4(a) shows results of multiple runs with \(\lambda\) = 60. Results suggest that there is no one-size-fits-all policy, as various applications may benefit differently from each policy. Varying the rate of stream requests on the antenna (Figure 4(b)) may further increase the uncertainty of relying on a single policy. In the following, we compare the performance of the ASET RL scheduling approach with the performance of static policies, evaluating the benefits it can introduce in the various scenarios. We trained three different versions of ASET (one for each topology). In particular, we sample the state using a time window \(T\) = 25 seconds, and we experimentally chose an episode timeout of 8 minutes to avoid steady states in the network. Although we evaluate multiple client rates, our agent has been trained only on episodes with \(\lambda\) = 60.
**Cloud deployment.** When all the available resources are located in a few centralized clusters, the various static policies show small differences in performance and a dynamic approach has little room for improvement. Results for the dc-cloud topology are shown in Figures 5(a) and 5(b). In particular, Figure 5(a) plots, for every moment of the simulation (time axis), the percentage of queries that are handled successfully, averaging multiple runs with different workloads. The graph shows that, for this topology, ASET does not improve over static policies, and it even performs worse for higher lambdas (Figure 5(b)). Figure 5(c) shows that moving some resources to Central Offices (co-dc-cloud topology) makes a huge difference: in general, all the policies achieve a higher success ratio in this configuration, as they can exploit the additional lower-latency spots, and the higher level of distribution gives ASET a certain margin of improvement. Figure 5(d) shows that ASET introduces some improvement over all the baselines for every lambda, despite being trained only for \(\lambda\) = 60.
**Edge deployment.** The results so far suggest that a good distribution of computing resources is a key factor to improve against static scheduling policies. As shown in Figure 6, the benefits of using a dynamic scheduling approach become more
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline
**Edge app** & **Tolerated** & **Frame** & **Streams** & **Required** \\ & **delay** & **rate** & **duration** & **accuracy** \\ \hline Pool & 95 ms & 5 FPS & 5-10 s & 10 mAP \\ Workout Assistant & 300 ms & 2 FPS & 90 s & 10 mAP \\ Ping-pong & 150 ms & 15-20 FPS & 20-40 s & 15 mAP \\ Face Assistant & 370 ms & 5 FPS & 1-5 s & 30 mAP \\ Lego/Draw/Sandwich & 600 ms & 10-15 FPS & 60 s & 25 mAP \\ Gaming & 20-30 ms & 25 FPS & 10-30 m & 35 mAP \\ Connected Cars & 150 ms & 10-15 FPS & 15-30 m & 40 mAP \\ Tele-Robots & 25-35 ms & 10 FPS & 5 m & 40 mAP \\ Remote-driving & 20-30 ms & 20 FPS & 15-30 m & 50 mAP \\ Interactive AR/VR & 30-50 ms & 25 FPS & 30-60 s & 35 mAP \\ \hline \hline \end{tabular}
\end{table} TABLE I: Characteristics of reference applications [32, 33].
Fig. 4: Success percentage for different apps on the full-edge topology.
Fig. 5: Performance of ASET compared with static policies for (ab) the dc-cloud topology and (cd) the co-dc-cloud topology.
concrete in a full-edge topology, where resources are better distributed over multiple smaller clusters in different locations. In fact, Figure 6(a) shows that the dynamic approach of ASET is able to achieve a constant improvement over any static policy, with a higher success ratio over time. In particular, Figure 6(d) shows that, while maintaining the same rejection rate as the best static policy, ASET effectively reduces the number of queries that are handled in violation of one or more QoS requirements. Moreover, Figure 6(b) shows that an ASET agent trained only for \(\lambda\) = 60 can also generalize to different request rates, even supporting a load of more than 1600 queries per second (\(\lambda\) = 100) on a single antenna.
**Dynamic input rate.** We performed some additional experiments to evaluate how the system behaves in dynamic situations where the request rate varies over time. For this purpose, we set up dynamic runs where the lambda value changes every 150 seconds: a first pattern simulates a particularly fast variation with values of 20, 60, and 100 clients per minute (Figure 7(a)); a different pattern simulates a more steady scenario where the request rate first moves from 60 to 40 clients per minute, then drops to 20, and finally slowly goes back to 60 (Figure 7(b)). As in previous plots, the outcomes for this set of experiments are shown averaging values over time for multiple runs (Figure 7). Results in both figures show that a dynamic request arrival introduces an even bigger margin for improvement, which ASET effectively exploits, reaching the highest percentage of queries handled successfully. This is particularly evident in the case where the variation between client arrivals is faster and larger (Figure 7(a)). This result suggests that, while some of the static policies may achieve decent performance when the system load is stable, they struggle in more dynamic scenarios. In such situations, an adaptive algorithm such as ASET is more suitable, as it can learn how to best optimize the system under different conditions. Moreover, the results suggest that ASET training generalizes well, as the algorithm performs well under previously unseen dynamic conditions.
**Training and applicability.** Figure 8(a) shows the cumulative distribution of the time needed by ASET to infer a switch from the current policy to the one best suited to the current system conditions (non-dashed line). The switching delay is compared with the distributions of the intervals between subsequent requests for different lambdas. As shown, even for very large client loads (100 clients connected to the antenna), the time interval between two stream arrivals is typically on the scale of seconds or hundreds of milliseconds, while the delay for switching between policies is one order of magnitude smaller. Finally, Figure 8(b) shows the learning curve of ASET for different topologies under continuous stream request arrivals. The figure shows that ASET quickly reaches a certain level of performance in the first training iterations (before 100 episodes), independently of topology complexity, leaving room for further improvement in subsequent episodes depending on the margin left by the topology itself.
## VI Conclusions
This paper proposes ASET, an adaptive algorithm based on Reinforcement Learning for scheduling inference workloads at the network edge. ASET solves the problem of exploiting scattered clusters of resources to serve inference queries from multiple edge applications (e.g., AR/VR, cognitive assistance). We model an edge inference system where queries from different access networks are processed across a multitude of distributed processing locations. The constrained nature of the edge network introduces a trade-off between network delay and processing time based on the various available DNN models. In such a scenario, ASET optimizes the binding between inference stream requests and available DL models
Fig. 8: (a) Delay for switching policy compared with requests arrival intervals. (b) Learning curve while training ASET on different topologies.
Fig. 6: Performance of ASET compared with static policies for the full-edge topology. (a) (c) and (d) show averages of multiple runs with \(\lambda\) = 60.
Fig. 7: Performance of ASET varying the requests rate over time with two different load variation patterns (full-edge topology).
across the network, maximizing the throughput and ensuring that any requirement in terms of inference accuracy and end-to-end delay is satisfied. We evaluated our approach over the realistic network topology of a large ISP and considering a heterogeneous pool of edge applications. Our findings show that ASET effectively improves the performance compared to static policies when resources are deployed across the whole edge-cloud infrastructure.
|
2309.16142 | On the Steenrod module structure of $\mathbb{R}$-motivic
Spanier-Whitehead duals | The $\mathbb{R}$-motivic cohomology of an $\mathbb{R}$-motivic spectrum is a
module over the $\mathbb{R}$-motivic Steenrod algebra
$\mathcal{A}^{\mathbb{R}}$. In this paper, we describe how to recover the
$\mathbb{R}$-motivic cohomology of the Spanier-Whitehead dual $\mathrm{DX}$ of
an $\mathbb{R}$-motivic finite complex $\mathrm{X}$, as an
$\mathcal{A}^{\mathbb{R}}$-module, given the $\mathcal{A}^{\mathbb{R}}$-module
structure on the cohomology of $\mathrm{X}$. As an application, we show that 16
out of 128 different $\mathcal{A}^{\mathbb{R}}$-module structures on
$\mathcal{A}^{\mathbb{R}}(1):= \langle \mathrm{Sq}^1, \mathrm{Sq}^2 \rangle$
are self-dual. | Prasit Bhattacharya, Bertrand J. Guillou, Ang Li | 2023-09-28T03:44:24Z | http://arxiv.org/abs/2309.16142v2 | # On the Steenrod Module Structure of \(\mathbb{R}\)-Motivic Spanier-Whitehead Duals
###### Abstract.
The \(\mathbb{R}\)-motivic cohomology of an \(\mathbb{R}\)-motivic spectrum is a module over the \(\mathbb{R}\)-motivic Steenrod algebra \(\mathcal{A}^{\mathbb{R}}\). In this paper, we describe how to recover the \(\mathbb{R}\)-motivic cohomology of the Spanier-Whitehead dual DX of an \(\mathbb{R}\)-motivic finite complex X, as an \(\mathcal{A}^{\mathbb{R}}\)-module, given the \(\mathcal{A}^{\mathbb{R}}\)-module structure on the cohomology of X. As an application, we show that \(16\) out of \(128\) different \(\mathcal{A}^{\mathbb{R}}\)-module structures on \(\mathcal{A}^{\mathbb{R}}(1):=\langle\mathrm{Sq}^{1},\mathrm{Sq}^{2}\rangle\) are self-dual.
Guillou was supported by NSF grant DMS-2003204. Bhattacharya is supported by NSF grant DMS-2305016.
the \(\mathbb{R}\)-motivic Steenrod algebra \(\mathcal{A}^{\mathbb{R}}\) on the Spanier-Whitehead duals of those finite \(\mathbb{R}\)-motivic spectra whose cohomology is free over \(\mathbb{M}^{\mathbb{R}}_{2}\).
Let us pause to briefly discuss Boardman's mandala. Given a finite cell complex \(\mathrm{X}\) there are eight ways in which its mod 2 homology and cohomology interact with the Steenrod algebra and its dual. They represent the vertices of the mandala. Boardman identified the relationships between them, which represent the edges. Each edge of the mandala corresponds to a formula. For example, the edge \(\mathrm{D}^{\prime\prime}\) in Figure 1.1 corresponds to the formula (see [B, p. 190])
\[\langle(\mathrm{D}^{\prime\prime}\phi^{\prime}_{\mathrm{L}})(\alpha\otimes \mathsf{f}),\mathsf{x}\rangle=\langle\mathsf{f},\phi^{\prime}_{\mathrm{L}}( \chi(\alpha)\otimes\mathsf{x})\rangle \tag{1.1}\]
that relates the left \(\mathcal{A}\)-module structure on the cohomology \(\mathrm{H}^{*}(\mathrm{X})\) with that of the left \(\mathcal{A}\)-module structure on the homology of \(\mathrm{X}\). However, not all edges of the mandala exist for a general cohomology theory \(\mathrm{E}\) ([B, Section 6]).
When \(\mathrm{H}^{*}(\mathrm{X}):=[\mathrm{X},\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}]^ {\star}\) is free and finitely generated over \(\mathbb{M}^{\mathbb{R}}_{2}\), \(\mathrm{H}_{\star}(\mathrm{X})\) is the \(\mathbb{M}^{\mathbb{R}}_{2}\)-linear dual of \(\mathrm{H}^{*}(\mathrm{X})\), as the relevant universal coefficient spectral sequence collapses. Consequently, the work in [B] relates the left action of \(\mathcal{A}^{\mathbb{R}}\) on \(\mathrm{H}^{*}(\mathrm{X})\) as well as the left action of \(\mathcal{A}^{\mathbb{R}}\) on \(\mathrm{H}_{\star}(\mathrm{X})\), to the \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule structure on \(\mathrm{H}^{*}(\mathrm{X})\) (see Proposition 3.1, Proposition 3.3 and Proposition 3.4). These relations are the green dashed edges in Figure 1.1. As a result, one deduces the left \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathrm{H}_{\star}(\mathrm{X})\) from that of \(\mathrm{H}^{*}(\mathrm{X})\) without resorting to an antiautomorphism (unlike (1.1)).
Our main application is concerned with identifying the \(\mathbb{R}\)-motivic spectra in the class \(\mathcal{A}^{\mathbb{R}}_{1}\) introduced in [BGL]. Each spectrum in \(\mathcal{A}^{\mathbb{R}}_{1}\) is a realization of some \(\mathcal{A}^{\mathbb{R}}\)-module structure on the subalgebra \(\mathcal{A}^{\mathbb{R}}(1):=\mathbb{M}^{\mathbb{R}}_{2}\langle\mathrm{S}q^{1 },\mathrm{S}q^{2}\rangle\subset\mathcal{A}^{\mathbb{R}}\) (see Figure 4.1). In the classical case, Davis and Mahowald [DM] showed that the subalgebra \(\mathcal{A}(1)\) of the Steenrod algebra admits four different left \(\mathcal{A}\)-module structures, of which two are self-dual (see also [BEM, Remark 1.1]). In [BGL], we showed that \(\mathcal{A}^{\mathbb{R}}(1)\) admits 128 different \(\mathcal{A}^{\mathbb{R}}\)-module structures. In this paper, we show:
**Theorem 1.1**.: _Among the 128 different \(\mathcal{A}^{\mathbb{R}}\)-module structures on \(\mathcal{A}^{\mathbb{R}}(1)\), only 16 are self-dual._
**Remark 1.2**.: In [BGL] we showed that every \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathcal{A}^{\mathbb{R}}(1)\) can be realized as a finite \(\mathbb{R}\)-motivic spectrum, but we do not know if they are unique. Hence, the spectra realizing a self-dual \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathcal{A}^{\mathbb{R}}(1)\) may not be Spanier-Whitehead self-dual.
Davis and Mahowald also showed [DM] that each realization of \(\mathcal{A}(1)\) is the cofiber of a self-map of the spectrum \(\mathcal{Y}:=\mathbb{S}/2\wedge\mathbb{S}/\eta\), where \(\eta\) is the first Hopf element in the stable stems. In the \(\mathbb{R}\)-motivic stable stems, both 2 and \(\mathsf{h}\) in \(\pi_{0,0}(\mathbb{S}_{\mathbb{R}})\) are lifts of \(2\in\pi_{0}(\mathbb{S})\) in the classical stable stems, and \(\eta_{1,1}\in\pi_{1,1}(\mathbb{S}_{\mathbb{R}})\) is the unique lift of \(\eta\) in bidegree \((1,1)\) (up to a unit). This results in two different \(\mathbb{R}\)-motivic lifts of \(\mathcal{Y}\), namely
\[\mathcal{Y}^{\mathbb{R}}_{(2,1)}=\mathbb{S}_{\mathbb{R}}/2\wedge\mathbb{S}_{ \mathbb{R}}/\eta_{1,1}\text{ and }\mathcal{Y}^{\mathbb{R}}_{(\mathsf{h},1)}=\mathbb{S}_{ \mathbb{R}}/\mathsf{h}\wedge\mathbb{S}_{\mathbb{R}}/\eta_{1,1}.\]
We showed in [BGL, Theorem 1.8] that each \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathcal{A}^{\mathbb{R}}(1)\) can be realized as the cofiber of a map between these \(\mathbb{R}\)-motivic lifts of \(\mathcal{Y}\). Here we show:
**Theorem 1.3**.: _Of the self-dual \(\mathcal{A}^{\mathbb{R}}\)-module structures on \(\mathcal{A}^{\mathbb{R}}(1)\), 8 can be realized as the cofiber of a self-map on \(\mathcal{Y}^{\mathbb{R}}_{(2,1)}\) and 8 as the cofiber of a self-map on \(\mathcal{Y}^{\mathbb{R}}_{(\mathsf{h},1)}\)._
**Notation 1.1**.: In all diagrams depicting modules over the Steenrod algebra, (i.e. in Figure 3.1, Figure 4.1, and Figure 4.2), a dot \(\bullet\) represents a rank one free module over the coefficient ring, black vertical lines indicate the action of \(\mathrm{Sq}^{1}\), blue curved lines indicate the action of \(\mathrm{Sq}^{2}\), and red bracket-like lines represent the action of \(\mathrm{Sq}^{4}\). A label on an edge represents that the operation hits that multiple of the generator. For example, in Figure 3.1, \(\mathrm{Sq}^{2}(\mathsf{x}_{2,1})\) is \(\tau\cdot\mathsf{x}_{4,1}\) and \(\mathrm{Sq}^{4}(\mathsf{x}_{2,1})\) is \(\rho^{2}\cdot\mathsf{x}_{4,1}\).
**Acknowledgements**.: We thank Agnes Beaudry, Mike Hill, Clover May, Sarah Petersen, Liz Tatum, and Doug Ravenel for a stimulating conversation at the conference, Homotopy Theory in honor of Paul Goerss, held at Northwestern University in March 2023. We also thank William Balderrama for an illuminating conversation, and we thank Dan Isaksen for pointing out a typo.
## 2. A review of the \(\mathbb{R}\)-motivic Steenrod algebra and its dual
In [11], Voevodsky defined the motivic Steenrod operations \(\mathrm{Sq}^{n}\), for \(n\geq 0\), and gave a complete description of the \(\mathbb{R}\)-motivic Steenrod algebra \(\mathcal{A}^{\mathbb{R}}\). It is free as a left module over the \(\mathbb{R}\)-motivic homology of a point,
\[\mathbb{M}_{2}^{\mathbb{R}}:=\pi_{\star}^{\mathbb{R}}\mathbf{H}_{\mathbb{R}} \mathbb{F}_{2}\cong\mathbb{F}_{2}[\tau,\rho], \tag{2.1}\]
where the element \(\tau\) is in bidegree \(\star=(0,-1)\), and \(\rho\) is in bidegree \(\star=(-1,-1)\).
The subalgebra \(\mathbb{M}_{2}^{\mathbb{R}}\subset\mathcal{A}^{\mathbb{R}}\) is not central, and therefore \(\mathcal{A}^{\mathbb{R}}\) has two \(\mathbb{M}_{2}^{\mathbb{R}}\)-module structures, one given by left multiplication and the other by right multiplication. The \(\mathbb{R}\)-motivic dual Steenrod algebra \(\mathcal{A}_{\star}^{\mathbb{R}}\) is defined to be the (left) \(\mathbb{M}_{2}^{\mathbb{R}}\)-linear dual of \(\mathcal{A}^{\mathbb{R}}\); it inherits an \(\mathbb{M}_{2}^{\mathbb{R}}\)-module structure, which we call the left action. The right \(\mathbb{M}_{2}^{\mathbb{R}}\)-action on \(\mathcal{A}^{\mathbb{R}}\) also induces an action of \(\mathbb{M}_{2}^{\mathbb{R}}\) on \(\mathcal{A}_{\star}^{\mathbb{R}}\), which we call the right action of \(\mathbb{M}_{2}^{\mathbb{R}}\) on \(\mathcal{A}_{\star}^{\mathbb{R}}\) (see [11, p. 48])1. These correspond to the left and the right unit
Footnote 1: Since \(\mathbb{M}_{2}^{\mathbb{R}}\) is commutative, there is no meaningful distinction between “left” and “right” actions. The adjectives are merely a bookkeeping device.
\[\eta_{\mathrm{L}},\eta_{\mathrm{R}}\colon\mathbb{M}_{2}^{\mathbb{R}}\longrightarrow\mathcal{A}_{\star}^{\mathbb{R}}\]
of the Hopf algebroid \((\mathbb{M}_{2}^{\mathbb{R}},\mathcal{A}_{\star}^{\mathbb{R}})\). Explicitly,
\[\mathcal{A}_{\star}^{\mathbb{R}}\cong\frac{\mathbb{M}_{2}^{\mathbb{R}}[\tau_{ 0},\tau_{1},\tau_{2},\ldots,\xi_{1},\xi_{2},\ldots]}{\tau_{n}^{2}=\tau\xi_{n+1} +\rho\tau_{0}\xi_{n+1}+\rho\tau_{n+1}} \tag{2.2}\]
with \(\eta_{\mathrm{L}}(\rho)=\eta_{\mathrm{R}}(\rho)=\rho\), \(\eta_{\mathrm{L}}(\tau)=\tau\) and \(\eta_{\mathrm{R}}(\tau)=\tau+\rho\tau_{0}\). The comultiplication
\[\Delta\colon\mathcal{A}_{\star}^{\mathbb{R}}\longrightarrow\mathcal{A}_{\star}^{\mathbb{R}}\otimes_{\mathbb{M}_{2}^{\mathbb{R}}}\mathcal{A}_{\star}^{\mathbb{R}} \tag{2.3}\]
is given by
* \(\Delta(\xi_{n})=\sum_{i=0}^{n}\xi_{n-i}^{2^{i}}\otimes\xi_{i}\), and
* \(\Delta(\tau_{n})=\tau_{n}\otimes 1+\sum_{i=0}^{n}\xi_{n-i}^{2^{i}}\otimes\tau_{i}\),
for all \(n\in\mathbb{N}\), where \(\xi_{0}\) is the unit \(1\). The conjugation map of the Hopf algebroid structure sends
* \(\mathsf{c}(\rho)=\rho\),
* \(\mathsf{c}(\tau)=\tau+\rho\tau_{0}\),
* \(\mathsf{c}(\xi_{n})=\sum_{i=0}^{n-1}\xi_{n-i}^{2^{i}}\mathsf{c}(\xi_{i})\), and
* \(\mathsf{c}(\tau_{n})=\tau_{n}+\sum_{i=0}^{n-1}\xi_{n-i}^{2^{i}}\mathsf{c}(\tau_{ i})\).
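For example, in the lowest nontrivial cases these formulas give
\[\Delta(\tau_{1})=\tau_{1}\otimes 1+\xi_{1}\otimes\tau_{0}+1\otimes\tau_{1},\qquad\mathsf{c}(\tau_{1})=\tau_{1}+\xi_{1}\mathsf{c}(\tau_{0})=\tau_{1}+\tau_{0}\xi_{1},\qquad\mathsf{c}(\xi_{2})=\xi_{2}+\xi_{1}^{2}\,\mathsf{c}(\xi_{1})=\xi_{2}+\xi_{1}^{3},\]
in agreement with the conjugates recorded in Table 2.1.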
**Remark 2.1**.: The coproduct \(\Delta\) in (2.3) is an \(\mathbb{M}_{2}^{\mathbb{R}}\)-bimodule map.
**Remark 2.2**.: The conjugation is not a map of left \(\mathbb{M}_{2}^{\mathbb{R}}\)-modules. In fact, it interchanges the left and right \(\mathbb{M}_{2}^{\mathbb{R}}\)-module structures on \(\mathcal{A}_{\star}^{\mathbb{R}}\).
### Kronecker product
The \(\mathbb{R}\)-motivic Kronecker product is a natural pairing between \(\mathbb{R}\)-motivic homology and cohomology which is constructed as follows: If \(\varphi:\mathrm{X}\longrightarrow\Sigma^{\mathrm{i},\mathrm{j}}\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\) represents the class \([\varphi]\in\mathrm{H}^{\star}(\mathrm{X})\) and \(\mathsf{x}:\Sigma^{\mathsf{m},\mathsf{n}}\mathbb{S}_{\mathbb{R}}\longrightarrow\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\wedge\mathrm{X}\) represents \([\mathsf{x}]\in\mathrm{H}_{\mathsf{m},\mathsf{n}}(\mathrm{X})\), then the composition
\[\Sigma^{\mathsf{m},\mathsf{n}}\mathbb{S}_{\mathbb{R}}\xrightarrow{\ \mathsf{x}\ }\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\wedge\mathrm{X}\xrightarrow{\ \mathrm{id}\wedge\varphi\ }\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\wedge\Sigma^{\mathrm{i},\mathrm{j}}\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\longrightarrow\Sigma^{\mathrm{i},\mathrm{j}}\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2},\]
with the last map given by the multiplication of \(\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\), is the element \(\langle\mathsf{x},\varphi\rangle\in\pi_{\star}(\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2})\cong\mathbb{M}_{2}^{\mathbb{R}}\).
The Kronecker pairing leads to a homomorphism
\[\mathfrak{n}\colon\mathrm{H}^{\star}(\mathrm{X})\longrightarrow\mathrm{Hom}_{\mathbb{M}_{2}^{\mathbb{R}}}(\mathrm{H}_{\star}(\mathrm{X}),\mathbb{M}_{2}^{\mathbb{R}}), \tag{2.4}\]
where \(\mathfrak{n}(\varphi)(\mathsf{x})=\langle\mathsf{x},\varphi\rangle\).
**Remark 2.3**.: When \(\mathrm{H}_{\star}(\mathrm{X})\) is free and finitely generated as an \(\mathbb{M}_{2}^{\mathbb{R}}\)-module, the map \(\mathsf{n}\) in (2.4) is an isomorphism. Consequently, elements in \(\mathrm{H}^{\star}(\mathrm{X})\) can be identified with linear maps from \(\mathrm{H}_{\star}(\mathrm{X})\), and the Kronecker product is simply the evaluation of functionals.
**Notation 2.1**.: Since both \(\mathcal{A}^{\mathbb{R}}\) and \(\mathcal{A}_{\star}^{\mathbb{R}}\) have a left and a right action of \(\mathbb{M}_{2}^{\mathbb{R}}\), let \(\mathcal{A}^{\mathbb{R}}\otimes_{\mathbb{M}_{2}^{\mathbb{R}}}^{\mathrm{left} }\mathcal{A}_{\star}^{\mathbb{R}}\) (likewise \(\mathcal{A}^{\mathbb{R}}\otimes_{\mathbb{M}_{2}^{\mathbb{R}}}^{\mathrm{right} }\mathcal{A}_{\star}^{\mathbb{R}}\)) denote the tensor product of left (likewise right) \(\mathbb{M}_{2}^{\mathbb{R}}\)-modules.
**Remark 2.4**.: When \(\mathrm{X}\) is \(\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\), the Kronecker product is a map of left \(\mathbb{M}_{2}^{\mathbb{R}}\)-modules \(\mathcal{A}_{\star}^{\mathbb{R}}\otimes_{\mathbb{M}_{2}^{\mathbb{R}}}^{\mathrm{ left}}\mathcal{A}^{\mathbb{R}}\to\mathbb{M}_{2}^{\mathbb{R}}\).
### The Milnor basis
The dual Steenrod algebra \(\mathrm{H}_{\star}(\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2})\cong\mathcal{A}_{ \star}^{\mathbb{R}}\) is free and degree-wise finitely generated as an \(\mathbb{M}_{2}^{\mathbb{R}}\)-module. Consequently, the natural map of (2.4) gives an isomorphism
\[\mathcal{A}^{\mathbb{R}}\cong\mathrm{Hom}_{\mathbb{M}_{2}^{\mathbb{R}}}( \mathcal{A}_{\star}^{\mathbb{R}},\mathbb{M}_{2}^{\mathbb{R}}) \tag{2.5}\]
of left \(\mathbb{M}_{2}^{\mathbb{R}}\)-modules. Taking advantage of the above isomorphism, Voevodsky [V, §13] defines the Milnor basis of the \(\mathbb{R}\)-motivic Steenrod algebra using the monomial basis of the dual Steenrod algebra (2.2).
For finite sequences \(\mathrm{E}=(\mathsf{e}_{0},\mathsf{e}_{1},\ldots,\mathsf{e}_{m})\) and \(\mathrm{R}=(\mathsf{r}_{1},\ldots,\mathsf{r}_{n})\) of non-negative integers, let \(\mathsf{\rho}(\mathrm{E},\mathrm{R})\) denote the element in \(\mathcal{A}^{\mathbb{R}}\) dual to the monomial
\[\mathsf{\tau}(\mathrm{E})\,\mathsf{\xi}(\mathrm{R}):=\prod_{i\geq 0}\tau_{i}^{\mathsf{e}_{i}}\prod_{j\geq 1}\xi_{j}^{\mathsf{r}_{j}}\]
in \(\mathcal{A}_{\star}^{\mathbb{R}}\). It is standard practice to set \(\mathcal{P}^{\mathrm{R}}:=\mathsf{\rho}(\mathbf{0},\mathrm{R})\) and \(\mathcal{Q}^{\mathrm{E}}:=\mathsf{\rho}(\mathrm{E},\mathbf{0})\). Moreover, \(\mathcal{Q}_{i}\) is shorthand for the dual to \(\uptau_{i}\).
In Table 2.1, we record, for each monomial \(\tau(\mathrm{E})\,\xi(\mathrm{R})\in\mathcal{A}_{\star}^{\mathbb{R}}\) in low degree, its image under the conjugation \(\mathsf{c}\) and its dual element in \(\mathcal{A}^{\mathbb{R}}\), both in terms of the Milnor basis as well as in terms of the generators \(\mathcal{G}:=\{\mathrm{Sq}^{2^{k}}:k\geq 0\}\). The latter description will be used in Section 3.3 and Section 4.
A number of these descriptions in terms of \(\mathcal{G}\) can be found in [V]. For example, see [V, Lemma 13.1 and Lemma 13.6]. The Adem relations (see [BGL, Appendix A]) are another useful tool. For example, the Adem relation \(\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}=\mathrm{Sq}^{6}+\uptau\mathrm{Sq}^{5}\, \mathrm{Sq}^{1}\) leads to the description for \(P^{3}=\mathrm{Sq}^{6}\). The formula for \(\mathcal{P}^{(0,1)}\) follows from [K, (6)]. Finally, the formula for \(\mathcal{P}^{(1,1)}\) can be deduced from expressing \(\mathrm{Sq}^{6}\,\mathrm{Sq}^{2}\) in terms of the Milnor basis. This can be done by evaluating the formula [V, (12.9)]
\[\left\langle\mathsf{x},\varphi\psi\right\rangle=\sum\left\langle\mathsf{x}^{\prime}\,\eta_{\mathrm{R}}\big{(}\langle\mathsf{x}^{\prime\prime},\psi\rangle\big{)},\varphi\right\rangle,\qquad\Delta(\mathsf{x})=\sum\mathsf{x}^{\prime}\otimes\mathsf{x}^{\prime\prime}\]
at \(\varphi=\mathrm{Sq}^{6}\), \(\psi=\mathrm{Sq}^{2}\), and \(\mathsf{x}\) monomials in low degree. This shows that \(\mathrm{Sq}^{6}\,\mathrm{Sq}^{2}\) is the sum \(\mathcal{P}^{(1,1)}+\uptau\mathcal{Q}_{0}\mathcal{Q}_{1}\mathcal{P}^{2}\).
\begin{table}
\begin{tabular}{|l|l||l|l|l|}
\hline degree & \(\mathsf{x}\in\mathcal{A}_{\star}^{\mathbb{R}}\) & \(\mathsf{c}(\mathsf{x})\) & \(\mathsf{x}^{*}\in\mathcal{A}^{\mathbb{R}}\) & \(\mathsf{x}^{*}\) in terms of \(\mathcal{G}\) \\ \hline \hline
\((0,0)\) & \(1\) & \(1\) & \(1\) & \(1\) \\ \hline
\((1,0)\) & \(\tau_{0}\) & \(\tau_{0}\) & \(\mathcal{Q}_{0}\) & \(\mathrm{Sq}^{1}\) \\ \hline
\((2,1)\) & \(\xi_{1}\) & \(\xi_{1}\) & \(\mathcal{P}^{1}\) & \(\mathrm{Sq}^{2}\) \\ \hline
\((3,1)\) & \(\tau_{0}\xi_{1}\) & \(\tau_{0}\xi_{1}\) & \(\mathcal{Q}_{0}\mathcal{P}^{1}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}\) \\ \hline
\((3,1)\) & \(\tau_{1}\) & \(\tau_{1}+\tau_{0}\xi_{1}\) & \(\mathcal{Q}_{1}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}+\mathrm{Sq}^{2}\,\mathrm{Sq}^{1}\) \\ \hline
\((4,1)\) & \(\tau_{0}\tau_{1}\) & \(\tau_{0}\tau_{1}+\tau\xi_{1}^{2}+\rho\tau_{0}\xi_{1}^{2}+\rho\tau_{1}\xi_{1}\) & \(\mathcal{Q}_{0}\mathcal{Q}_{1}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}\,\mathrm{Sq}^{1}\) \\ \hline
\((4,2)\) & \(\xi_{1}^{2}\) & \(\xi_{1}^{2}\) & \(\mathcal{P}^{2}\) & \(\mathrm{Sq}^{4}\) \\ \hline
\((5,2)\) & \(\tau_{0}\xi_{1}^{2}\) & \(\tau_{0}\xi_{1}^{2}\) & \(\mathcal{Q}_{0}\mathcal{P}^{2}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{4}\) \\ \hline
\((5,2)\) & \(\tau_{1}\xi_{1}\) & \(\tau_{1}\xi_{1}+\tau_{0}\xi_{1}^{2}\) & \(\mathcal{Q}_{1}\mathcal{P}^{1}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{4}+\mathrm{Sq}^{4}\,\mathrm{Sq}^{1}\) \\ \hline
\((6,2)\) & \(\tau_{0}\tau_{1}\xi_{1}\) & \(\tau_{0}\tau_{1}\xi_{1}+\tau\xi_{1}^{3}+\rho\tau_{0}\xi_{1}^{3}+\rho\tau_{1}\xi_{1}^{2}\) & \(\mathcal{Q}_{0}\mathcal{Q}_{1}\mathcal{P}^{1}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{4}\,\mathrm{Sq}^{1}\) \\ \hline
\((6,3)\) & \(\xi_{1}^{3}\) & \(\xi_{1}^{3}\) & \(\mathcal{P}^{3}\) & \(\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}+\tau\,\mathrm{Sq}^{1}\,\mathrm{Sq}^{4}\,\mathrm{Sq}^{1}\) \\ \hline
\((6,3)\) & \(\xi_{2}\) & \(\xi_{2}+\xi_{1}^{3}\) & \(\mathcal{P}^{(0,1)}\) & \(\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}+\mathrm{Sq}^{4}\,\mathrm{Sq}^{2}\) \\ \hline
\((7,3)\) & \(\tau_{2}\) & \(\tau_{2}+\tau_{1}\xi_{1}^{2}+\tau_{0}\xi_{2}+\tau_{0}\xi_{1}^{3}\) & \(\mathcal{Q}_{2}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}+\mathrm{Sq}^{1}\,\mathrm{Sq}^{4}\,\mathrm{Sq}^{2}+\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}\,\mathrm{Sq}^{1}+\mathrm{Sq}^{4}\,\mathrm{Sq}^{2}\,\mathrm{Sq}^{1}\) \\ \hline
\((7,3)\) & \(\tau_{0}\xi_{1}^{3}\) & \(\tau_{0}\xi_{1}^{3}\) & \(\mathcal{Q}_{0}\mathcal{P}^{3}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}+\rho\,\mathrm{Sq}^{1}\,\mathrm{Sq}^{4}\,\mathrm{Sq}^{1}\) \\ \hline
\((7,3)\) & \(\tau_{0}\xi_{2}\) & \(\tau_{0}\xi_{2}+\tau_{0}\xi_{1}^{3}\) & \(\mathcal{Q}_{0}\mathcal{P}^{(0,1)}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}+\mathrm{Sq}^{1}\,\mathrm{Sq}^{4}\,\mathrm{Sq}^{2}\) \\ \hline
\((7,3)\) & \(\tau_{1}\xi_{1}^{2}\) & \(\tau_{1}\xi_{1}^{2}+\tau_{0}\xi_{1}^{3}\) & \(\mathcal{Q}_{1}\mathcal{P}^{2}\) & \(\mathrm{Sq}^{1}\,\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}+\rho\,\mathrm{Sq}^{1}\,\mathrm{Sq}^{4}\,\mathrm{Sq}^{1}+\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}\,\mathrm{Sq}^{1}\) \\ \hline
\((8,4)\) & \(\xi_{1}^{4}\) & \(\xi_{1}^{4}\) & \(\mathcal{P}^{4}\) & \(\mathrm{Sq}^{8}\) \\ \hline
\((8,4)\) & \(\xi_{1}\xi_{2}\) & \(\xi_{1}\xi_{2}+\xi_{1}^{4}\) & \(\mathcal{P}^{(1,1)}\) & \(\mathrm{Sq}^{2}\,\mathrm{Sq}^{4}\,\mathrm{Sq}^{2}+\tau\,\mathrm{Sq}^{1}\,\cdots\) \\ \hline
\end{tabular}
\end{table}
Table 2.1: Low-degree monomials \(\tau(\mathrm{E})\,\xi(\mathrm{R})\in\mathcal{A}_{\star}^{\mathbb{R}}\), their images under the conjugation \(\mathsf{c}\), and their dual elements in \(\mathcal{A}^{\mathbb{R}}\), both in the Milnor basis and in terms of the generators \(\mathcal{G}\).
## 3. Dualizing \(\mathcal{A}^{\mathbb{R}}\)-modules
For any \(\mathbb{R}\)-motivic spectrum \(\mathrm{X}\), its Spanier-Whitehead dual is the function spectrum \(\mathrm{DX}:=\mathrm{F}(\mathrm{X},\mathbb{S}_{\mathbb{R}})\). The goal of this section is to identify the \(\mathcal{A}^{\mathbb{R}}\)-module structure \(\mathrm{H}^{\star}(\mathrm{DX})\) given the \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathrm{H}^{\star}(\mathrm{X})\) under the following assumption.
**Assumption 3.1**.: Let \(\mathrm{X}\) be a finite \(\mathbb{R}\)-motivic spectrum such that its homology \(\mathrm{H}_{\star}(\mathrm{X})\) is free over \(\mathbb{M}_{2}^{\mathbb{R}}\).
**Notation 3.1**.: For an \(\mathbb{M}_{2}^{\mathbb{R}}\)-module \(\mathbf{N}\) let
\[\mathbf{N}^{\vee}:=\mathrm{Hom}_{\mathbb{M}_{2}^{\mathbb{R}}}(\mathbf{N}, \mathbb{M}_{2}^{\mathbb{R}})\]
be the set of \(\mathbb{M}_{2}^{\mathbb{R}}\)-linear functionals.
### From \(\psi_{\mathrm{L}}\) to \(\phi_{\mathrm{L}}^{\prime}\)
Recall that \(\mathrm{H}^{\star}(\mathrm{X})\) is naturally a left \(\mathcal{A}^{\mathbb{R}}\)-module. We will also use an \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule structure on \(\mathrm{H}^{\star}(\mathrm{X})\)
\[\psi_{\mathrm{L}}\colon\mathrm{H}^{\star}(\mathrm{X})\longrightarrow\mathcal{A}^{\mathbb{R}}_{\star}\otimes_{\mathbb{M}_{2}^{\mathbb{R}}}\mathrm{H}^{\star}(\mathrm{X}), \tag{3.1}\]
which can be constructed as follows.
First, note that \(\mathcal{A}^{\mathbb{R}}_{\star}\) is free as a right \(\mathbb{M}_{2}^{\mathbb{R}}\)-module with basis \(\mathcal{B}\) given by the conjugate of any left \(\mathbb{M}_{2}^{\mathbb{R}}\)-module basis. Then we have a splitting
\[\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\wedge\mathbf{H}_{\mathbb{R}}\mathbb{F}_ {2}\simeq\bigvee_{\mathcal{B}}\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\]
as right \(\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\)-modules. Define a map of motivic spectra \(\psi\) as the composite
where \(\iota\) is the unit map of \(\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\). For any finite motivic spectrum, the map \(\psi\) induces the map \(\psi_{\mathrm{L}}\) (see [B, Theorem 2.9(b)]) giving \(\mathrm{H}^{\star}(\mathrm{X})\) the structure of an \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule as explained in [B, Section 6]. Further, Boardman showed that:
**Proposition 3.1**.: _[_B_, Lemma 3.4]_ _Let \(\mathbf{N}\) be a left \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule. Then \(\mathbf{N}^{\vee}\) inherits a left \(\mathcal{A}^{\mathbb{R}}\)-module structure_
\[\phi_{\mathrm{L}}\colon\mathcal{A}^{\mathbb{R}}\otimes_{\mathbb{M}_{2}^{\mathbb{R}}}\mathbf{N}^{\vee}\longrightarrow\mathbf{N}^{\vee}\]
_via the formula_
\[(\varphi\cdot\uplambda)(n)=(\varphi\otimes\uplambda)\psi_{\mathrm{L}}(n) \tag{3.2}\]
_for \(\varphi\in\mathcal{A}^{\mathbb{R}}\), \(\uplambda\in\mathbf{N}^{\vee}\), and \(n\in\mathbf{N}\)._
**Remark 3.2**.: If \(\psi_{\mathrm{L}}(n)=\sum_{i}a_{i}\otimes n_{i}\), for \(a_{i}\in\mathcal{A}^{\mathbb{R}}_{\star}\) and \(n_{i}\in\mathbf{N}\), then (3.2) can be rewritten as
\[(\varphi\cdot\uplambda)(n)=\sum_{i}\varphi\Big{(}a_{i}\eta_{\mathrm{R}}\big{(} \uplambda(n_{i})\big{)}\Big{)}. \tag{3.3}\]
Combining Proposition 3.1 with the following result, one can deduce the left \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathrm{H}^{\star}(\mathrm{DX})\) (\(\phi_{\mathrm{L}}^{\prime}\) in Figure 1.1) from the left \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule structure on \(\mathrm{H}^{\star}(\mathrm{X})\) (\(\psi_{\mathrm{L}}\) in Figure 1.1).
**Proposition 3.2**.: _Suppose \(\mathrm{X}\) satisfies Assumption 3.1. There are isomorphisms of left \(\mathcal{A}^{\mathbb{R}}\)-modules \(\mathrm{H}^{\star}(\mathrm{DX})\cong(\mathrm{H}_{\star}(\mathrm{DX}))^{\vee} \cong(\mathrm{H}^{\star}(\mathrm{X}))^{\vee}\)._
Proof.: Under Assumption 3.1 the map \(\mathfrak{n}:\mathrm{H}^{\star}(\mathrm{DX})\longrightarrow(\mathrm{H}_{\star}( \mathrm{DX}))^{\vee}\) defined in (2.4), is not just an isomorphism of \(\mathbb{M}_{2}^{\mathbb{R}}\)-modules (see Remark 2.3), but also an isomorphism of left \(\mathcal{A}^{\mathbb{R}}\)-modules according to [B, Lemma 6.2].
For the second isomorphism, first note that Assumption 3.1 implies that there exists an isomorphism
\[\mathrm{H}_{\star}(\mathrm{DX})\cong\mathrm{H}^{\star}(\mathrm{X}) \tag{3.4}\]
of \(\mathbb{M}_{2}^{\mathbb{R}}\)-modules. By Proposition 3.1, it is enough to lift (3.4) to an isomorphism of \(\mathcal{A}_{\star}^{\mathbb{R}}\)-comodules. To this end, we first observe that the comodule structure on \(\mathrm{H}^{\star}(\mathrm{X})\) is induced by the map
\[\mathrm{F}(\mathrm{X},\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2})\cong\mathrm{F}( \mathrm{X},\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\wedge\mathbb{S}_{\mathbb{R}}) \xrightarrow[]{\mathrm{F}(\mathrm{X},\mathrm{id}\wedge\mathfrak{t})} \mathrm{F}(\mathrm{X},\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\wedge\mathbf{H}_{ \mathbb{R}}\mathbb{F}_{2}).\]
(see (3.1) or [B, Theorem 5.4]). The result then follows from the commutativity of the diagram
where the horizontal maps are evaluation at \(\mathrm{X}\).
### From \(\phi_{\mathrm{L}}\) to \(\psi_{\mathrm{L}}\)
For any \(\varphi\in\mathcal{A}^{\mathbb{R}}\cong\mathrm{Hom}_{\mathbb{M}_{2}^{\mathbb{R }}}(\mathcal{A}_{\star}^{\mathbb{R}},\mathbb{M}_{2}^{\mathbb{R}})\), let \(\varphi\mathbf{c}\) denote the composition
\[\varphi\mathbf{c}:\mathcal{A}_{\star}^{\mathbb{R}}\xrightarrow[]{\mathfrak{c} }\mathcal{A}_{\star}^{\mathbb{R}}\xrightarrow[]{\varphi}\mathbb{M}_{2}^{ \mathbb{R}},\]
which is a right \(\mathbb{M}_{2}^{\mathbb{R}}\)-module map as the conjugation \(\mathsf{c}\) is an isomorphism from the right \(\mathbb{M}_{2}^{\mathbb{R}}\)-module structure to the left \(\mathbb{M}_{2}^{\mathbb{R}}\)-module structure of \(\mathcal{A}_{\star}^{\mathbb{R}}\).
**Proposition 3.3**.: _Let \(\mathbf{N}\) be a left \(\mathcal{A}_{\star}^{\mathbb{R}}\)-comodule with coproduct \(\psi_{\mathrm{L}}\). Then, for \(n\in\mathbf{N}\) and \(\varphi\in\mathcal{A}^{\mathbb{R}}\), the formula_
\[\varphi\cdot n=(\varphi\mathbf{c}\otimes\mathrm{id})\psi_{\mathrm{L}}(n)\]
_defines a left \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathbf{N}\)._
Proof.: Using the coassociativity of the coaction, the statement reduces to checking that
\[(\varphi\psi)(\mathsf{c}(a))=\sum\varphi\Big{(}\mathsf{c}\big{(}\eta_{ \mathrm{L}}(\psi(\mathsf{c}(a^{\prime}_{i})))a^{\prime\prime}_{i}\big{)}\Big{)}, \tag{3.5}\]
for \(\varphi,\psi\in\mathcal{A}^{\mathbb{R}}\) and \(a\in\mathcal{A}_{\star}^{\mathbb{R}}\). The formula (3.5) follows from combining [B, Lemma 3.3(a)] with \(\mathsf{c}\circ\eta_{\mathrm{L}}=\eta_{\mathrm{R}}\) and
\[\Delta(\mathsf{c}(a))=\sum_{i}\mathsf{c}(a^{\prime\prime}_{i})\otimes\mathsf{ c}(a^{\prime}_{i})\]
whenever \(\Delta(a)=\sum_{i}a^{\prime}_{i}\otimes a^{\prime\prime}_{i}\).
**Remark 3.3**.: The right \(\mathbb{M}^{\mathbb{R}}_{2}\)-module structure on \(\mathcal{A}^{\mathbb{R}}_{\star}\) is defined [V, Section 12] such that
\[\big{(}a\cdot\eta_{\mathrm{R}}(m)\big{)}(\varphi)=a(\varphi\cdot m)\]
for \(m\in\mathbb{M}^{\mathbb{R}}_{2}\), \(a\in\mathcal{A}^{\mathbb{R}}_{\star}\) and \(\varphi\in\mathcal{A}^{\mathbb{R}}\). This shows that the evaluation pairing defines a map
of \(\mathbb{M}^{\mathbb{R}}_{2}\)-bimodules, where the left \(\mathbb{M}^{\mathbb{R}}_{2}\)-module structure on \(\mathcal{A}^{\mathbb{R}}\otimes_{\mathbb{M}^{\mathbb{R}}_{2}}^{\mathrm{right}} \mathcal{A}^{\mathbb{R}}_{\star}\) is obtained via the left action on \(\mathcal{A}^{\mathbb{R}}\), and the right \(\mathbb{M}^{\mathbb{R}}_{2}\)-module structure via the left action on \(\mathcal{A}^{\mathbb{R}}_{\star}\). Consequently, the left action constructed in Proposition 3.3 can be described as the composition \(\upphi_{\mathrm{L}}\) in the diagram
Note that while \(\mathsf{c}\) is not a right \(\mathbb{M}^{\mathbb{R}}_{2}\)-module map, the composition
is a map of \(\mathbb{M}^{\mathbb{R}}_{2}\)-bimodules.
If we set \(\mathbf{N}=\mathrm{H}^{\star}(\mathrm{X})\), i.e. the cohomology of a finite spectrum \(\mathrm{X}\) with the \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule structure of (3.1), Proposition 3.3 recovers the usual \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathrm{H}^{\star}(\mathrm{X})\) (see [B, Lemma 6.3]). Our next result reverse-engineers Proposition 3.3 to obtain a formula that calculates the \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule structure on \(\mathrm{H}^{\star}(\mathrm{X})\) (\(\psi_{\mathrm{L}}\) in Figure 1.1) from the \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathrm{H}^{\star}(\mathrm{X})\) (\(\phi_{\mathrm{L}}\) in Figure 1.1).
Let \(\mathcal{B}\) be the monomial basis of the left \(\mathbb{M}^{\mathbb{R}}_{2}\)-module structure on \(\mathcal{A}^{\mathbb{R}}_{\star}\) (as in Section 2.2). For simplicity, let \(\mathbf{b}_{i}\) denote the elements of \(\mathcal{B}\), and let \(\mathbf{B}^{i}\in\mathcal{A}^{\mathbb{R}}\) be the dual basis in the following result.
**Proposition 3.4**.: _Let \(\mathbf{N}\) be a left \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule with coaction map \(\uppsi_{\mathrm{L}}\). Then \(\uppsi_{\mathrm{L}}\) is related to \(\upphi_{\mathrm{L}}\) using the formula_
\[\uppsi_{\mathrm{L}}(n)=\sum_{i}c(\mathbf{b}_{i})\otimes(\mathbf{B}^{i}\cdot n),\]
_where \(\cdot\) is the action of \(\mathcal{A}^{\mathbb{R}}\) on \(\mathbf{N}\) constructed using Proposition 3.3._
Proof.: Since \(\{c(\mathbf{b}_{i})\}\) is a basis for \(\mathcal{A}^{\mathbb{R}}_{\star}\) as a free right \(\mathbb{M}^{\mathbb{R}}_{2}\)-module, it follows that there is a unique expression \(\uppsi_{\mathrm{L}}(n)=\sum_{i}c(\mathbf{b}_{i})\otimes n_{i}\) for appropriate elements \(n_{i}\).
On the other hand,
\[\mathbf{B}^{k}\cdot n = (\mathbf{B}^{k}\mathsf{c}\otimes\mathrm{id})\mathsf{\psi}_{\mathrm{L} }(n)\] \[= \sum_{i}\mathbf{B}^{k}\mathsf{c}(\mathsf{c}(\mathbf{b}_{i}))\otimes n _{i}\] \[= \sum_{i}\mathbf{B}^{k}(\mathbf{b}_{i})\otimes n_{i}\] \[= n_{k}\]
by Proposition 3.3.
### Preliminary examples
We now demonstrate the usefulness of Proposition 3.1, Proposition 3.3, and Proposition 3.4 by identifying the \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathrm{H}^{\star}(\mathrm{DX})\), for a few well-known finite \(\mathbb{R}\)-motivic finite complexes \(\mathrm{X}\).
**Notation 3.2**.: In the following examples, the \(\mathbb{R}\)-motivic spectrum \(\mathrm{X}\) will satisfy Assumption 3.1. In particular, \(\mathrm{H}^{\star}(\mathrm{X})\) will be a free \(\mathbb{M}^{\mathbb{R}}_{2}\)-module. By \(\mathsf{x}_{\mathrm{i},\mathrm{j}}\), we will denote an element of its \(\mathbb{M}^{\mathbb{R}}_{2}\)-basis which lives in cohomological bidegree \((\mathrm{i},\mathrm{j})\). By \(\hat{\mathsf{x}}_{\mathrm{i},\mathrm{j}}\), we will denote an element of \((\mathrm{H}^{\star}(\mathrm{X}))^{\vee}\) dual to \(\mathsf{x}_{\mathrm{i},\mathrm{j}}\). Note that the bidegree of \(\hat{\mathsf{x}}_{\mathrm{i},\mathrm{j}}\) is \((-\mathrm{i},-\mathrm{j})\) under the isomorphism \((\mathrm{H}^{\star}(\mathrm{X}))^{\vee}\cong\mathrm{H}^{\star}(\mathrm{DX})\).
**Example 3.1** (The \(\mathbb{R}\)-motivic mod \(2\) Moore spectrum).: As an \(\mathbb{M}^{\mathbb{R}}_{2}\)-module, \(\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/2)\) has generators \(\mathsf{x}_{0,0}\) and \(\mathsf{x}_{1,0}\). The \(\mathcal{A}^{\mathbb{R}}\)-module structure is then determined by the relations
\[\mathrm{Sq}^{1}(\mathsf{x}_{0,0})=\mathsf{x}_{1,0},\ \mathrm{Sq}^{2}( \mathsf{x}_{0,0})=\rho\mathsf{x}_{1,0}.\]
By Proposition 3.4, we get
\[\psi_{\mathrm{L}}(\mathsf{x}_{1,0})=1\otimes\mathsf{x}_{1,0},\ \psi_{\mathrm{L}}(\mathsf{x}_{0,0})=1\otimes\mathsf{x}_{0,0}+\tau_{0}\otimes\mathsf{x}_{1,0}+\rho\xi_{1}\otimes\mathsf{x}_{1,0},\]
which determines the \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule structure on \(\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/2)\). Then we apply Proposition 3.1, in particular (3.3), to obtain
\[\mathrm{Sq}^{1}(\hat{\mathsf{x}}_{1,0})=\hat{\mathsf{x}}_{0,0},\ \mathrm{Sq}^{2}( \hat{\mathsf{x}}_{1,0})=\rho\hat{\mathsf{x}}_{0,0},\]
which shows \(\left(\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/2)\right)^{\vee}\cong\Sigma^ {-1}\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/2)\) as \(\mathcal{A}^{\mathbb{R}}\)-modules. This aligns with the fact that \(D(\mathbb{S}_{\mathbb{R}}/2)\) is equivalent to \(\Sigma^{-1}\mathbb{S}_{\mathbb{R}}/2\).
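Both relations can be checked directly from (3.3): since \(\mathrm{Sq}^{1}=\mathcal{Q}_{0}\) is dual to \(\tau_{0}\) and \(\mathrm{Sq}^{2}=\mathcal{P}^{1}\) is dual to \(\xi_{1}\), the coaction computed above gives
\[(\mathrm{Sq}^{1}\cdot\hat{\mathsf{x}}_{1,0})(\mathsf{x}_{0,0})=\mathrm{Sq}^{1}(\tau_{0})+\mathrm{Sq}^{1}(\rho\xi_{1})=1,\qquad(\mathrm{Sq}^{2}\cdot\hat{\mathsf{x}}_{1,0})(\mathsf{x}_{0,0})=\mathrm{Sq}^{2}(\tau_{0})+\mathrm{Sq}^{2}(\rho\xi_{1})=\rho,\]
while both functionals vanish on \(\mathsf{x}_{1,0}\).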
**Example 3.2** (\(\mathbb{R}\)-motivic mod \(\mathsf{h}\) Moore spectrum).: As a graded \(\mathbb{M}^{\mathbb{R}}_{2}\)-module, \(\mathrm{H}^{\star}(\mathbb{S}/\mathsf{h})\) is isomorphic to \(\mathrm{H}^{\star}(\mathbb{S}/2)\). However, they differ in their \(\mathcal{A}^{\mathbb{R}}\)-module structures in that
\[\mathrm{Sq}^{1}(\mathsf{x}_{0,0})=\mathsf{x}_{1,0},\ \mathrm{Sq}^{2}(\mathsf{x}_{0,0 })=0\]
determines the \(\mathcal{A}^{\mathbb{R}}\)-module structure on \(\mathrm{H}^{\star}(\mathbb{S}/\mathsf{h})\). By Proposition 3.4
\[\psi_{\mathrm{L}}(\mathsf{x}_{1,0})=1\otimes\mathsf{x}_{1,0},\ \psi_{\mathrm{L}}(\mathsf{x}_{0,0})=1\otimes\mathsf{x}_{0,0}+\tau_{0}\otimes\mathsf{x}_{1,0},\]
and using (3.3) we see that \(\left(\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/\mathsf{h})\right)^{\vee} \cong\Sigma^{-1}\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/\mathsf{h})\). This aligns with the fact that \(D(\mathbb{S}_{\mathbb{R}}/\mathsf{h})\) is equivalent to \(\Sigma^{-1}\mathbb{S}_{\mathbb{R}}/\mathsf{h}\).
**Example 3.3**.: (The \(\mathbb{R}\)-motivic Joker) The cohomology of the \(\mathbb{R}\)-motivic Joker \(\mathcal{J}_{\mathbb{R}}\) (discussed in [GL]) is, as an \(\mathcal{A}^{\mathbb{R}}(1)\)-module, the quotient \(\mathcal{A}^{\mathbb{R}}(1)/\operatorname{Sq}^{3}\). In Figure 3.1, we have displayed a particular \(\mathcal{A}^{\mathbb{R}}\)-module extension of \(\mathcal{A}^{\mathbb{R}}(1)/\operatorname{Sq}^{3}\) obtained using Theorem 4.1. Using Proposition 3.4, in conjunction with Table 2.1, we notice that
\[\begin{array}{rcl}\vspace{0.2cm}\Psi_{\mathrm{L}}(\mathsf{x}_{4,2})&=&1 \otimes\mathsf{x}_{4,2}\\ \vspace{0.2cm}\Psi_{\mathrm{L}}(\mathsf{x}_{3,1})&=&1\otimes\mathsf{x}_{3,1}+ \uptau_{0}\otimes\mathsf{x}_{4,2}\\ \vspace{0.2cm}\Psi_{\mathrm{L}}(\mathsf{x}_{2,1})&=&1\otimes\mathsf{x}_{2,1}+ (\tau\upxi_{1}+\rho\uptau_{0}\upxi_{1}+\rho\uptau_{1}+\rho^{2}\upxi_{1}^{2} )\otimes\mathsf{x}_{4,2}\\ \vspace{0.2cm}\Psi_{\mathrm{L}}(\mathsf{x}_{1,0})&=&1\otimes\mathsf{x}_{1,0}+ \upxi_{1}\otimes\mathsf{x}_{3,1}+\uptau_{1}\otimes\mathsf{x}_{4,2}\\ \vspace{0.2cm}\Psi_{\mathrm{L}}(\mathsf{x}_{0,0})&=&1\otimes\mathsf{x}_{0,0}+ \uptau_{0}\otimes\mathsf{x}_{1,0}+\upxi_{1}\otimes\mathsf{x}_{2,1}+(\uptau_{ 0}\upxi_{1}+\uptau_{1})\otimes\mathsf{x}_{3,1}\\ &&+(\uptau_{0}\uptau_{1}+\rho^{2}\upxi_{2}+\rho^{2}\upxi_{1}^{3})\otimes \mathsf{x}_{4,2}\end{array}\]
determines the \(\mathcal{A}_{\star}^{\mathbb{R}}\)-comodule structure of \(\mathrm{H}^{\star}(\mathcal{J}_{\mathbb{R}})\). Then (3.3) produces the \(\mathcal{A}^{\mathbb{R}}\)-module structure on the dual displayed in Figure 3.1.
## 4. Self-dual \(\mathcal{A}^{\mathbb{R}}\)-module structures on \(\mathcal{A}^{\mathbb{R}}(1)\)
Let \(\mathsf{x}_{\mathrm{i,j}}\) and \(\mathsf{y}_{\mathrm{i,j}}\) denote the elements of the \(\mathbb{M}_{2}^{\mathbb{R}}\)-basis of \(\mathcal{A}^{\mathbb{R}}(1)\) introduced in [BGL, Notation 1.5] in bidegree (i,j).
**Theorem 4.1**.: _[_BGL_, Theorem 1.6]_ _For every vector_
\[\overline{\nu}=(\alpha_{03},\beta_{03},\beta_{14},\beta_{06},\beta_{25},\beta_{26},\gamma_{36})\in\mathbb{F}_{2}^{7},\]
_there exists a unique isomorphism class of \(\mathcal{A}^{\mathbb{R}}\)-module structures on \(\mathcal{A}^{\mathbb{R}}(1)\), which we denote by \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\), determined by the formulas_
\[\begin{array}{rcl}\operatorname{Sq}^{4}(\mathsf{x}_{0,0})&=&\beta_{03}( \rho\cdot\mathsf{y}_{3,1})+(1+\beta_{03}+\beta_{14})(\tau\cdot\mathsf{y}_{4,1 })+\alpha_{03}(\rho\cdot\mathsf{x}_{3,1})\\ \operatorname{Sq}^{4}(\mathsf{x}_{1,0})&=&\mathsf{y}_{5,2}+\beta_{14}(\rho \cdot\mathsf{y}_{4,1})\\ \operatorname{Sq}^{4}(\mathsf{x}_{2,1})&=&\beta_{26}(\tau\cdot\mathsf{y}_{6,2 })+\beta_{25}(\rho\cdot\mathsf{y}_{5,2})+j_{24}(\rho^{2}\cdot\mathsf{y}_{4,1 })\\ \operatorname{Sq}^{4}(\mathsf{x}_{3,1})&=&(\beta_{25}+\beta_{26})(\rho\cdot \mathsf{y}_{6,2})\\ \operatorname{Sq}^{4}(\mathsf{y}_{3,1})&=&\gamma_{36}(\rho\cdot\mathsf{y}_{6,2 })\\ \operatorname{Sq}^{8}(\mathsf{x}_{0,0})&=&\beta_{06}(\rho^{2}\cdot\mathsf{y}_{6,2 }),\end{array}\]
_where \(j_{24}=\beta_{03}\gamma_{36}+\alpha_{03}(\beta_{25}+\beta_{26}).\) Further, any \(\mathcal{A}^{\mathbb{R}}\)-module whose underlying \(\mathcal{A}^{\mathbb{R}}(1)\)-module is free on one generator is isomorphic to one listed above._
Using Proposition 3.4, we calculate the \(\mathcal{A}^{\mathbb{R}}_{\star}\)-comodule structure \(\psi_{\mathrm{L}}\) on \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\):
\[\psi_{\mathrm{L}}(\mathsf{y}_{6,2}) = 1\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{y}_{5,2}) = 1\otimes\mathsf{y}_{5,2}+\tau_{0}\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{y}_{4,1}) = 1\otimes\mathsf{y}_{4,1}+\xi_{1}\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{y}_{3,1}) = 1\otimes\mathsf{y}_{3,1}+\tau_{0}\otimes\mathsf{y}_{4,1}+(\tau_ {1}+\tau_{0}\xi_{1}+\gamma_{36}\rho\xi_{1}^{2})\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{x}_{3,1}) = 1\otimes\mathsf{x}_{3,1}+\xi_{1}\otimes\mathsf{y}_{5,2}+(\tau_ {1}+(\beta_{25}+\beta_{26})\rho\xi_{1}^{2})\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{x}_{2,1}) = 1\otimes\mathsf{x}_{2,1}+\tau_{0}\otimes\mathsf{x}_{3,1}+(\tau \xi_{1}+\rho\tau_{1}+\rho\tau_{0}\xi_{1}+\dot{j}_{24}\rho^{2}\xi_{1}^{2}) \otimes\mathsf{y}_{4,1}\] \[+(\tau_{1}+\tau_{0}\xi_{1}+\beta_{25}\rho\xi_{1}^{2})\otimes \mathsf{y}_{5,2}+(\tau_{0}\tau_{1}+(1+\beta_{26})\tau\xi_{1}^{2})\otimes \mathsf{y}_{6,2}\] \[+((1+\beta_{25})\rho\tau_{0}\xi_{1}^{2}+\rho\tau_{1}\xi_{1}+\dot{ j}_{24}\rho^{2}\xi_{2})\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{x}_{1,0}) = 1\otimes\mathsf{x}_{1,0}+\xi_{1}\otimes\mathsf{y}_{3,1}+(\tau_{1 }+\beta_{14}\rho\xi_{1}^{2})\otimes\mathsf{y}_{4,1}+\xi_{1}^{2}\otimes\mathsf{ y}_{5,2}\] \[+(\tau_{1}\xi_{1}+\gamma_{36}\rho\xi_{1}^{3}+(\beta_{14}+\gamma_{ 36})\rho\xi_{2})\otimes\mathsf{y}_{6,2}\] \[\psi_{\mathrm{L}}(\mathsf{x}_{0,0}) = 1\otimes\mathsf{x}_{0,0}+\tau_{0}\otimes\mathsf{x}_{1,0}+\xi_{1 }\otimes\mathsf{x}_{2,1}+(\tau_{1}+\alpha_{03}\rho\xi_{1}^{2})\otimes\mathsf{ x}_{3,1}\] \[+(\tau_{1}+\tau_{0}\xi_{1}+\beta_{03}\rho\xi_{1}^{2})\otimes \mathsf{y}_{3,1}\] \[+(\tau_{0}\tau_{1}+(\beta_{03}+\beta_{14})\tau\xi_{1}^{2}+(\beta_ {03})\rho\tau_{0}\xi_{1}^{2}+\dot{j}_{24}\rho^{2}\xi_{2}+\dot{j}_{24}\rho^{2} \xi_{1}^{3})\otimes\mathsf{y}_{4,1}\] \[+(\tau_{1}\xi_{1}+\tau_{0}\xi_{1}^{2}+\beta_{25}\rho\xi_{1}^{3}+( \alpha_{03}+\beta_{25})\rho\xi_{2})\otimes\mathsf{y}_{5,2}\] \[+(\beta_{26}\tau\xi_{1}^{3}+(\beta_{26}+\gamma_{36})\rho\tau_{0} \xi_{1}^{3}+(\beta_{25}+\beta_{26}+\gamma_{36})\rho\tau_{1}\xi_{1}^{2})\otimes \mathsf{y}_{6,2}\] \[+((1+\beta_{03}+\beta_{14}+\beta_{26})\tau\xi_{2}+(1+\beta_{03}+ \beta_{26}+\gamma_{36})\rho\tau_{0}\xi_{2})\otimes\mathsf{y}_{6,2}\] \[+((1+\alpha_{03}+\beta_{03}+\beta_{25}+\beta_{26}+\gamma_{36})\rho \tau_{2}+\dot{j}_{24}\rho^{2}\xi_{1}\xi_{2})\otimes\mathsf{y}_{6,2}\] \[+(\tau_{0}\tau_{1}\xi_{1}+(\dot{j}_{24}+\beta_{06})\rho^{2}\xi_{1}^ {4})\otimes\mathsf{y}_{6,2}.\]
Using (3.3), we get the following result, where \(\hat{\mathsf{x}}_{\mathrm{i,j}}\) and \(\hat{\mathsf{y}}_{\mathrm{i,j}}\) are the elements in \((\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1))^{\vee}\) dual to \(\mathsf{x}_{\mathrm{i,j}}\) and \(\mathsf{y}_{\mathrm{i,j}}\), respectively.
Figure 4.1. A singly-generated free \(\mathcal{A}^{\mathbb{R}}(1)\)-module (on the left), and its dual (on the right).
**Theorem 4.2**.: _The \(\mathcal{A}^{\mathbb{R}}(1)\)-module structure on the dual \((\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1))^{\vee}\) is as displayed in the right of Figure 4.1. Moreover, its \(\mathcal{A}^{\mathbb{R}}\)-module structure is determined by_
\[\mathrm{Sq}^{4}(\hat{\mathsf{y}}_{6,2}) = (\beta_{25}+\beta_{26})(\rho\cdot\hat{\mathsf{x}}_{3,1})+(1+\beta_ {26})(\tau\cdot\hat{\mathsf{x}}_{2,1})+\gamma_{36}(\rho\cdot\hat{\mathsf{y}}_{3,1})\] \[\mathrm{Sq}^{4}(\hat{\mathsf{y}}_{5,2}) = \hat{\mathsf{x}}_{1,0}+\beta_{25}(\rho\cdot\hat{\mathsf{x}}_{2,1})\] \[\mathrm{Sq}^{4}(\hat{\mathsf{y}}_{4,1}) = (\beta_{03}+\beta_{14})(\tau\cdot\hat{\mathsf{x}}_{0,0})+\beta_{14 }(\rho\cdot\hat{\mathsf{x}}_{1,0})+\underline{\jmath}_{24}(\rho^{2}\cdot\hat{ \mathsf{x}}_{2,1})\] \[\mathrm{Sq}^{4}(\hat{\mathsf{y}}_{3,1}) = \beta_{03}(\rho\cdot\hat{\mathsf{x}}_{0,0})\] \[\mathrm{Sq}^{4}(\hat{\mathsf{x}}_{3,1}) = \alpha_{03}(\rho\cdot\hat{\mathsf{x}}_{0,0})\] \[\mathrm{Sq}^{8}(\hat{\mathsf{y}}_{6,2}) = (\underline{\jmath}_{24}+\beta_{06})(\rho^{2}\cdot\hat{\mathsf{x} }_{0,0}).\]
**Corollary 4.1**.: _For the \(\mathcal{A}^{\mathbb{R}}\)-module \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\), its (regraded) dual is isomorphic to_
\[\Sigma^{6,2}(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1))^{\vee}\cong \mathcal{A}^{\mathbb{R}}_{\delta(\overline{\nu})}(1),\]
_where \(\delta(\overline{\nu})=(\gamma_{36},\beta_{25}+\beta_{26},\beta_{25},\underline{\jmath}_{24}+\beta_{06},\beta_{14},\beta_{03}+\beta_{14},\alpha_{03}).\) Thus, \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\) is self-dual if and only if_
1. \(\alpha_{03}=\gamma_{36}\)_,_
2. \(\beta_{03}=\beta_{25}+\beta_{26}\)_, and_
3. \(\beta_{14}=\beta_{25}\)_._
**Remark 4.3**.: The constant \(\underline{\jmath}_{24}\) has a geometric significance noted in [1, Remark 1.21]. It follows from Corollary 4.1 that \(\underline{\jmath}_{24}=0\) whenever \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\) is self-dual.
**Remark 4.4**.: The underlying classical \(\mathcal{A}\)-module structure on \(\mathcal{A}(1)\) is self-dual if and only if \(\beta_{26}=\beta_{03}+\beta_{14}\). In the presence of (3), this is equivalent to (2). Thus the conditions of Corollary 4.1 can be thought of as the classical condition, plus conditions (1) and (3).
In [1], we showed that the \(\mathcal{A}^{\mathbb{R}}\)-modules \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\) can be realized as the cohomology of an \(\mathbb{R}\)-motivic spectrum for all values of \(\overline{\nu}\).
**Corollary 4.2**.: _Suppose \(\mathcal{A}^{\mathbb{R}}_{1}[\overline{\nu}]\) is an \(\mathbb{R}\)-motivic spectrum realizing \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\), and suppose that \(\mathcal{A}^{\mathbb{R}}_{\overline{\nu}}(1)\) is a self-dual \(\mathcal{A}^{\mathbb{R}}\)-module. Then \(\mathcal{A}^{\mathbb{R}}_{1}[\overline{\nu}]\) is the cofiber of a \(v_{1}\)-self-map on either \(\mathcal{Y}^{\mathbb{R}}_{2,1}\) or \(\mathcal{Y}^{\mathbb{R}}_{h,1}\)._
Proof.: By [1, Theorem 1.8], the \(\mathbb{R}\)-motivic spectrum \(\mathcal{A}^{\mathbb{R}}_{1}[\overline{\nu}]\) is the cofiber of a \(v_{1}\)-self map on \(\mathcal{Y}^{\mathbb{R}}_{2,1}\) if \(\beta_{25}+\beta_{26}+\gamma_{36}=1\) and \(\alpha_{03}+\beta_{03}=1\), whereas it is the cofiber of a \(v_{1}\)-self-map on \(\mathcal{Y}^{\mathbb{R}}_{h,1}\) if \(\beta_{25}+\beta_{26}+\gamma_{36}=0\) and \(\alpha_{03}+\beta_{03}=0\). But conditions (1) and (2) of Corollary 4.1 imply that \(\beta_{25}+\beta_{26}+\gamma_{36}\) is equal to \(\alpha_{03}+\beta_{03}\).
Our main results, Theorem 1.1 and Theorem 1.3, follow from Corollary 4.1 and Corollary 4.2, respectively.
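These counts can also be verified by brute-force enumeration. The following short script is only an illustrative check (it is not part of the arguments above, and the variable names are ours): it runs over all 128 vectors \(\overline{\nu}\in\mathbb{F}_{2}^{7}\), applies the map \(\delta\) of Corollary 4.1, and splits the self-dual vectors according to the criterion used in the proof of Corollary 4.2.

```python
from itertools import product

def j24(v):
    # v = (a03, b03, b14, b06, b25, b26, g36), entries in F_2
    a03, b03, b14, b06, b25, b26, g36 = v
    return (b03 * g36 + a03 * ((b25 + b26) % 2)) % 2

def delta(v):
    # The dualization map of Corollary 4.1
    a03, b03, b14, b06, b25, b26, g36 = v
    return (g36, (b25 + b26) % 2, b25, (j24(v) + b06) % 2,
            b14, (b03 + b14) % 2, a03)

self_dual = [v for v in product((0, 1), repeat=7) if delta(v) == v]
print(len(self_dual))  # 16, matching Theorem 1.1

# Proof of Corollary 4.2: cofiber of a self-map on Y^R_(2,1) when
# b25 + b26 + g36 = 1, and on Y^R_(h,1) when the sum is 0.
on_Y_2 = [v for v in self_dual if (v[4] + v[5] + v[6]) % 2 == 1]
print(len(on_Y_2), len(self_dual) - len(on_Y_2))  # 8 8, matching Theorem 1.3
```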
**Remark 4.5**.: Using the Betti realization functor, [1] produced \(\mathrm{C}_{2}\)-equivariant realizations of analogous \(\mathcal{A}^{\mathrm{C}_{2}}\)-modules \(\mathcal{A}^{\mathrm{C}_{2}}_{\overline{\nu}}(1)\). Using the comparison result [1, Theorem 1.19], the \(\mathcal{A}\)-module structures on \(\Phi(\mathcal{A}^{\mathrm{C}_{2}}_{1}[\overline{\nu}])\), the geometric fixed points of \(\mathcal{A}^{\mathrm{C}_{2}}_{1}[\overline{\nu}]\), was identified in [1, Figure 4.12]. In Figure 4.2, we record the \(\mathcal{A}\)-module structure on the geometric fixed points of a self-dual \(\mathcal{A}^{\mathrm{C}_{2}}_{1}[\overline{\nu}]\).
## Appendix A On the antiautomorphism of \(\mathcal{A}^{\mathbb{R}}\)
Although Boardman [B, §6] pointed out that the set of \(\mathrm{E}\)-cohomology operations \([\mathrm{E},\mathrm{E}]^{*}\) may not necessarily admit an antiautomorphism for a general cohomology theory \(\mathrm{E}\), we find the case of \(\mathrm{E}=\mathbf{H}_{\mathbb{R}}\mathbb{F}_{2}\) a rather curious one.
The case of \(\mathrm{E}=\mathbf{H}\mathbb{F}_{2}\) is exceptional; the Steenrod algebra \(\mathcal{A}:=[\mathbf{H}\mathbb{F}_{2},\mathbf{H}\mathbb{F}_{2}]_{*}\) is well known to be a Hopf algebra and is therefore equipped with an antiautomorphism \(\chi:\mathcal{A}\longrightarrow\mathcal{A}\). The composition of extension of scalars and Betti realization induces maps of Steenrod algebras
\[\mathcal{A}^{\mathbb{R}}\xrightarrow{\ \pi_{1}\ }\mathcal{A}^{\mathbb{C}}\xrightarrow{\ \pi_{2}\ }\mathcal{A},\]
where \(\pi_{1}\) sends \(\rho\) to \(0\) and \(\pi_{2}\) sends \(\tau\) to \(1\).
The antiautomorphism \(\chi\) of the classical Steenrod algebra is known to lift along \(\pi_{2}\) to an antiautomorphism \(\chi^{\mathbb{C}}\) of \(\mathcal{A}^{\mathbb{C}}\), as the \(\mathbb{C}\)-motivic Steenrod algebra is a connected bialgebra. However, lifting \(\chi^{\mathbb{C}}\) along \(\pi_{1}\) is less straightforward. The dual \(\mathbb{R}\)-motivic Steenrod algebra \(\mathcal{A}^{\mathbb{R}}_{\star}\) is a Hopf _algebroid_, rather than a Hopf algebra, so that its dual \(\mathcal{A}^{\mathbb{R}}\) is not a Hopf algebra.
One feature that distinguishes \(\mathcal{A}^{\mathbb{R}}\) from \(\mathcal{A}^{\mathbb{C}}\) is the fact that \(\tau\) is not central in \(\mathcal{A}^{\mathbb{R}}\). In the following result, we use the commutators \([\tau,\mathrm{Sq}^{2^{n}}]\) in \(\mathcal{A}^{\mathbb{R}}\) (computed using the Cartan formula [2, Proposition 9.7]) to compute the values of a hypothetical antiautomorphism in low degrees.
**Proposition A.1**.: _Suppose that \(\chi^{\mathbb{R}}\colon\mathcal{A}^{\mathbb{R}}\longrightarrow\mathcal{A}^{ \mathbb{R}}\) is a ring antihomomorphism and an involution. Then_
\[\chi^{\mathbb{R}}(\tau) = \tau\] \[\chi^{\mathbb{R}}(\rho) = \rho\] \[\chi^{\mathbb{R}}(\operatorname{Sq}^{1}) = \operatorname{Sq}^{1}\] \[\chi^{\mathbb{R}}(\operatorname{Sq}^{2}) = \operatorname{Sq}^{2}+\rho\operatorname{Sq}^{1}\] \[\chi^{\mathbb{R}}(\operatorname{Sq}^{4}) = \operatorname{Sq}^{4}+\rho\operatorname{Sq}^{2}\operatorname{Sq}^{ 1}+\tau\operatorname{Sq}^{1}\operatorname{Sq}^{2}\operatorname{Sq}^{1}\,.\]
Proof.: If \(\chi^{\mathbb{R}}\) is a ring antihomomorphism then
(A.1) \[\chi^{\mathbb{R}}[r,s]=[\chi^{\mathbb{R}}r,\chi^{\mathbb{R}}s]\]
in characteristic \(2\). Since \(\tau\) and \(\operatorname{Sq}^{1}\) are unique \(\mathbb{F}_{2}\)-generators in their bidegree and \(\chi^{\mathbb{R}}\) is an automorphism, it follows that
\[\chi^{\mathbb{R}}(\tau)=\tau\qquad\text{and}\qquad\chi^{\mathbb{R}}( \operatorname{Sq}^{1})=\operatorname{Sq}^{1}\,.\]
For degree reasons, \(\chi^{\mathbb{R}}(\operatorname{Sq}^{2})\) must be \(\operatorname{Sq}^{2}+\varepsilon\rho\operatorname{Sq}^{1}\), where \(\varepsilon\) is either \(0\) or \(1\). But the commutator \([\tau,\operatorname{Sq}^{2}]\) is equal to \(\rho\tau\operatorname{Sq}^{1}\). Applying (A.1), we see that
\[\chi^{\mathbb{R}}(\rho\tau\operatorname{Sq}^{1}) = [\chi^{\mathbb{R}}(\tau),\chi^{\mathbb{R}}(\operatorname{Sq}^{2})]\] \[\Rightarrow \operatorname{Sq}^{1}\tau\rho = [\tau,\operatorname{Sq}^{2}+\varepsilon\rho\operatorname{Sq}^{1}]\] \[\Rightarrow \rho\tau\operatorname{Sq}^{1}+\rho^{2} = \rho\tau\operatorname{Sq}^{1}+\varepsilon\rho^{2},\]
and therefore, \(\varepsilon\) must be \(1\).
Similarly, degree considerations imply that \(\chi^{\mathbb{R}}(\operatorname{Sq}^{4})\) must be of the form \(\operatorname{Sq}^{4}+\delta\rho\operatorname{Sq}^{1}\operatorname{Sq}^{2}+ \varepsilon\rho\operatorname{Sq}^{2}\operatorname{Sq}^{1}+\lambda\tau \operatorname{Sq}^{1}\operatorname{Sq}^{2}\operatorname{Sq}^{1}\). The commutator \([\tau,\operatorname{Sq}^{4}]\) is \(\rho\tau\operatorname{Sq}^{1}\operatorname{Sq}^{2}\), so we conclude that
\[[\chi^{\mathbb{R}}\tau,\chi^{\mathbb{R}}\operatorname{Sq}^{4}] = [\tau,\operatorname{Sq}^{4}+\delta\rho\operatorname{Sq}^{1} \operatorname{Sq}^{2}+\varepsilon\rho\operatorname{Sq}^{2}\operatorname{Sq}^{1} +\lambda\tau\operatorname{Sq}^{1}\operatorname{Sq}^{2}\operatorname{Sq}^{1}]\] \[= (1+\lambda)\rho\tau\operatorname{Sq}^{1}\operatorname{Sq}^{2}+ \lambda\rho\tau\operatorname{Sq}^{2}\operatorname{Sq}^{1}+(\delta+\varepsilon) \rho^{2}\operatorname{Sq}^{2}+\delta\rho^{3}\operatorname{Sq}^{1}\]
must agree with
\[\chi^{\mathbb{R}}(\rho\tau\operatorname{Sq}^{1}\operatorname{Sq}^{ 2}) = (\operatorname{Sq}^{2}+\rho\operatorname{Sq}^{1})\operatorname{Sq}^{ 1}\tau\rho\] \[= \rho\tau\operatorname{Sq}^{2}\operatorname{Sq}^{1}+\rho^{2} \operatorname{Sq}^{2},\]
and therefore, \(\delta=0\), \(\varepsilon=1\), and \(\lambda=1\) as desired.
Proposition A.1 suggests there might be an \(\mathbb{R}\)-motivic antiautomorphism on the subalgebra \(\mathcal{A}^{\mathbb{R}}(2):=\mathbb{M}^{\mathbb{R}}_{2}\langle\operatorname{Sq }^{1},\operatorname{Sq}^{2},\operatorname{Sq}^{4}\rangle\subset\mathcal{A}^{ \mathbb{R}}\). It seems likely that the method above can be extended to produce an antiautomorphism on all of \(\mathcal{A}^{\mathbb{R}}\). However, we leave open the question of whether or not this is possible.
On the other hand, the following remark shows that an antihomomorphism on \(\mathcal{A}^{\mathbb{R}}\) may not be directly of use in dualizing \(\mathcal{A}^{\mathbb{R}}\)-modules.
**Remark A.1**.: Note that if \(\mathbf{N}\) is an \(\mathcal{A}^{\mathbb{R}}\)-module, then the action of \(\mathcal{A}^{\mathbb{R}}\) on \(\mathbf{N}\) is not \(\mathbb{M}^{\mathbb{R}}_{2}\)-linear, so that, in contrast to the classical case, it does not induce a right \(\mathcal{A}^{\mathbb{R}}\)-action on the dual \(\mathbf{N}^{\vee}\). Even if \(\mathcal{A}^{\mathbb{R}}\) were to be hypothetically equipped with an antiautomorphism \(\chi^{\mathbb{R}}\), this may not be so useful for the purpose of dualization. The reason is that the classical formula (1.1) does not work in this setting. More
precisely, let \(\mathbf{N}\) be an \(\mathcal{A}^{\mathbb{R}}\)-module, let \(\lambda\in\mathbf{N}^{\vee}\), \(\varphi\in\mathcal{A}^{\mathbb{R}}\), and \(n\in\mathbf{N}\). Then defining a new action \(\varphi\odot\lambda\) by
\[(\varphi\odot\lambda)(n)=\lambda(\chi^{\mathbb{R}}\varphi\cdot n)\]
does not produce an \(\mathbb{M}_{2}^{\mathbb{R}}\)-linear function. For instance, consider the case \(\mathbf{N}=\mathrm{H}^{\star}(\mathbb{S}_{\mathbb{R}}/\mathsf{h})\) from Example 3.2. Then \((\mathrm{Sq}^{2}\odot\hat{\mathsf{x}}_{1,0})(\tau\cdot\mathsf{x}_{0,0})\) vanishes, whereas \((\mathrm{Sq}^{2}\odot\hat{\mathsf{x}}_{1,0})(\mathsf{x}_{0,0})\) is equal to \(\rho\). It follows that the formula for \(\mathrm{Sq}^{2}\odot\hat{\mathsf{x}}_{1,0}\) is not \(\mathbb{M}_{2}^{\mathbb{R}}\)-linear and is therefore not a valid element of \(\mathbf{N}^{\vee}\).
|
2309.06351 | Chemically inspired Erdős-Rényi oriented hypergraphs | High-order structures have been recognised as suitable models for systems
going beyond the binary relationships for which graph models are appropriate.
Despite their importance and surge in research on these structures, their
random cases have been only recently become subjects of interest. One of these
high-order structures is the oriented hypergraph, which relates couples of
subsets of an arbitrary number of vertices. Here we develop the
Erd\H{o}s-R\'enyi model for oriented hypergraphs, which corresponds to the
random realisation of oriented hyperedges of the complete oriented hypergraph.
A particular feature of random oriented hypergraphs is that the ratio between
their expected number of oriented hyperedges and their expected degree or size
is 3/2 for large number of vertices. We highlight the suitability of oriented
hypergraphs for modelling large collections of chemical reactions and the
importance of random oriented hypergraphs to analyse the unfolding of
chemistry. | Angel Garcia-Chung, Marisol Bermúdez-Montaña, Peter F. Stadler, Jürgen Jost, Guillermo Restrepo | 2023-09-12T16:16:25Z | http://arxiv.org/abs/2309.06351v1 | # Chemically inspired Erdos-Renyi oriented hypergraphs
###### Abstract
High-order structures have been recognised as suitable models for systems going beyond the binary relationships for which graph models are appropriate. Despite their importance and the surge in research on these structures, their random cases have only recently become subjects of interest. One of these high-order structures is the oriented hypergraph, which relates couples of subsets of an arbitrary number of vertices. Here we develop the Erdos-Renyi model for oriented hypergraphs, which corresponds to the random realisation of oriented hyperedges of the complete oriented hypergraph. A particular feature of random oriented hypergraphs is that the ratio between their expected number of oriented hyperedges and their expected degree or size is \(3/2\) for a large number of vertices. We highlight the suitability of oriented hypergraphs for modelling large collections of chemical reactions and the importance of random oriented hypergraphs for analysing the unfolding of chemistry.

Keywords: graphs, hypergraphs, chemical space, random model, Erdos-Renyi.
## 1 Introduction
Graphs are often selected as the mathematical structure to analyse binary relationships between objects of a system [1, 2]. As in any mathematical setting, it is important to
determine the bounds of the structures modelling the systems. This allows one to determine how far from, or close to, its theoretical extremes a system is, which leads to studying the lower and upper bounds of systems as well as their random cases. In graph theory, this amounts to determining the edge-less, complete and random graphs. The edge-less graph corresponds to a system devoid of relationships, while the complete graph corresponds to a system depicting the maximum number of relationships. In turn, the random graph corresponds to a system whose relationships are randomly assigned.
For the sake of clarity and for setting up the notation, a _graph_\(G=(V,E)\) corresponds to a set of _vertices_\(V\) and a set of _edges_\(E\) (\(E\subseteq\{\{x,y\}:x,y\in V\}\)). Note that we consider simple graphs, that is, graphs without multiple edges and without loops. Thus, \(E\) is a set, but not a multiset, and if \(\{x,y\}\in E\), it follows that \(x\neq y\). While the edge-less and complete graphs are straightforwardly defined, the former as a graph \(G\) with an empty set \(E\) and the second as one with \(n(n-1)/2\) edges, the random graph allows for several approximations.
The earliest and most general random graph model was reported by Erdos and Renyi in 1959 [3]; it is constructed by randomly assigning edges on the set \(V\), each edge being included with probability \(p\). That is, given a set \(V\) of \(n\) vertices, the random graph corresponds to the realisation, or not, of every possible edge on \(V\), where each edge is realised with probability \(p\). This can be thought of as the result of an algorithm that takes every possible couple of vertices in \(V\) and decides whether to link them or not: a random number is generated, and if that number happens to be less than a predetermined value, denoted \(p\), the vertices are connected; otherwise, they remain disconnected.
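For concreteness, the edge-by-edge construction just described can be written in a few lines of code. The snippet below is only an illustrative sketch of this procedure, using Python's standard library; the function name and its interface are ours.

```python
import random
from itertools import combinations

def erdos_renyi(n, p, seed=None):
    """Sample a simple random graph on vertices 0..n-1: each of the
    n(n-1)/2 possible edges is realised independently with probability p."""
    rng = random.Random(seed)
    vertices = range(n)
    edges = {frozenset(pair) for pair in combinations(vertices, 2) if rng.random() < p}
    return set(vertices), edges

V, E = erdos_renyi(10, 0.3, seed=1)
```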
Despite the widespread use of graphs, it has been acknowledged that certain systems exhibit relationships that surpass the typical binary relations represented by graphs [4, 5]. These high-order relations refer to \(k\)-ary relationships, which involve interactions among \(k\) or fewer vertices, where \(k>2\). Examples of these systems include functional and structural brain networks [6, 7], protein interaction networks [8], chemical reaction networks [9, 10, 11, 12], as well as semantic [13, 14] and co-authorship networks [15]. For instance, in co-authorship networks, a high-order relationship, say of order five, corresponds to the five authors of a five-author paper.
Mathematical settings to address high-order relations include simplicial complexes and hypergraphs [16]. The former are selected for cases where nested sets of related vertices are relevant and the latter for cases where arbitrary sets of vertices are the focus. Hence, hypergraphs are more general than simplicial complexes and have been the subject of recent studies of their properties and applications [17, 18, 19, 20, 21, 22, 23].
One of the applications of hypergraphs is the modelling of chemical reactions, which is the leading example and motivation of the current paper. In this chemical setting, hypergraphs are required to encode binary relationships among sets of vertices (substances). Despite the use of hypergraphs in chemistry, the extreme cases of chemical hypergraphs have not yet been studied. Here we report extreme cases of chemical hypergraphs, that is, the mathematics of edge-less, complete and random chemical hypergraphs, as well as some of their properties. Before discussing these structures, we provide some details on their use for modelling chemical reactions.
## 2 Graphs and hypergraphs for modelling the chemical space
Chemical space spans all substances and reactions reported over the history of chemistry [24, 9] and has been initially modelled with graphs [25, 26]. In this setting, reactions in
Figure 1a are modelled as graphs as shown in Figure 1b.1 The model encodes the binary relationship between an educt and a product of a reaction. Although informative, the graph model misses important chemical information, for instance whether a substance can be reached from another in a given chemical space. This is the case of A and E (in Figure 1b), which are connected via two edges: {A, F} and {F, E}. Nevertheless, Figure 1a shows that E cannot be obtained from A. This shortcoming is solved by adding more structure to the graph, namely by adding direction to the edges and by modelling reactions as _directed graphs_ (Figure 1c). This model now shows that E cannot be reached from A, but that G can be reached from A. Despite the improvement of incorporating the direction of the chemical transformation, the directed graph model still lacks other important chemical aspects; in particular, it does not indicate which substances react together to produce other chemicals. This shortcoming is solved by modelling reactions as _hypergraphs_. Figure 1d shows the hypergraph for the reactions of Figure 1a, which encodes the different sets of substances in the reactions of the chemical space analysed. Those sets correspond to actual mixtures of substances either before (educts) or after the reaction has taken place (products). From Figure 1d it is clear that A is found mixed with B, as well as E with D and A, and D with C. Likewise, the hypergraph shows that F and G are substances whose transformations do not require any other substance of the chemical space.
Footnote 1: Models of the chemical space strongly emphasise the role of reactions as the “gluing” aspect relating substances, which, in turn, endows the set of substances with a structure. This is what actually turns the set of substances into a space [9]. In such a setting, however, non-connected substances, often arising from chemical extractions, play an important role in the space, as they represent new non-synthetic chemicals, which may, or may not, remain disconnected in the space, or which require a certain amount of time to be connected to the network. This was the case of the noble gases, for instance, which, for many years, remained as isolated substances of the chemical space. Moreover, determining the average time required to connect a substance to the network of the chemical space is of central importance for studies on the evolution of chemical knowledge, as well as for the chemical industry [27].
As chemical reactions are actually directed binary relationships among sets of substances, that is among educts and products, a further refinement of the model requires introducing this binary relation, which is encoded by _directed hypergraphs_. Figure 1e illustrates how the directed hypergraph encodes the transformation of the set of educts {A, B} into the products {C, D}, as well as the reaction of {A,D,E} to produce substance F. Likewise, it shows the rearrangement of F into G.
Alternative representations of the directed hypergraph of Figure 1e are shown in Figures 1f and g. Figure 1f, besides emphasising the directed relationship among educts and products, highlights the role of substances (vertices) in the transformation. Figure 1g maps the directed hypergraph back to the realm of directed graphs.2 This time not to the directed graphs of Figure 1c but rather to _directed bipartite graphs_, where besides the usual vertices representing substances, a new set of vertices is introduced, namely that representing reactions. A relaxed version of the structures in Figures 1e, f and g is given by the corresponding undirected structures, shown in Figures 1h, i and j, which are different representations of the associated _oriented hypergraph_. Note how oriented hypergraphs constitute a suitable model for reversible reactions. Thus, for instance, the oriented hypergraph {A, B}-{C, D} encodes the reactions A + B \(\rightarrow\) C + D, as well as C + D \(\rightarrow\) A + B.3 This encoding is chemically sound, as every reaction is intrinsically reversible. The actual direction observed in wet-lab experiments arises from the energetic conditions in which molecules are embedded in the reaction process.4
It is upon the oriented hypergraph model for chemical reactions that we study its extreme cases and develop an Erdos-Renyi random model. In the next section the mathematical elements setting up the stage for such study are presented.
## 3 Chemically inspired oriented hypergraphs
As discussed in the previous section, the most informative model for the chemical space is the directed hypergraph. Nevertheless, for the sake of generality, we report in the current paper results for extreme cases and an Erdos-Renyi model for oriented hypergraphs. That is, in what follows, we regard the chemical space as devoid of direction in its chemical reactions and we focus only on the connectivity of the substances via reactions, while preserving the important aspect of chemical reactions of relating sets of substances.
We introduce some definitions, which assume a fixed number \(n\) of vertices gathered in the set \(V\). Upon \(V\), subsets of vertices are defined, which are pair-wise related by the oriented hypergraph. These subsets gather together substances appearing as either educts or products of reactions in the chemical space. Chemical reactions can be classified as either catalytic or non-catalytic. The former involve the use of a catalyst, which is a substance added to the educts to speed up the synthesis of reaction products. Catalysts are not consumed in the reaction, which distinguishes them from the educts. Chemical notation encodes the catalyst as a label of the reaction. If, for instance, A
Figure 1: Chemical reactions as graphs and hypergraphs. All structures, from b to j correspond to (hyper)graph models for the chemical reactions in a, which constitute a chemical space of seven substances and three reactions.
+ B \(\rightarrow\) C + D is catalysed by E, the reaction is written down as A + B \(\xrightarrow{E}\) C + D. Otherwise, if there is no catalyst involved, A + B \(\rightarrow\) C + D represents the reaction. In this classification, autocatalytic reactions constitute a particular case of catalytic reactions, where at least one of the educts acts as a catalyst. Hence, A + B \(\rightarrow\) B + C is an example of an autocatalytic reaction,5 which can be considered as the sequence of two reactions: first A + B \(\rightarrow\) Z, followed by Z \(\rightarrow\) B + C, where Z is known as the reaction intermediate. Hence, oriented hypergraphs turn out to be suitable models for all chemical reactions. Therefore, we model the chemical space as discussed in Definition 1.
Footnote 5: Note that stoichiometric coefficients are disregarded in this notation.
**Definition 1**.: _A chemical space of \(n\) substances gathered in \(V\) is modelled as an oriented hypergraph \(G=(V,E)\), with oriented hyperedges (reactions) gathered in \(E\subseteq\{\{X,Y\}:X,Y\in\mathcal{P}(V)\setminus\{\varnothing\}\text{ and }X\cap Y=\varnothing\}\). \(X\) and \(Y\), which are sets of substances, are called hypervertices of the chemical space and every oriented hyperedge \(r\in E\) is called a chemical reaction of the chemical space._
Importantly, in our framework, substances consumed or produced in a chemical reaction are restricted to be in the set \(V\). Moreover, hypervertices cannot be empty (Definition 1) because there is no chemical reaction leading to or starting from an empty set of substances. Likewise, a reaction cannot start from the complete set of substances, as there would be no room for synthesising a new substance. Similarly, as no reaction can lead to the set containing all substances, the hypervertex containing all vertices is disregarded from the model.6
Footnote 6: Moreover, in this setting we are disregarding the particular reaction conditions at which reactions are carried out. Nonetheless, they can be incorporated as labels of oriented hyperedges.
Therefore, the maximum number of hypervertices in a chemical space is given by \(2^{n}-2\). They correspond to the maximum number of combinations of substances chemists can try, which may lead to another set of substances within the given chemical space.7 Now we classify those sets of substances by the number of substances they contain.
Footnote 7: This upper bound holds significance in, for instance, research on the origin of life. A mathematical setting for such studies is provided by Dittrich’s chemical organisation theory [29], where finding sequences of reactions involving a given subset of substances of the chemical space is an important aspect of the approach.
Let \(V\) be the set of \(n\) vertices (substances) and \(B_{a}\) the set gathering together subsets of \(V\) of \(a\) vertices. Thus, \(B_{1}\) collects all substances (\(B_{1}=V\)), \(B_{2}\) all couples of substances and so forth. The complete set of hypervertices \(B\) is given by
\[B=\bigcup_{a=1}^{n-1}B_{a}=\mathcal{P}(V)\setminus\{\varnothing,B_{n}\}. \tag{1}\]
The number of hypervertices of size \(a\) corresponds to the cardinality of \(B_{a}\), which is given by the number of combinations of \(a\) vertices that can be obtained out of \(n\) vertices. Thus,
\[|B_{a}|=\mathcal{C}_{a}^{n}=\binom{n}{a}. \tag{2}\]
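These counts can be made concrete with a minimal Python sketch (purely illustrative; all variable names are choices made here) that enumerates the classes \(B_{a}\) for a small vertex set and checks \(|B_{a}|=\mathcal{C}_{a}^{n}\) together with the bound \(2^{n}-2\):

```python
# Minimal sketch: enumerate the hypervertex classes B_a for a small vertex set
# and check |B_a| = C(n, a) (Equation 2) and |B| = 2^n - 2 (Equation 1).
from itertools import combinations
from math import comb

V = ["A", "B", "C", "D"]          # n = 4 substances
n = len(V)

B = {a: list(combinations(V, a)) for a in range(1, n)}   # B_1, ..., B_{n-1}

for a, members in B.items():
    assert len(members) == comb(n, a)
    print(f"|B_{a}| = {len(members)}")

print("total hypervertices:", sum(len(m) for m in B.values()),
      "= 2^n - 2 =", 2**n - 2)
```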
As \(B\) contains all possible sets of vertices (hypervertices of the oriented hypergraph) involved in chemical reactions for a given set of vertices, a suitable object gathering information on the connectivity of these hypervertices, that is on the chemical reactions, is the generalised _adjacency matrix of the oriented hypergraph_\(\mathbf{M}=[M_{i,j}]_{2^{n}-2\times 2^{n}-2}\), where the indices \(i,j=1,2,\ldots,2^{n}-2\) run over all the possible hypervertices for a given
\(n\). The components of the adjacency matrix are given as
\[M_{i,j}=\left\{\begin{array}{ll}1&\mbox{if $r=\{b_{i},b_{j}\}\in E$,}\\ 0&\mbox{otherwise.}\end{array}\right. \tag{3}\]
Thus, any 1-entry of \(\mathbf{M}\) corresponds to a chemical reaction between the hypervertices \(b_{i}\) and \(b_{j}\). Note that \(\mathbf{M}\) is symmetric (\(M_{i,j}=M_{j,i}\)) because the reactions \(b_{i}\to b_{j}\) and \(b_{j}\to b_{i}\) are equivalent in the oriented hypergraph. Any 0-entry in \(\mathbf{M}\) indicates either that the reaction is possible but not yet realised in the chemical space or that the reaction is not possible at all. In the first case, the two hypervertices \(b_{i}\) and \(b_{j}\) may be connected by a chemical reaction, but the chemical space at disposal has not realised it. In the second case, there is at least one common substance between \(b_{i}\) and \(b_{j}\) and the reaction is not possible in our scheme.
Let us consider a toy-chemical space of four reactions over the set of substances \(V=\{\)A, B, C, D\(\}\) (Figure 2), whose corresponding generalised matrix is shown below.
|     | A | B | C | D | AB | AC | AD | BC | BD | CD | ABC | ABD | ACD | BCD |
|-----|---|---|---|---|----|----|----|----|----|----|-----|-----|-----|-----|
| A   | x | 1 | 0 | 0 | x | x | x | 0 | 0 | 0 | x | x | x | 0 |
| B   | 1 | x | 0 | 0 | x | 0 | 0 | x | x | 0 | x | x | 0 | x |
| C   | 0 | 0 | x | 0 | 0 | x | 0 | x | 0 | x | x | 0 | x | x |
| D   | 0 | 0 | 0 | x | 0 | 1 | x | 1 | x | x | 0 | x | x | x |
| AB  | x | x | 0 | 0 | x | x | x | x | x | 0 | x | x | x | x |
| AC  | x | 0 | x | 1 | x | x | x | x | 0 | x | x | x | x | x |
| AD  | x | 0 | 0 | x | x | x | x | 1 | x | x | x | x | x | x |
| BC  | 0 | x | x | 1 | x | x | 1 | x | x | x | x | x | x | x |
| BD  | 0 | x | 0 | x | x | 0 | x | x | x | x | x | x | x | x |
| CD  | 0 | 0 | x | x | 0 | x | x | x | x | x | x | x | x | x |
| ABC | x | x | x | 0 | x | x | x | x | x | x | x | x | x | x |
| ABD | x | x | 0 | x | x | x | x | x | x | x | x | x | x | x |
| ACD | x | 0 | x | x | x | x | x | x | x | x | x | x | x | x |
| BCD | 0 | x | x | x | x | x | x | x | x | x | x | x | x | x |

Table 1: Adjacency matrix \(\mathbf{M}\) for a chemical space of four substances \(\{\)A, B, C, D\(\}\). 1-entries correspond to realised reactions, 0-entries to possible but unrealised reactions, and entries marked x to impossible reactions, that is, reactions with at least one common substance between educts and products. Rows and columns are arranged by the cardinality of the hypervertices, so that the matrix blocks gather together sets of educts with cardinality \(i\) and sets of products with cardinality \(j\).
Figure 2: Toy chemical space constituted by four substances \(\{\)A,B,C,D\(\}\) and four reactions \(r_{i}\). On the left, reactions are presented in chemical notation and on the right the chemical space is depicted as an oriented hypergraph.
Note, for instance, that the reaction A \(\rightarrow\) B or B \(\rightarrow\) A is part of the chemical space gathered in \(\mathbf{M}\), as \(M_{\text{A,B}}=M_{\text{B,A}}=1\) (Table 1). In contrast, A \(\rightarrow\) C or C \(\rightarrow\) A, although a possible reaction, has a 0-entry in \(\mathbf{M}\) because it is not part of the chemical space (Figure 2). The reaction A \(\rightarrow\) AB or AB \(\rightarrow\) A, which corresponds to A \(\rightarrow\) A + B or A + B \(\rightarrow\) A in chemical notation, is not possible because A is common to both hypervertices; it is therefore shown as an impossible entry of \(\mathbf{M}\). We thus distinguish two kinds of non-realised entries in \(\mathbf{M}\): the 0-entries correspond to possible but not realised reactions, while the entries marked x correspond to impossible reactions, owing to the commonality of at least one substance between educts and products.8
Footnote 8: The number of 0-entries amounts to the unrealised chemical reactions, which together with the 1-entries correspond to the potential chemical space, as called by some philosophers of chemistry [30].
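A minimal Python sketch of this matrix for the toy space of Figure 2 (whose four reactions are taken here to be A \(\rightarrow\) B, A + C \(\rightarrow\) D, B + C \(\rightarrow\) D and A + D \(\rightarrow\) B + C, the 1-entries of Table 1) can be written as follows:

```python
# Minimal sketch of the generalised adjacency matrix M (Equation 3) for the toy
# chemical space of Figure 2; its four reactions are taken to be
# A -> B, A + C -> D, B + C -> D and A + D -> B + C (the 1-entries of Table 1).
from itertools import combinations

V = ["A", "B", "C", "D"]
n = len(V)

# All 2^n - 2 hypervertices, ordered by size as in Table 1.
hypervertices = [frozenset(c) for a in range(1, n) for c in combinations(V, a)]
index = {h: k for k, h in enumerate(hypervertices)}

reactions = [(frozenset("A"), frozenset("B")),
             (frozenset("AC"), frozenset("D")),
             (frozenset("BC"), frozenset("D")),
             (frozenset("AD"), frozenset("BC"))]

m = len(hypervertices)                         # 2^n - 2 = 14
M = [[0] * m for _ in range(m)]
for x, y in reactions:                         # symmetric 1-entries
    M[index[x]][index[y]] = M[index[y]][index[x]] = 1

assert all(M[i][j] == M[j][i] for i in range(m) for j in range(m))
print("number of 1-entries:", sum(map(sum, M)))   # 8, i.e. two per reaction
```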
**M** can be arranged by the number of vertices belonging to the hypervertices. That is, columns and rows in \(\mathbf{M}\) can be arranged from \(B_{1}\), \(B_{2}\), \(\ldots\), up to \(B_{n-1}\). This is the scheme we adopted to present \(\mathbf{M}\) above (Table 1). The number of vertices of the hypervertices allows for classifying reactions in terms of their size. Given a reaction \(r=\{b_{i},b_{j}\}\) between hypervertices \(b_{i}\) and \(b_{j}\), of cardinalities \(i\) and \(j\), the size of \(r\) is given by \(s(r)=i+j\). That is, the _size of a reaction_ corresponds to the number of substances involved in the reaction.9 Thus, \(s(r_{1})=2\), \(s(r_{2})=s(r_{3})=3\) and \(s(r_{4})=4\) for the chemical space of Figure 2. Reaction size is bounded by \(2\leq s(r)\leq n\) as a reaction must involve at least an educt and a product and the largest reaction must involve no more than \(n\), the number of substances of the chemical space.
Footnote 9: The size of a reaction corresponds to the molecularity of the reaction [31] if the stoichiometric coefficients of the reaction are regarded. As this is not, in general, the case in studies on the chemical space [24, 9], the size of a reaction may be regarded as a proto-molecularity of the reaction. It only accounts for the number of different chemicals reported in the reaction, but not for their actual figures. Often, chemists omit writing, for instance, water or carbon dioxide, as those substances can be inferred from the context of the reaction or because of the tradition to emphasise the target product of a reaction, namely of a chemical synthesis [27].
Based on the size of reactions, the size of the chemical space, encoded in \(G\) (Definition 1), can be introduced.
**Definition 2**.: _The size\(s(G)\) of an oriented hypergraph \(G\), whose oriented hyperedges are gathered in \(E\), is given by_
\[s(G)=\sum_{r\in E}s(r). \tag{4}\]
Note how the chemical space of four substances in Figure 2 has \(s(G)=12\). As we discuss below, \(s(G)\) becomes a proxy for the connectivity of the chemical space, which is straightforward approached through the degree of a vertex (substance).
Following the definition of vertex degree in graph theory [2], the degree of a vertex \(v\in V\) (\(d(v)\)) of an oriented hypergraph \(G\) corresponds to the number of oriented hyperedges (reactions) in which the substance participates. For the oriented hypergraph of Figure 2, \(d(\text{A})=d(\text{B})=d(\text{C})=d(\text{D})=3\). Likewise, the degree of an oriented hypergraph \(d(G)\) can be defined.
**Definition 3**.: _The degree\(d(G)\) of an oriented hypergraph \(G\), of vertices gathered in \(V\), is given by_
\[d(G)=\sum_{v\in V}d(v). \tag{5}\]
Thus, \(d(G)=12\) for the oriented hypergraph of Figure 2. There is an interesting relation between size and degree of a chemical space modelled as an oriented hypergraph.
**Lemma 1**.: _The size of an oriented hypergraph \(G\) and its degree are equal. That is_
\[s(G)=d(G). \tag{6}\]
Proof.: Given an oriented hypergraph \(G=(V,E)\) made of \(n\) vertices (substances) gathered in \(V\) and of \(u\) reactions (oriented hyperedges) gathered in \(E\), \(G\) is equivalent to a bipartite graph whose two vertex classes correspond to substances \((v)\) and reactions \((r)\), with an edge between \(v\) and \(r\) whenever \(v\) participates in \(r\). The degree sum formula for a bipartite graph states that both classes have the same degree sum [32], that is,

\[d(G)=\sum_{v\in V}d(v)=\sum_{r\in E}d(r), \tag{7}\]

and as in this bipartite representation \(d(r)=s(r)\), then

\[\sum_{r\in E}d(r)=\sum_{r\in E}s(r)=s(G), \tag{8}\]

so that \(d(G)=s(G)\).
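A short Python sketch illustrates Lemma 1 on the toy space of Figure 2, computing size and degree directly from Definitions 2 and 3:

```python
# Minimal sketch: size and degree of the toy chemical space of Figure 2
# (Definitions 2 and 3), illustrating Lemma 1, s(G) = d(G).
V = {"A", "B", "C", "D"}
reactions = [({"A"}, {"B"}), ({"A", "C"}, {"D"}),
             ({"B", "C"}, {"D"}), ({"A", "D"}, {"B", "C"})]

s_G = sum(len(x) + len(y) for x, y in reactions)                    # Equation 4
d_G = sum(sum(v in x or v in y for x, y in reactions) for v in V)   # Equation 5

print("s(G) =", s_G, " d(G) =", d_G)   # both equal 12
assert s_G == d_G
```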
Thus, size and degree of the oriented hypergraph \(G\) modelling the chemical space indicate how tight or dense the chemical space is. Low size or degree values indicate a sparse chemical space, while high values a strongly connected space. In order to provide a baseline for comparison of sizes and degrees of chemical spaces, upper and lower bounds of those oriented hypergraph sizes and degrees need to be determined.10 This, in turn, requires determining the maximum and minimum number of reactions a given set \(V\) of \(n\) substances may hold.11 We call the _complete oriented hypergraph_ over \(V\) the oriented hypergraph holding the maximum number of reactions over the \(n\) substances gathered in \(V\).
Footnote 10: Bounds for size and degree of oriented hypergraphs are provided in Lemma 7.
Footnote 11: See Lemma 8.
Given that the adjacency matrix \(\mathbf{M}\) can be arranged according to the size of its hypervertices, it can be conveniently written as
\[\mathbf{M}=\left(\begin{array}{cccc}\mathbf{M}_{1,1}&\mathbf{M}_{1,2}& \cdots&\mathbf{M}_{1,n-1}\\ \mathbf{M}_{2,1}&\mathbf{M}_{2,2}&\cdots&\mathbf{M}_{2,n-1}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{M}_{n-1,1}&\mathbf{M}_{n-1,2}&\cdots&\mathbf{M}_{n-1,n-1}\end{array} \right), \tag{9}\]
where \(\mathbf{M}_{i,j}\) indicates the block of \(\mathbf{M}\) containing information on the relationship between hypervertices of size \(i\) and of size \(j\).
**Lemma 2**.: _The blocks \(\mathbf{M}_{i,j}\) with \(i+j>n\) are null blocks._
Proof.: As reactions with size \(i+j>n\) necessarily have a common substance, out of the \(n\) substances, then those reactions gathered in the blocks \(\mathbf{M}_{i,j}\) with \(i+j>n\) are impossible. Therefore, for blocks \(\mathbf{M}_{i,j}\) with \(i+j>n\), their \(\mathbf{M}\)-entries are \(0\)-entries.
The effect of Lemma 2 upon the possible number of reactions of a given chemical space is enormous, as it shows that the disjoint condition for hypervertices belonging to reactions reduces to a large extent the actual possibilities for exploring new chemical combinations. Recall that these reactions correspond to the entries marked x in the matrix \(\mathbf{M}\) shown in Table 1, where the effect upon a chemical space of only four substances can be seen. Figure 3 shows the proportion of impossible reactions for larger spaces. Equation 17 below quantifies how rapidly impossible reactions grow as a function of \(n\).
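This growth can be checked directly with a small Python sketch that counts, for a few values of \(n\), the ordered pairs of hypervertices sharing at least one substance:

```python
# Minimal sketch: fraction of impossible entries of M (ordered pairs of
# hypervertices sharing at least one substance) for growing n, echoing Figure 3.
from itertools import combinations

for n in (4, 7, 10):
    V = list(range(n))
    hv = [frozenset(c) for a in range(1, n) for c in combinations(V, a)]
    total = len(hv) ** 2                           # (2^n - 2)^2 matrix entries
    disjoint = sum(1 for x in hv for y in hv if x.isdisjoint(y))
    print(f"n={n}: impossible fraction = {(total - disjoint) / total:.3f}")
```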
In order to determine the number of reactions of the complete oriented hypergraph, we analyse the number of reactions in each block \(\mathbf{M}_{i,j}\) of the adjacency matrix.
**Lemma 3**.: _The number of oriented hyperedges for the block matrix \(\mathbf{M}_{i,j}\) in a complete oriented hypergraph is given by_
\[u_{i,j}(n)=\left\{\begin{array}{ll}\mathcal{C}_{i}^{n}\mathcal{C}_{j}^{n-i}&i \neq j\\ \frac{1}{2}\mathcal{C}_{i}^{n}\mathcal{C}_{i}^{n-i}&i=j\end{array}\right. \tag{10}\]
_where \(\mathcal{C}_{i}^{n}\) is given in Equation 2._
Proof.: If \(i\neq j\), the number of oriented hyperedges \(u_{i,j}(n)\) corresponds to the number of 1-entries in the block matrix \(\mathbf{M}_{i,j}\), that is, to the sum of the entries of \(\mathbf{M}_{i,j}\). Moreover, as \(\mathbf{M}\) is symmetric, \(u_{i,j}(n)=u_{j,i}(n)\). The same symmetry argument indicates that if \(i=j\), the number of oriented hyperedges is half the number of 1-entries of the block matrix \(\mathbf{M}_{i,j}\). It is therefore important to calculate the number of 1-entries of a block matrix \(\mathbf{M}_{i,j}\). According to Equation 2, the number of rows of this block is \(\mathcal{C}_{i}^{n}\). Given that vertices are indistinguishable, the number of 1-entries is the same for each row. Hence, the number of 1-entries of \(\mathbf{M}_{i,j}\) is given by \(\mathcal{C}_{i}^{n}\) multiplied by the number of 1-entries per row. As every 1-entry represents a chemical reaction, the intersection between the row hypervertex (of size \(i\)) and the column hypervertex (of size \(j\)) must be empty. Thus, the number of vertices available for the column hypervertex is \(n-i\), and the number of 1-entries in each row is \(\mathcal{C}_{j}^{n-i}\), which is the number of combinations of \(j\) vertices out of \(n-i\) vertices. Substituting this result into the previous relations, we obtain
\[u_{i,j}(n)=\mathcal{C}_{i}^{n}\,\mathcal{C}_{j}^{n-i},\]
which is the total number of non-zero components for the block matrix \(\mathbf{M}_{i,j}\) when \(i\neq j\). If \(i=j\), we have to consider half the amount of 1-entries, that is to say
\[u_{i,i}(n)=\frac{1}{2}\mathcal{C}_{i}^{n}\,\mathcal{C}_{i}^{n-i}.\]
Knowing the number of reactions in every block of \(\mathbf{M}\), we can determine the number of reactions (oriented hyperedges) of a given size in the complete oriented hypergraph.
Figure 3: Amount of possible and impossible reactions. Visual depiction of adjacency matrices \(\mathbf{M}\) for chemical spaces of a) \(n=4\), b) \(n=7\) and c) \(n=10\) substances (vertices). Possible reactions (black entries) correspond to oriented hyperedges, where the related hypervertices (sets of substances) are disjoint. Impossible reactions (red entries) are the oriented hyperedges relating non-disjoint sets of substances.
**Lemma 4**.: _The number of oriented hyperedges of size \(s\) in a complete oriented hypergraph is given by_
\[u_{s}(n)=(2^{s-1}-1)\mathcal{C}_{s}^{n} \tag{11}\]
Proof.: Oriented hyperedges with size \(s\) are located within block matrices \(\mathbf{M}_{i,j}\) such that \(i+j=s\). Hence, the block matrices satisfying this condition are \(\{\mathbf{M}_{1,s-1},\mathbf{M}_{2,s-2},\ldots,\mathbf{M}_{s-1,1}\}\). The number of oriented hyperedges of each of these block matrices is given by the Lemma 3. Therefore, the number of oriented hyperedges with size \(s\) is given by
\[u_{s}(n)=\frac{1}{2}\sum_{i=1}^{s-1}n\,u_{i,s-i}=\frac{1}{2}\sum_{i=1}^{s-1} \mathcal{C}_{i}^{n}\mathcal{C}_{s-i}^{n-i}=\frac{1}{2}\mathcal{C}_{s}^{n}\sum _{i=1}^{s-1}\mathcal{C}_{i}^{s}=\left(2^{s-1}-1\right)\mathcal{C}_{s}^{n}\]
Thus, for the chemical space of the adjacency matrix shown in Table 1, \(u_{2}(4)=6\), \(u_{3}(4)=12\) and \(u_{4}(4)=7\).
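A brute-force Python sketch confirms these values and, more generally, Lemma 4 for a small \(n\):

```python
# Minimal sketch: brute-force check of Lemma 4, u_s(n) = (2^(s-1) - 1) C(n, s),
# by enumerating unordered pairs of disjoint hypervertices of total size s.
from itertools import combinations
from math import comb

def u_s_brute(n, s):
    V = list(range(n))
    hv = [frozenset(c) for a in range(1, n) for c in combinations(V, a)]
    return sum(1 for x, y in combinations(hv, 2)
               if x.isdisjoint(y) and len(x) + len(y) == s)

n = 4
for s in range(2, n + 1):
    print(f"u_{s}({n}) = {u_s_brute(n, s)}, "
          f"formula: {(2**(s - 1) - 1) * comb(n, s)}")
```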
Now, we can determine the number of reactions in which a substance can participate in a complete oriented hypergraph.
**Lemma 5**.: _Given a complete oriented hypergraph, the number of oriented hyperedges in which a vertex participates in the block matrix \(\mathbf{M}_{i,j}\) is_
\[u_{i,j}(n)=\left\{\begin{array}{cc}\frac{(i+j)}{n}\,\mathcal{C}_{i+j}^{n}\,\mathcal{C}_{j}^{i+j}&i\neq j\\ \frac{i}{n}\,\mathcal{C}_{2i}^{n}\,\mathcal{C}_{i}^{2i}&i=j\end{array}\right. \tag{12}\]
Proof.: Let us consider an arbitrary block matrix \(\mathbf{M}_{i,j}\) and an arbitrary vertex \(v_{1}\). This block matrix can be split into two parts: one in which the vertex \(v_{1}\) appears in the hypervertex describing the rows of the block, and the other in which the substance appears in the column hypervertex (the intersection is obviously excluded). The number of (row) hypervertices in which the substance \(v_{1}\) appears is given by \(\mathcal{C}_{i-1}^{n-1}\), which is the number of combinations of \(i-1\) vertices out of \(n-1\) vertices. On the other hand, for the same part, the number of (column) hypervertices disjoint from such a row hypervertex is given by \(\mathcal{C}_{j}^{n-i}\), which is the number of combinations of \(j\) vertices out of \(n-i\) vertices. Therefore, the total number of oriented hyperedges in which \(v_{1}\) appears in the hypervertex \(b_{i}\) is given by \(\mathcal{C}_{i-1}^{n-1}\mathcal{C}_{j}^{n-i}\).

Let us now consider the second part of the same block matrix \(\mathbf{M}_{i,j}\). In this case the number of (column) hypervertices in which \(v_{1}\) is contained is given by \(\mathcal{C}_{j-1}^{n-1}\), which, similarly to the previous case, is the number of combinations of \(j-1\) vertices out of \(n-1\) vertices. On the other hand, and still in the second part, the number of (row) hypervertices disjoint from such a column hypervertex is given by \(\mathcal{C}_{i}^{n-j}\), which corresponds to the number of combinations of \(i\) substances out of the \(n-j\) remaining substances. Therefore, the total number of oriented hyperedges in which \(v_{1}\) appears in the hypervertex \(b_{j}\) is given by \(\mathcal{C}_{i}^{n-j}\mathcal{C}_{j-1}^{n-1}\).
Combining these results, we have that the number of oriented hyperedges in which \(v_{1}\) can appear in the block matrix \(M_{i,j}\), when \(i\neq j\), is given by
\[u_{i,j}(n)=\mathcal{C}_{i-1}^{n-1}\mathcal{C}_{j}^{n-i}+\mathcal{C}_{i}^{n-j} \mathcal{C}_{j-1}^{n-1}=\frac{(i+j)}{n}\,\mathcal{C}_{i+j}^{n}\,\mathcal{C}_ {j}^{i+j}\]
and when \(i=j\) we have half the number of oriented hyperedges, that is
\[u_{i,i}(n)=\frac{i}{n}\,\mathcal{C}_{2i}^{n}\,\mathcal{C}_{i}^{2i}\]
The above remarks allow for determining the number of reactions in the complete oriented hypergraph in which a substance can participate (Lemma 6), as well as the number of reactions of a complete oriented hypergraph (Lemma 8).
**Lemma 6**.: _The number of oriented hyperedges in which an arbitrary vertex can belong in a complete oriented hypergraph is given by_
\[u(n)=3^{n-1}-2^{n-1}. \tag{13}\]
Proof.: By considering the result of Lemma 5, we obtain
\[u(n) =\frac{1}{2}\sum_{i=1}^{n-1}\sum_{j=1}^{n-i}u_{i,j}=\frac{1}{2} \sum_{i=1}^{n-1}\,\sum_{j=1}^{n-i}\frac{(i+j)}{n}\mathcal{C}_{i}^{n}\mathcal{C }_{j}^{n-i}\] \[=\frac{1}{2n}\left[\sum_{i=1}^{n-1}i\,\mathcal{C}_{i}^{n}\sum_{j= 1}^{n-i}\mathcal{C}_{j}^{n-i}+\sum_{i=1}^{n-1}\,\mathcal{C}_{i}^{n}\sum_{j=1}^{ n-i}\,j\,\mathcal{C}_{j}^{n-i}\right]\] \[=\frac{1}{2n}\left\{\sum_{i=1}^{n-1}i\,\mathcal{C}_{i}^{n}\left[2 ^{n-i}-1\right]+\sum_{i=1}^{n-1}\,\mathcal{C}_{i}^{n}\left[(n-i)2^{n-i-1} \right]\right\}\] \[=\frac{1}{2n}\sum_{i=1}^{n-1}\mathcal{C}_{i}^{n}\left\{\frac{i}{2 }\,2^{n-i}-i+\frac{n}{2}\,2^{n-i}\right\}=3^{n-1}-2^{n-1}\]
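A brute-force Python sketch of this count for a fixed vertex confirms Lemma 6 for small values of \(n\):

```python
# Minimal sketch: count the oriented hyperedges of the complete oriented
# hypergraph containing one fixed vertex and compare with Lemma 6,
# u(n) = 3^(n-1) - 2^(n-1).
from itertools import combinations

for n in (3, 4, 5, 6):
    V = list(range(n))
    hv = [frozenset(c) for a in range(1, n) for c in combinations(V, a)]
    with_v0 = sum(1 for x, y in combinations(hv, 2)
                  if x.isdisjoint(y) and 0 in (x | y))
    print(f"n={n}: brute force {with_v0}, formula {3**(n - 1) - 2**(n - 1)}")
```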
From the chemical perspective, this implies that a substance can participate at most in \(u(n)\) chemical reactions, that is to say, the maximum degree of a substance is \(u(n)\). A question opened by Lemma 1 was about the bounds for the size and degree of an oriented hypergraph. Lemma 6 provides the information to determine them.
**Lemma 7**.: _The size \(s(G)\) and degree \(d(G)\) of an oriented hypergraph is bounded by \(0\leq x(G)\leq n(3^{n-1}-2^{n-1})\), where \(x(G)\) stands for either \(s(G)\) or \(d(G)\)._
Proof.: Minimum size and minimum degree of an oriented hypergraph \(G\) are reached for the case of a hyperedge-less oriented hypergraph. Therefore, \(\min s(G)\) and \(\min d(G)=0\). The maximum value of these parameters is reached for the complete hypergraph. As \(\max d(G)\) corresponds to the sum of the degree of each vertex in the complete oriented hypergraph, this amounts to add the number of oriented hyperedges in which each vertex in the complete oriented hypergraph belongs. As \(3^{n-1}-2^{n-1}\) (Lemma 6) is the number of oriented hyperedges in which a vertex can belong in the complete oriented hypergraph, summing over all vertices yields \(\max d(G)=n(3^{n-1}-2^{n-1})\). As \(d(G)=s(G)\) (Lemma 1), then, \(\max s(G)=n(3^{n-1}-2^{n-1})\)
Thus, for the toy chemical space \(G\) depicted in Figure 2, \(s(G)=d(G)\in[0,76]\). As we found that these figures are equal to 12 for that chemical space, it is therefore observed how far the toy chemical space is from being a complete oriented hypergraph, with \(s(G)=d(G)=76\), and how close it is to being a hyperedge-less oriented hypergraph, with \(s(G)=d(G)=0\).
Lemma 6 also allows for determining the number of reactions housed by a complete oriented hypergraph.
**Lemma 8**.: _The number of oriented hyperedges for a complete oriented hypergraph is [9]_
\[u_{r}(n)=\frac{1}{2}(3^{n}-2^{n+1}+1). \tag{14}\]
Proof.: By the Lemma 3, it follows that
\[u_{r}(n) =\frac{1}{2}\sum_{i=1}^{n-1}\sum_{j=1}^{n-i}u_{i,j}(n)=\frac{1}{2} \sum_{i=1}^{n-1}\sum_{j=1}^{n-i}\mathcal{C}_{i}^{n}\,\mathcal{C}_{j}^{n-i}=\frac {1}{2}\sum_{i=1}^{n-1}\mathcal{C}_{i}^{n}\sum_{j=1}^{n-i}\mathcal{C}_{j}^{n-i}\] \[=\frac{1}{2}\sum_{i=1}^{n-1}\mathcal{C}_{i}^{n}\left[2^{n-i}-1 \right]=\frac{1}{2}\sum_{i=1}^{n-1}\mathcal{C}_{i}^{n}2^{n-i}-\frac{1}{2}\sum_ {i=1}^{n-1}\mathcal{C}_{i}^{n}\] \[=\frac{1}{2}\left[3^{n}-2^{n}-1\right]-\frac{1}{2}\left[2^{n}-2 \right]=\frac{1}{2}(3^{n}-2^{n+1}+1)\]
This indicates that, for the toy example of Figure 2, the reactions of the chemical space of four substances are only four out of the 25 possible ones. These 25 possible reactions correspond, in the adjacency matrix of Table 1, to the 1-entries plus the 0-entries of the triangle above (or, equivalently, below) the main diagonal, and to the black entries of the same triangle in the left matrix of Figure 3.
Just to have an idea of the speed of growth of \(u_{r}\) regarding \(n\), for \(n=2\) to 5, \(u_{r}\) takes values 1, 6, 25, and 90. This growth is given by
\[\frac{du_{r}}{dn}=\frac{1}{2}(3^{n}\ln 3-2^{n+1}\ln 2) \tag{15}\]
This quantifies the speed of growth of the possible chemical space as a function of the number of substances of the space. It constitutes the upper bound of wiring of any chemical space, which sets the stage to contrast this upper bound with the historical record of the chemical space. This subject is explored in a forthcoming paper.
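The values quoted above can be checked with a short Python sketch that compares a brute-force enumeration with the closed formula of Lemma 8:

```python
# Minimal sketch: number of reactions of the complete oriented hypergraph,
# brute force versus the closed formula of Lemma 8, for n = 2, ..., 5.
from itertools import combinations

def u_r_brute(n):
    V = list(range(n))
    hv = [frozenset(c) for a in range(1, n) for c in combinations(V, a)]
    return sum(1 for x, y in combinations(hv, 2) if x.isdisjoint(y))

for n in range(2, 6):
    formula = (3**n - 2**(n + 1) + 1) // 2
    print(f"n={n}: brute force {u_r_brute(n)}, formula {formula}")
```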
The number of possible reactions in the complete oriented hypergraph allows for determining the number of impossible reactions because of the disjoint condition of educts and products, which is given by
\[z(n) =(2^{n}-2)^{2}-\frac{1}{2}(3^{n}-2^{n+1}+1)\] \[=\frac{1}{2}(2\cdot 4^{n}-3^{n}-6\cdot 2^{n}+7), \tag{16}\]
which for \(n\) ranging from 2 to 5 yields \(z=3\), 30, 171 and 810. In fact,
\[\frac{dz}{dn}=\frac{1}{2}(4^{n}\cdot\ln 16-3^{n}\cdot\ln 3-2^{n}\cdot\ln 64), \tag{17}\]
which corresponds to the speed of growth of red 0-entries in any adjacency matrix \(\mathbf{M}\).
When comparing the growth of the number of possible reactions (Equation 15) with that of the number of impossible reactions (Equation 17), we observe that the former grows much more slowly than the latter. This pattern is observed in Figure 3 for different values of \(n\).12
Footnote 12: A further question is how many of the possible reactions are actually realised by chemists in the chemical space. This is a subject we address in a forthcoming paper.
Equipped with the results of this section, we proceed to develop the Erdos-Renyi model for oriented hypergraphs.
## 4 Erdos-Renyi model for oriented hypergraphs
Although the literature on Erdos-Renyi-like models for hypergraphs goes back at least 20 years [33, 34, 35, 36, 37, 38, 39, 5, 40, 41, 42], most of those models are devoted to uniform hypergraphs, while a few of them to non-uniform ones [36, 37, 38, 39]. By uniform hypergraphs we mean hypergraphs whose hyperedges have the same cardinality. Some of those studies explore the statistical and mathematical properties of substructures embedded in random hypergraphs [40, 41, 42]. In general, none of those results addresses the particular case of oriented hypergraphs, which is the model we develop in this section.
Given a set of vertices \(V\), the random oriented hypergraph corresponds to the realisation, or not, of every possible oriented hyperedge on \(V\). The probability of realisation of these hyperedges is given by \(p\). As in the Erdos-Renyi model for graphs, the random Erdos-Renyi oriented hypergraph can be thought of as the result of an algorithm that takes every possible couple of disjoint hypervertices on \(V\) and decides whether to link them or not. The decision depends on generating a random number, and if that number happens to be less than a predetermined value, denoted as \(p\), then the hypervertices are connected; otherwise, they remain disconnected.
**Definition 4**.: _An Erdos-Renyi random oriented hypergraph \(G(n,p)\), is an oriented hypergraph with \(n\) vertices whose oriented hyperedges result from linking hypervertices with probability \(p\)._
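A minimal Python sketch of such a sampling algorithm (the function name and parameters are illustrative choices, not part of the formal model) is:

```python
# Minimal sketch of Definition 4: every pair of disjoint hypervertices is
# linked independently with probability p.
import random
from itertools import combinations

def sample_oriented_hypergraph(n, p, seed=None):
    rng = random.Random(seed)
    V = list(range(n))
    hv = [frozenset(c) for a in range(1, n) for c in combinations(V, a)]
    return [(x, y) for x, y in combinations(hv, 2)
            if x.isdisjoint(y) and rng.random() < p]

G = sample_oriented_hypergraph(n=6, p=0.05, seed=0)
print("sampled reactions:", len(G))
print("reaction sizes:", sorted(len(x) + len(y) for x, y in G))
```

Every pair of disjoint hypervertices is examined exactly once, so the number of coin flips equals \(u_{r}(n)\).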
This random process leads to particular kinds of probability mass functions for the quantities described in previous sections such as degree and size of oriented hyperedges. This results from the mathematical consistency of the Erdos-Renyi model here presented. The expressions for the probability mass functions are provided in the remaining part of this section. To begin with, the number of reactions \(R\) is a binomially distributed random variable, \(R\sim\mathrm{B}(u_{r},p)\), with probability mass function given by
**Proposition 1**.: _The probability of having a \(G(n,p)\) with \(R\) oriented hyperedges is_
\[\text{Pr}(R=r)=\left(\begin{array}{c}u_{r}\\ r\end{array}\right)\,p^{r}\,\left(1-p\right)^{u_{r}-r}, \tag{18}\]
_which results from realising \(r\) reactions and, therefore, of having \(u_{r}-r\) non-realised reactions. The expected value of the number of reactions in \(G(n,p)\) is given by_
\[\text{E}[R]=\sum_{r=0}^{u_{r}}r\,\text{Pr}(R=r)=p\,u_{r}, \tag{19}\]
_where \(u_{r}\) is given in Equation (14)._
This implies that the expected number of reactions in a random oriented hypergraph is proportional to the maximum number of possible reactions. The actual number is weighted by the probability \(p\).
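A short Monte Carlo sketch in Python illustrates Proposition 1, comparing the empirical mean number of reactions with \(p\,u_{r}\):

```python
# Minimal sketch: Monte Carlo check of Proposition 1, E[R] = p u_r, for n = 5
# and p = 0.1 (u_r = 90 by Lemma 8).
import random
from itertools import combinations

n, p, trials = 5, 0.1, 20000
rng = random.Random(1)
V = list(range(n))
hv = [frozenset(c) for a in range(1, n) for c in combinations(V, a)]
disjoint_pairs = [(x, y) for x, y in combinations(hv, 2) if x.isdisjoint(y)]

u_r = len(disjoint_pairs)
mean_R = sum(sum(rng.random() < p for _ in disjoint_pairs)
             for _ in range(trials)) / trials
print(f"u_r = {u_r}, empirical E[R] = {mean_R:.2f}, p*u_r = {p * u_r:.2f}")
```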
As we discuss in Definitions 2 and 3 and in Lemma 1, size and degree of an oriented hypergraph become important proxies to determine how tight or sparse is a chemical space. The random model naturally links the tightness of a chemical space with the probability of wiring the associated oriented hypergraph. Thus, random processes based on high values of \(p\) lead to high size and degree oriented hypergraphs, while processes underlying low \(p\) figures, necessarily lead to sparse oriented hypergraphs with small sizes and degrees.
The role of \(p\) is also central for the probability of observing a given number of reactions of a particular size (Remark 1).
**Remark 1**.: _The number of oriented hyperedges with size \(s\) is a binomially distributed random variable, \(R_{s}\sim B(u_{s},p)\), such that its probability mass function is given by_
\[\text{Pr}(R_{s}=r_{s})=\left(\begin{array}{c}u_{s}\\ r_{s}\end{array}\right)\,p^{r_{s}}\,\left(1-p\right)^{u_{s}-r_{s}}, \tag{20}\]
_which results from considering that there are \(r_{s}\) realised reactions of size \(s\) and \(u_{s}-r_{s}\) non-realised reactions with the same size \(s\). As a result, the expected value of the number of oriented hyperedges of size \(s\) is_
\[E[R_{s}]=\sum_{r_{s}=0}^{u_{s}}r_{s}\text{Pr}(R_{s}=r_{s})=pu_{s}. \tag{21}\]
_This leads to determining the probability of having a reaction with size \(s\). This probability \(P(s)\) is given by the ratio of the expected number of reactions with size \(s\) (\(E[R_{s}]\)) and the sum of the total number of expected reactions for the different sizes_
\[P(s)=\frac{E[R_{s}]}{\sum_{s^{\prime}=2}^{n}E[R_{s^{\prime}}]}=\frac{u_{s}}{u_{r}}. \tag{22}\]
_Hence, \(P(s)\) corresponds to the ratio between \(u_{s}\), the number of reactions with size \(s\), and the number of possible reactions \(u_{r}\), where \(u_{s}\) and \(u_{r}\) are given in Lemmas 4 and 8, respectively. Remarkably, this probability \(P(s)\) associated with the size \(s\) is \(p\)-independent in the random model here defined.13_
Footnote 13: This indicates that, in the case of a phase transition for this model, the probability \(P(s)\) cannot be altered by the criticality.
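A minimal Python sketch evaluates this size distribution for a small chemical space, checking that it is independent of \(p\) and sums to one:

```python
# Minimal sketch: size distribution P(s) = u_s / u_r (Equation 22) for n = 6;
# it does not depend on p and sums to one.
from math import comb

n = 6
u_s = {s: (2**(s - 1) - 1) * comb(n, s) for s in range(2, n + 1)}
u_r = (3**n - 2**(n + 1) + 1) // 2

for s, u in u_s.items():
    print(f"P({s}) = {u / u_r:.3f}")
print("sum of P(s):", sum(u_s.values()) / u_r)   # 1.0
```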
Finally, another implication of the random model is that the vertex degree is also a random variable as stated in the following Remark.
**Remark 2**.: _The vertex degree is a binomially distributed discrete random variable, \(D\sim\text{B}(u_{n},p)\), with probability mass function of the form_
\[\text{Pr}(D=d)=\left(\begin{array}{c}u_{n}\\ d\end{array}\right)\,p^{d}\,\left(1-p\right)^{u_{n}-d}, \tag{23}\]
_which again, results from having \(d\) realised reactions for an arbitrary substance out of the \(u_{n}\) reactions in which the substance can participate, and \(u_{n}-d\) non-realised reactions. Therefore, the expected value of the vertex degree is_
\[\text{E}[D]=\sum_{d=0}^{u_{n}}d\,\text{Pr}(D=d)=p\,u_{n}, \tag{24}\]
_where \(u_{n}\) is given in Equation 13._
Equations 19, 21, 22 and 24 show that our model is conceptually correct. As a consequence, in any test for randomness of real data modelled as an oriented hypergraph, each of the probability distributions discussed here should be statistically close to those given in the aforementioned equations, if the data are randomly generated. On the other hand, as the expected number of reactions of the random oriented hypergraph is related to the expected number of reactions a given substance has, we obtain the following expression for the ratio of those two variables
\[\frac{\text{E}[R]}{\text{E}[D]}=\frac{1}{2}\frac{\left[1-2\left(\frac{2}{3} \right)^{n}+3^{-n}\right]}{\left[\frac{1}{3}-\frac{1}{2}\left(\frac{2}{3} \right)^{n}\right]}, \tag{25}\]
which, for a large number of substances leads to
\[\lim_{n\to\infty}\frac{\mathrm{E}[R]}{\mathrm{E}[D]}=\frac{3}{2}. \tag{26}\]
That is, if a large chemical space is randomly wired, the number of reactions it houses is \(3/2\) times the number of reactions in which any given substance of the space participates. Therefore, the ratio \(\mathrm{E}[R]/\mathrm{E}[D]\) can be used to test whether a given chemical space is close to randomness or not. This result clearly contrasts with its Erdos-Renyi analog for simple graphs, which takes the form
\[\frac{\mathrm{E}[R]}{\mathrm{E}[D]}=\frac{n}{2}, \tag{27}\]
and which grows linearly with the number of substances. In the case of the chemical space oriented hypergraph, the factor \(3/2\) is actually an upper bound to the ratio \(\mathrm{E}[R]/\mathrm{E}[D]\).
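Since \(p\) cancels in the ratio, \(\mathrm{E}[R]/\mathrm{E}[D]=u_{r}(n)/u(n)\), and a few lines of Python show how it approaches \(3/2\) from below:

```python
# Minimal sketch: the ratio E[R]/E[D] = u_r(n)/u(n) of Equation 25 approaches
# 3/2 from below as the number of substances grows (Equation 26).
for n in (2, 5, 10, 20, 50):
    u_r = (3**n - 2**(n + 1) + 1) / 2
    u_n = 3**(n - 1) - 2**(n - 1)
    print(f"n = {n:2d}: E[R]/E[D] = {u_r / u_n:.4f}")
```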
Aiming at more insight into the effect of \(p\) on the relation between the number of substances (\(n\)) and the expected number of reactions (\(\mathrm{E}[R]\)), we explore different forms \(p\) might take. They range from a constant number of reactions, independent of the number of substances (\(\mathrm{E}[R]\sim k\)), through a simple linear relation (\(\mathrm{E}[R]\sim kn\)), to more complex relations in which the expected number of reactions varies as a power of the number of substances (\(\mathrm{E}[R]\sim n^{\alpha}\)) or even grows exponentially with it (\(\mathrm{E}[R]\sim b^{n}\)). To do so, we analyse some cases of chemical spaces for which different values of \(n\) and \(p\) are considered.
From Equation 19 we know that for large values of \(n\), \(\ln\mathrm{E}[R]\) takes the form14
Footnote 14: It is known that for the actual chemical space \(n\sim 10^{6}\)[24].
\[\ln\mathrm{E}[R]\sim\alpha\ln n+n\ln\left(\frac{3}{\beta}\right), \tag{28}\]
where the above discussed chemical spaces are generalised by considering a probability given as
\[p=n^{\alpha}/\beta^{n}. \tag{29}\]
With \(\beta=3\), Equation 28 becomes
\[\ln\mathrm{E}[R]\sim\alpha\ln n, \tag{30}\]
which is a linear relation in a log-log scale with \(\alpha\) encoding the slope of the linear trend. When \(\alpha=0\), \(\mathrm{E}[R]\sim 1/2\), in which case \(p=1/3^{n}\) and no matter how large the number of substances in the chemical space is, the number of reactions reported by chemists is always a fixed number. \(\mathrm{E}[R]\sim n\) is obtained with \(\alpha=1\), where \(p=n/3^{n}\). This linear relation between \(\mathrm{E}[R]\) and \(n\) indicates that chemists manage wiring the space in a manner that is proportional to the available substances. \(\mathrm{E}[R]\sim n^{\alpha}\) is obtained with \(p=n^{\alpha}/3^{n}\). If \(\alpha>1\), the greater the value of \(\alpha\), the more reactions are discovered. The scenario with \(\alpha<0\) yields a sparse chemical space with a decreasing power law relation in which the more substances, the less reactions are discovered. These behaviours are shown in Figure 4a for \(\alpha=-1,0,1\) and \(2\).
In turn, \(\mathrm{E}[R]\sim b^{n}\), actually \(\mathrm{E}[R]\sim 3^{n}\) is reached with \(\beta=1\), in which case \(p=n^{\alpha}\) and the leading order of Equation 28 gives
\[\ln\mathrm{E}[R]\sim n\ln\left(3\right). \tag{31}\]
Hence, the log-plot of the \(\mathrm{E}[R]\) as a function of \(n\) depicts a constant slope for different values of \(\alpha\), as can be seen in Figure 4b. This latter result follows from the fact that for large values of \(n\) and fixed and small values of \(\alpha\), the first term is negligible (note that \(\alpha\leq 0\) to secure that \(0\leq p\leq 1\)).
These results, besides their importance for the analysis of chemical spaces, pave the way for the exploration of phase transitions in Erdos-Renyi-like oriented hypergraphs. In this respect, although chemical spaces with \(\mathrm{E}[R]\ll 1\) lack chemical meaning,15 they become interesting for analysing the aforementioned phase transitions.
Footnote 15: Which occur for low values of \(n\) in Figure 4.
The probability of triggering a reaction not only affects the number of reactions but also the size of those reactions in the chemical space. Therefore, we explore how the different values of \(p\), given in Equation 29, affect the number of reactions of different sizes. From Equation 21 we know that the expected number of reactions of size \(s\) (\(\mathrm{E}[R_{s}]\)) is given by \(pu_{s}\). That is, \(\mathrm{E}[R_{s}]\) results from the probability of realising or not reactions of size \(s\) in the complete oriented hypergraph. By operating on the expressions for \(u_{s}\) (Lemma 4), we found, for large values of \(n\) and with \(\beta=1\), that
\[\ln\mathrm{E}[R_{s}]\sim(s+\alpha)\ln n, \tag{32}\]
where we used the binomial coefficient approximation for large values of \(n\) and fixed (small) values of \(s\) together with the Stirling approximation [43]. This expression is similar to Equation 30 in what it shows three regimes attending to the value of slope \(s+\alpha\).
Recalling that \(\alpha<0\) to guarantee that \(0\leq p\leq 1\), this result indicates that in a random chemical space where the number of reactions grows exponentially with the number of substances (Equation 31), the number of reactions with size \(s\) either drops, remains constant or increases, depending on whether the slope \(s+\alpha\) is negative, null or positive, respectively. The general expression for large values of \(n\) is given by \(\mathrm{E}[R_{s}]\sim n^{s-|\alpha|}\). For example, if \(\alpha=-2\), rearrangement reactions (\(s=2\)) remain constant with the number of available substances \(n\), while reactions with size \(s>2\) follow a power law \(\mathrm{E}[R_{s}]\sim n^{s-2}\). Given that the smallest value of \(s\) is \(2\), for \(\alpha=-2\) there is no possibility of a negative slope. However, for a different value, for example \(\alpha=-4\), reactions with size \(s<4\) drop following a power law and give rise to a sparse population of reactions with those sizes. In Figure 5a we show an example for \(\alpha=-2\), \(\beta=1\) and \(s=2,3,4\) and \(5\).
As \(\beta\) may take also the value of \(3\) for chemical spaces with either linear or power-law growth of the number of reactions, for these cases the leading order of \(\ln\mathrm{E}[R_{s}]\) takes the form
\[\ln\mathrm{E}[R_{s}]\sim-n\ln 3, \tag{33}\]
which shows that the asymptotic behaviour of \(\mathrm{E}[R_{s}]\) for large values of \(n\) is a decreasing exponential \(\mathrm{E}[R_{s}]\sim 1/3^{n}\) in terms of the number of substances \(n\) and for any size \(s\), \(s\) being much smaller than \(n\).16 However, the number of reactions with a fixed size \(s\), \(\mathrm{E}[R_{s}]\), reaches a maximum value at a number of substances \(n_{max}\) given by the solution to the equation
Footnote 16: It is known that \(s\ll n\) for actual chemical spaces [24].
\[\frac{\alpha}{n_{max}}+\ln\left[\frac{n_{max}}{3(n_{max}-s)}\right]=0 \tag{34}\]
where again, we used the Stirling approximation for large values of \(n\)[43]. This implies that in a random linear or power-law wiring of the chemical space, the number of
reactions of size \(s\) reaches its maximum population with spaces of \(n=n_{max}\) substances (Figure 5b). This result shows another facet of randomly wired chemical spaces: large randomly wired spaces are mainly populated by reactions of large size, while reactions of small size represent only a small fraction of the bulk of the reactions. This suggests that actual chemical spaces, mainly populated by reactions of small size [24], are indeed far from a random wiring.

Figure 4: Effects of the probability of triggering chemical reactions upon the expected number of reactions of randomly wired chemical spaces. Probability is expressed as \(p=n^{\alpha}/\beta^{n}\) and the plots show how the expected number of reactions \(\mathrm{E}[R]\) varies with the selection of \(\alpha\) and \(\beta\). In a) \(\beta=3\) and \(\alpha\) takes different values, which show the decreasing power law (\(\alpha=-1\)), the constant (\(\alpha=0\)), linear (\(\alpha=1\)) and quadratic (\(\alpha=2\)) growth of \(\mathrm{E}[R]\) for large values of the number of substances \(n\). In all these chemical spaces, where \(\beta=3\), \(\alpha\leq(n\ln 3)/(\ln n)\) to guarantee that \(0\leq p\leq 1\). In b) \(\beta=1\) and \(\alpha\leq 0\) to secure that \(0\leq p\leq 1\). These plots correspond to exponential-like growths of \(\mathrm{E}[R]\) for large values of \(n\), where the slope of the linear fit tends to \(\ln 3\approx 1.099\). Plots in a) and b) were obtained for different values of \(n\) in \(\mathrm{E}[R]=\frac{n^{\alpha}}{2\beta^{n}}(3^{n}-2^{n+1}+1)\).
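Rather than solving Equation 34, the maxima of Figure 5b can be recovered by scanning \(\ln\mathrm{E}[R_{s}]\) directly over \(n\) in a short Python sketch (here for \(\alpha=2\), \(\beta=3\)):

```python
# Minimal sketch: locate n_max by scanning the exact ln E[R_s] over n for
# p = n^alpha / 3^n with alpha = 2 (the setting of Figure 5b).
from math import comb, log

alpha = 2.0
for s in (50, 100, 150, 200):
    n_max = max(range(s + 1, 400),
                key=lambda m: alpha * log(m) - m * log(3)
                              + log((2**(s - 1) - 1) * comb(m, s)))
    print(f"s = {s}: n_max = {n_max}")   # 76, 151, 226, 301
```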
Figure 5: Effects of the probability of triggering reactions upon the expected number of reactions of different sizes in a randomly wired chemical space. Probability is given by \(p=n^{\alpha}/\beta^{n}\). a) Behaviour at \(\alpha=-2\) and \(\beta=1\), which corresponds to a chemical space whose number of reactions expands exponentially with the number of substances; only reactions with \(s>|\alpha|\) grow in number. b) Distribution of the number of reactions of size \(s=50,100,150\) and \(200\) for chemical spaces with \(\alpha=2\) and \(\beta=3\), corresponding to spaces whose number of reactions grows as a power law of the number of substances. Maximum values are given at \(n_{max}=76,151,226\) and \(301\), respectively.

So far, we have compared the values of \(\mathrm{E}[R_{s}]\) for different realisations of a random chemical space. To explore how the numbers of reactions with different sizes \(s\) relate to each other within the same chemical space, we now fix \(n\) and note that, for \(\beta=1\), the probability of wiring reactions, \(p=n^{\alpha}\), is much larger than \(p=n^{\alpha}/3^{n}\), the wiring probability with \(\beta=3\), with \(\alpha\) fixed in both cases. Consequently, the number of reactions \(\mathrm{E}[R]\) is much larger in the first case than in the second, see Figure 4. This implies that the number of reactions with a given size \(s\) follows the same trend, that is to say, the number of reactions with size \(s\), \(\mathrm{E}[R_{s}]\), is larger when the probability is given by \(p=n^{\alpha}\) than it would be for the lower probability \(p=n^{\alpha}/3^{n}\), see Figure 6. Additionally, as long as \(p\) is considered to be \(s\)-independent, for a given number of substances \(n\), \(\mathrm{E}[R_{s}]\) reaches a maximum at a \(p\)-independent size value \(s_{max}\). The value of the most populated size \(s_{max}\) is the solution to the equation (see Figure 6):
\[\frac{2^{s_{max}-1}\ln 2}{2^{s_{max}-1}-1}+\ln\left(\frac{n-s_{max}}{s_{max}} \right)=0, \tag{35}\]
where we used the Stirling approximation for large values of \(n\)[43].
## 5 Conclusion and outlook
We developed the Erdos-Renyi model for oriented hypergraphs with a fixed number \(n\) of vertices. Oriented hypergraphs result from the binary relations between sets of arbitrary size of vertices (hypervertices). In particular, we considered oriented hypergraphs where the related hypervertices are disjoint. This follows from our aim of modelling all possible chemical reactions, that is catalysed and non-catalysed reactions, with the former including autocatalytic reactions.
Central for the Erdos-Renyi model is the complete oriented hypergraph, as the model randomly realises oriented hyperedges of the complete oriented hypergraph. This realisation is mediated by a probability \(p\). We analysed different functional forms of \(p\), which
Figure 6: Effects of the probability of triggering reactions upon the expected number of reactions of different sizes in randomly wired chemical spaces with a fixed number of substances, \(n=100\) and wiring probability given by \(p=n^{\alpha}/\beta^{n}\). Behaviour for different values of \(\alpha\) and \(\beta\). For \(\beta=3\), \(\alpha=0,1,2\) and \(3\), while for \(\beta=1\), \(\alpha=-3,-2,-1\) and \(0\). The maximum values of \(\mathrm{E}[R_{s}]\) are given at \(s_{max}=67\).
depend on \(n\) and allow for constant, linear, power-law and exponential behaviours of the number of oriented hyperedges as a function of \(n\). These forms of \(p\), as well as the trends and their effects upon the number of oriented hyperedges, constitute an approach to determine whether a given oriented hypergraph follows the patterns of a randomly wired oriented hypergraph.
The application motivating this study is the chemical space, which we model as an oriented hypergraph, where vertices correspond to substances and oriented hyperedges to chemical reactions. Two main reasons make oriented hypergraphs a suitable model for chemical spaces or, in general, for sets of chemical reactions. First, hypervertices, which are the relata of the oriented hypergraph model, gather together the substances involved in a chemical reaction. Therefore, hypervertices are instrumental in distinguishing the substances involved in a chemical transformation from those which are not. Second, oriented hyperedges, that is, binary collections of hypervertices, encode a fundamental aspect of chemical reactions, namely the distinction between educts and products.
The Erdos-Renyi model here formulated is central for chemical studies since the availability of chemical information in electronic form, spanning centuries of chemical activity, together with the ever growing computational power, has made it possible to study the evolution of the entire chemical space [24, 9]. This has led to results on the growth rate of the space, as well as to analyses of its diversity at the compositional and molecular structural levels, as well as at the reaction type level [24, 9, 25, 44, 45]. Despite the importance of these studies, an open question is whether the chemical space has a particular structure and how far, or close, such a structure lies from its random counterpart [27]. This boils down to the question of whether the chemical space has been randomly wired at any moment of its expansion, which poses further questions on the nature of chemistry as a discipline [27]. At any rate, determining whether the chemical space is far from or close to randomness requires quantifying the distance between the actual chemical space at a particular time and its random case. The Erdos-Renyi model, therefore, becomes central for such a quantification, which is the subject of a forthcoming publication. In this respect, by analysing the number of reactions involving a certain number of substances in a randomly wired chemical space, we found evidence that the actual chemical space seems to depart from a random wiring, as random spaces with a number of substances similar to that of the actual chemical space are mainly populated by reactions involving far more than the handful of substances typical of actual chemical reactions. The same approach of exploring, over time, the distribution of reactions involving a certain number of substances becomes central to analyse whether there are subspaces of the chemical space, with rather small numbers of substances, that are close to a random wiring. A similar argument may be used to analyse the random wiring of spaces going beyond the traditional two or three educts of chemical reactions [24], which is one of the objectives of one-pot reactions [46], for instance, which aim at affording a circular and green chemistry.
As recently discussed [28], oriented hypergraphs become suitable models to encode the molecular reality of reactions if every oriented hyperedge is conservative. That is, if its associated stoichiometric matrix, which encodes the actual amount of substances involved in the reaction, has a strictly positive reaction invariant. This mathematical condition implies that a chemical reaction must preserve the number of atoms and, therefore, mass. The condition imposed by the stoichiometric matrix triggers further questions. For instance, one might inquire into the frequency with which the random model fulfills the given criterion as a function of the number of oriented hyperedges. Our model can be extended to incorporate such stoichiometric analyses but it implies several changes in the adjacency matrix, which plays a fundamental role in obtaining most of the expressions used here. Such a modification requires further investigation and it is beyond the scope of the current paper.
The Erdos-Renyi model is general enough to be applied to non-chemical systems modelled by oriented hypergraphs, which include description of logical and artificial intelligence processes, as well as database modelling [23]. Particular examples include the case of functional dependencies in relational databases [47] or the analysis of Horn formulae in logic [48] and the study of AND-OR graphs [49].
While developing the Erdos-Renyi model, we introduced concepts of further interest for the study of oriented hypergraphs, such as the size and degree of oriented hyperedges, which we extended to the size and degree of the entire oriented hypergraph structure made by the collection of oriented hyperedges. In chemical terms, we defined the size and degree of any reaction and we extended these concepts to the size and degree of arbitrary chemical spaces. The size of an oriented hyperedge corresponds to the number of vertices belonging to the hyperedge, while the degree of the oriented hyperedge accounts for the number of oriented hyperedges incident to the vertices of the oriented hyperedge of reference. In chemical terms, the size of a reaction accounts for the number of substances in the reaction and the degree of the reaction for the number of reactions in which substances of the reaction in question participate. We showed that the size and degree of an oriented hypergraph are equal. They indicate whether an oriented hypergraph is sparse or loosely connected, which occurs for structures with low size and degree values, that is, close to 0. In contrast, oriented hypergraphs with high values, close to the upper bound of size and degree (\(n(3^{n-1}-2^{n-1})\)), turn out to be tightly connected structures.
By analysing the complete oriented hypergraph for \(n\) vertices, we found that the maximum number of oriented hyperedges incident to any vertex is \(3^{n-1}-2^{n-1}\), which in chemical terms amounts to determining the maximum number of reactions for a substance in a chemical space. This expression evidences the extremely large possibilities for wiring the chemical space [9]. This result led to finding, as discussed before, that the size and degree of any oriented hypergraph are restricted to the interval \([0,\,n(3^{n-1}-2^{n-1})]\). The extreme wiring possibilities of a chemical space of \(n\) substances are embodied in the maximum number of reactions a chemical space may hold, which turns out to be \(\frac{1}{2}(3^{n}-2^{n+1}+1)\). This is the number of oriented hyperedges of the complete oriented hypergraph. The aforementioned result contrasts strongly with the \(n(n-1)/2\) edges of a graph. The fact that the maximum number of reactions a chemical space may hold is proportional to \(3^{n}\), when modelled as an oriented hypergraph, contrasts sharply with the roughly \(n^{2}\) edges of the models of the chemical space based on graphs. These huge differences between the number of oriented hyperedges and edges are the result of the much richer description of chemical reactions provided by the oriented hypergraph model, where the two sets of substances involved in chemical reactions (educts and products) are an explicit part of the model, whereas in the graph model, they are just disregarded.
The number of reactions in the complete oriented hypergraph allowed for determining the speed of growth of its oriented hyperedges as a function of the number of vertices (\(du_{r}/dn\)). We found that \(du_{r}/dn=\frac{1}{2}(3^{n}\ln 3-2^{n+1}\ln 2)\). This result bears important implications for chemistry, as it provides the upper bound for the growth of the number of chemical reactions as a function of the number of available substances in the chemical space. This allows for determining that the expected number of reactions of a random oriented hypergraph is given by \(\frac{p}{2}(3^{n}-2^{n+1}+1)\) and for deriving similar expressions for the expected number of reactions of size \(s\). These results, in turn, allow for contrasting actual chemical spaces with random chemical spaces at different levels of detail, for instance by analysing the actual and expected number of reactions of particular sizes. An important invariant of random oriented hypergraphs we found is the ratio between the expected number of reactions and the expected degree of a substance. For large values of \(n\), we found this ratio is \(3/2\), which becomes a proxy to determine whether a given chemical space is random or not, according to the Erdos-Renyi model
here presented.
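As a sanity check of these counting formulas, the short sketch below verifies the closed-form expressions against a brute-force enumeration for small \(n\), and evaluates the ratio of the expected number of reactions to the expected degree of a single substance (our reading of the \(3/2\) invariant; the reaction probability \(p\) cancels in this ratio). The sketch is illustrative only and assumes nothing beyond the formulas above.

```python
from itertools import product

def count_oriented_hyperedges(n):
    """Brute-force count of unordered pairs {X, Y} of disjoint, non-empty subsets
    of an n-set: one oriented hyperedge (reaction) per such pair."""
    ordered = 0
    for assignment in product(range(3), repeat=n):   # 0: in X, 1: in Y, 2: in neither
        if 0 in assignment and 1 in assignment:
            ordered += 1
    return ordered // 2

for n in range(2, 8):
    brute = count_oriented_hyperedges(n)
    max_reactions = (3**n - 2**(n + 1) + 1) // 2      # oriented hyperedges of the complete hypergraph
    max_degree = 3**(n - 1) - 2**(n - 1)              # reactions incident to a single substance
    ratio = max_reactions / max_degree                # expected reactions / expected vertex degree (p cancels)
    print(n, brute, max_reactions, max_degree, round(ratio, 3))
```

The printed ratio increases towards \(3/2\) as \(n\) grows, consistent with the invariant discussed above.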
Despite the richness of the oriented hypergraph for modelling chemical spaces, it is open to improvements, for instance by introducing direction between the sets of substances involved in reactions, which would clearly distinguish educts from products. In this case, chemical spaces are modelled as directed hypergraphs. The Erdos-Renyi model presented here requires further refinements to be adjusted for directed hypergraphs. These involve, for instance, adjusting \(p\) to distinguish between \(X\to Y\) and \(Y\to X\), with \(X\) and \(Y\) being sets of substances (hypervertices). We discussed the functional form of \(p\) as depending on \(n\), but other forms of \(p\) are also worth exploring, for instance as a function of reaction size (\(s\)). This leads to exploring how the wiring of random chemical spaces depends on the number of substances involved in their chemical transformations, which is a determining factor of the chemical possibilities to trigger actual chemical reactions [50].
Besides the Erdos-Renyi model for the higher-order structures discussed in this paper, namely hypergraphs as well as directed and oriented hypergraphs, other models need to be defined upon them, for instance the small-world [51] and the Barabasi-Albert [52] models. As in the original Erdos-Renyi model, where phase transitions and the conditions to attain them were studied, a further avenue of research is the study of the conditions for phase transitions in higher-order structures such as the ones presented here.
## 6 Author contributions
AG-C conducted the research, AG-C and GR conceptualised the project, GR wrote the paper and AG-C, MBM, PFS, JJ and GR discussed and reviewed the edited document.
## 7 Conflicts of Interest Statement
We have no competing interests.
## 8 Funding
MBM thanks the support of the Alexander von Humboldt Foundation.
## 9 Acknowledgments
The authors thank the feedback from Guido Montufar and Humberto Laguna upon early results of this project.
|
2309.09704 | Interface stabilization in adhesion caused by elastohydrodynamic
deformation | Interfacial instabilities are common phenomena observed during adhesion
measurements involving viscoelastic polymers or fluids. Typical probe-tack
adhesion measurements with soft adhesives are conducted with rigid probes.
However, in many settings, such as for medical applications, adhesives make and
break contact from soft surfaces such as skin. Here we study how detachment
from soft probes alters the debonding mechanism of a model viscoelastic polymer
film. We demonstrate that detachment from a soft probe suppresses
Saffman-Taylor instabilities commonly encountered in adhesion. We suggest the
mechanism for interface stabilization is elastohydrodynamic deformation of the
probe and propose a scaling for the onset of stabilization. | Preetika Karnal, Yumo Wang, Anushka Jha, Stefan Gryska, Carlos Barrios, Joelle Frechette | 2023-09-18T12:16:22Z | http://arxiv.org/abs/2309.09704v1 | # Interface stabilization in adhesion caused by elastohydrodynamic deformation
###### Abstract
Interfacial instabilities are common phenomena observed during adhesion measurements involving viscoelastic polymers or fluids. Typical probe-tack adhesion measurements with soft adhesives are conducted with rigid probes. However, in many settings, such as for medical applications, adhesives make and break contact from soft surfaces such as skin. Here we study how detachment from soft probes alters the debonding mechanism of a model viscoelastic polymer film. We demonstrate that detachment from a soft probe suppresses Saffman-Taylor instabilities commonly encountered in adhesion. We suggest the mechanism for interface stabilization is elastohydrodynamic deformation of the probe and propose a scaling for the onset of stabilization.
+
Footnote †: preprint: APS/123-QED
There is a wide interest in controlling interfacial instabilities, as they often affect the process in which they are formed.[1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11] Interfacial instabilities can be a safety hazard for batteries,[12; 13] limit oil recovery,[14] impact properties of graphene sheets,[15] enhance the mixing of fluids,[16; 17] or guide the fabrication of soft materials[18; 19; 20]. A common interfacial instability is the Saffman-Taylor type, manifested as undulating patterns formed in narrow gaps at fluid-fluid interfaces when a lower viscosity fluid displaces a higher viscosity fluid.[21; 22; 23; 24; 25] Their onset can be controlled through low flow rates[25] or local geometry [26; 1; 2]. For example, elastic deformation of a membrane ahead of the fluid-fluid front alters the flow and suppresses viscous instabilities.[27; 10] Due to their sensitivity to the flow profile, interfacial instabilities could potentially be manipulated in contact problems, such as in adhesion, where they are a source of energy dissipation.[28; 29; 30; 24]
Adhesion between two soft materials is ubiquitous during contact of medical adhesives or flexible electronics with skin.[32; 33; 34; 35; 36; 37] Despite its technological significance, studies of adhesion between two soft materials are limited, but reveal qualitative differences from debonding from a rigid surface.
viscoelastic adhesive (thickness of \(b\) = 25 \(\mu\)m, Young's modulus \(\sim\)30 kPa, **Fig. 1A**). The adhesion measurements are conducted on a microscope with bottom and side view imaging.[46] During detachment, the adhesive-air interface is unstable and fingers form and grow until complete debonding (**Fig. 1C**). A distinguishing feature of interfacial instabilities in adhesion is the dependence of their wavelength, \(\lambda\), on the detachment velocity. For a Saffman-Taylor instability \(\lambda\) scales with the film thickness (\(b\)) and the Capillary number (\(Ca=\eta^{*}U/\gamma\)) as:
\[\lambda=\pi b/\sqrt{Ca} \tag{1}\]
where \(\eta^{*}\) is the complex viscosity of the adhesive, \(U\) is the radial velocity, and \(\gamma\) is the surface tension of the adhesive-air interface.[25, 27, 23, 28] The complex viscosity accounts for the viscoelasticity of the adhesive. We measured adhesion for different detachment velocities and film thicknesses (25-100 \(\mu\)m) and characterized fingering wavelengths at their onset (lowest strain in the films). We then compared our measurements to **Eqn. (1)** by determining the capillary number using the radial velocity of the growing fingers' apex, the complex viscosity \(\eta^{*}\), and the surface tension (45 \(\pm\) 2 mN/m).[31, 47] Agreement between data and **Eqn. (1)**, **Fig. 1B**, confirms the presence of Saffman-Taylor instabilities[31]. In contrast, an elastic instability in the PSA would have a wavelength that depends only on the thickness of the adhesive (\(\lambda_{e}=4b\))[31, 48, 49, 50, 51], and would quadruple as we quadruple the film thickness. Instead, quadrupling the thickness changes the wavelength by a factor of 3-12 depending on the velocity, with the wavelength decreasing as the velocity increases, both characteristic of Saffman-Taylor instabilities.
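As a rough numerical illustration of **Eqn. (1)**, the sketch below evaluates the predicted wavelength using the film thickness and surface tension quoted above; the complex viscosity and the apex velocities are placeholder values (assumptions for the sketch, not the measured rate-dependent \(\eta^{*}\)).

```python
import math

def saffman_taylor_wavelength(b, eta_star, U, gamma):
    """Eq. (1): lambda = pi * b / sqrt(Ca), with Ca = eta* U / gamma."""
    Ca = eta_star * U / gamma
    return math.pi * b / math.sqrt(Ca)

b = 25e-6        # film thickness, m (from the text)
gamma = 45e-3    # adhesive-air surface tension, N/m (from the text)
eta_star = 1e3   # complex viscosity, Pa.s (assumed order of magnitude)

for U in (10e-6, 50e-6, 200e-6):   # radial apex velocities, m/s (assumed)
    lam = saffman_taylor_wavelength(b, eta_star, U, gamma)
    print(f"U = {U*1e6:5.0f} um/s -> Ca = {eta_star*U/gamma:5.2f}, lambda = {lam*1e6:6.1f} um")
```

Consistent with the scaling, the computed wavelength decreases with increasing velocity and grows linearly with the film thickness.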
We then repeat the same measurements, but with silicone probes of increasing compliance. The compliant probes are made of PDMS (polydimethylsiloxane) of different crosslinking ratios that were extracted after curing to remove unreacted oligomers, and treated with plasma to render their surface hydrophilic. The soft probes have nearly the same geometry and surface energy as the rigid probes, but with a Young's modulus that varies from \(\sim\)2 MPa to \(\sim\)0.2 MPa.[31] We estimate the \(Ca\) of the adhesive film during the detachment and find it comparable to values for rigid probes that displayed Saffman-Taylor instabilities (red arrows, **Fig. 1B**). Detachment with the stiffer PDMS leads to an unstable interface, but the interface stabilizes for softer probes (**Fig. 1D,E**).
Due to confinement, the compliance of the adhesive film is smaller than that of its bulk counterpart, and also smaller than that of all soft probes investigated.[31] While a PSA is a viscoelastic solid, a simple stress-strain model in which the thin adhesive film is in series with a soft probe (\(k_{\text{PSA}}\gg k_{\text{probe}}\)) suggests a significant dissipative response due to the complex viscosity of the adhesive.[31] Therefore, even if the adhesive film is a solid, its dynamic response is dominated by viscoelasticity. Moreover, recent work shows that in the case of elastic instabilities the interface can become stable as the probe modulus increases, the opposite of our observations.[52]
As the probe compliance increases, the interface becomes stable during detachment (**Fig. 1C-1E**). Because only the compliance of the probe is varied (and not its surface energy), the experiments suggest the importance of compliance for interface stabilization.[31] The transition to a stable interface also has no impact on the adhesive strength (\(F_{\text{max}}\) in **Fig. 2 Inset**). For the same debonding velocity the adhesive strength is nearly the same for all probe moduli, without any distinction between stable and unstable interfaces. For the sphere-plane geometry the adhesive strength is independent of compliance, but the mode of failure can affect the force profile.[53, 54] A small plateau in force was also observed with the onset of fingering instabilities when lifting rigid plates confining a viscous fluid,[29] whereas adhesion-induced elastic instabilities increased the resistance to deformation, leading to higher forces.[55]
We also find that stabilization of the interface is not due to a change in probe surface energy. The relationship between the adhesive strength (\(F_{\text{max}}\)), debonding velocity (\(v\)), and compliance is well-established and given by:
\[F_{\text{max}}=2\left[\frac{A_{0}}{C_{\text{sys}}}G_{0}\left(\frac{v}{v_{ref}} \right)^{n}\right]^{\frac{1}{2}}, \tag{2}\]
Figure 2: Adhesive strength for different probes and retraction velocities. The slope \(\sim\left(2\sqrt{G_{0}/v_{\text{ref}}^{0.4}}\right)\) increases with probe surface energy. There is no distinction in the adhesive strength for a stable (pink) or unstable (blue) interface. Inset: Debonding curve between soft PDMS probes and adhesive films at \(v\) = 50\(\mu\)m/s. An increase in probe compliance decreases the slope. The maximum force, \(F_{\text{max}}\), is independent of probe modulus.
where \(G_{0}\) is the intrinsic strain energy release rate, \(A_{0}\) is the maximum contact area, \(C_{\text{sys}}\) is the system compliance, \(v\) is the debonding velocity, and \(n\) is an empirical constant, here \(n=0.4\).[53; 54; 56; 55] Therefore, for a constant apparent work of adhesion we expect a linear relationship between \(F_{\text{max}}\) and \(\sqrt{A_{0}v^{0.4}/C_{\text{sys}}}\) with a slope \(2\sqrt{G_{0}/v_{\text{ref}}^{0.4}}\). The adhesion data follow the established force scaling relationship well, with no departure from the linear relationship that would indicate a change in surface energy for softer PDMS probes. Data for the hydrophilic PDMS includes the adhesive strength for probes with elastic moduli between 0.18 and 1.8 MPa (**Fig. 2**).[31] The linear relationship observed across PDMS probe moduli confirms the constant apparent surface energy. This linear relationship also holds for probes of different surface energy, but with a different slope (silica and hydrophobic PDMS, **Fig. 2**).
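A minimal sketch of the collapse implied by **Eqn. (2)** is given below; all parameter values are hypothetical and serve only to illustrate that \(F_{\text{max}}/\sqrt{A_{0}v^{0.4}/C_{\text{sys}}}\) is constant when \(G_{0}\) is constant.

```python
import numpy as np

def f_max(A0, C_sys, G0, v, v_ref=1.0, n=0.4):
    """Eq. (2): adhesive strength from contact area, system compliance and velocity."""
    return 2.0 * np.sqrt(A0 / C_sys * G0 * (v / v_ref) ** n)

G0 = 5.0                                     # intrinsic strain energy release rate, J/m^2 (assumed)
A0 = np.pi * (1e-3) ** 2                     # maximum contact area, m^2 (assumed)
for C_sys in (1e-6, 3e-6, 1e-5):             # system compliance, m/N (assumed)
    for v in (1e-5, 5e-5, 2e-4):             # debonding velocity, m/s (assumed)
        x = np.sqrt(A0 * v ** 0.4 / C_sys)
        slope = f_max(A0, C_sys, G0, v) / x  # should equal 2*sqrt(G0/v_ref^0.4) for every row
        print(f"C_sys = {C_sys:.0e} m/N, v = {v:.0e} m/s: F_max/x = {slope:.3f}")
```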
Side view imaging shows that the transition to a stable interface is accompanied by significant elastic deformation of the probe, **Fig. 3**. The forces resisting the probe's upward motion within the adhesive film cause elastic deformation of the probe and appear to stabilize the interface. For hydrophilic PDMS probes, interface stabilization occurs despite the probes having the same intrinsic surface energy. Using \(G_{0}/(E_{\text{eff}}a)\) we evaluate the intrinsic strain energy release rate normalized with the contact compliance (or the elastoadhesive length normalized with the contact radius)[57], where the effective modulus is \(E_{\text{eff}}=3/(4C_{\text{sys}}a)\) and \(a\) is the contact radius. This quantity represents the ability of a material to resist crack propagation through elasticity. Changes in the relative importance between contact compliance and surface energy in the contact region, \(G_{0}a^{2}\), **Fig. 4A**, do not delineate stable from unstable interfaces. In other words, the deformation of the probe is not dominated by an increased contribution from the surface energy as the probe modulus decreases.
Here, debonding occurs between a soft probe and a viscoelastic adhesive. At any given time the measured force is due to surface, viscoelastic, and elastic (probe deformation) contributions. The elastic (probe deformation) and viscous ("flow" of the adhesive) forces are highly coupled. Elastohydrodynamic deformation occurs when the viscous forces in a fluid are strong enough to cause elastic deformation of an opposing surface.[58; 59; 60; 61] We hypothesize that the probe deformation alters the pressure distribution within the adhesive film, leading to a suppression of Saffman-Taylor instabilities. The relative importance of elastohydrodynamic deformation can be estimated through an elasticity parameter, \(\epsilon\) (**Eqn. (3)**), obtained from non-dimensionalization of the lubrication equation:[58; 59; 62]
\[\epsilon=\frac{\eta^{*}vR^{1.5}}{E_{\text{probe}}^{*}b^{2.5}}. \tag{3}\]
The elasticity parameter can be viewed as a ratio between the viscous forces within the adhesive film and the elastic forces within the probe. As \(\epsilon\) increases the elastic deformation
Figure 3: Side and bottom view images during debonding over time. Rigid probe (A) bottom and (B) side views. Soft probe, \(E_{\text{probe}}\) = 0.18 MPa (C) bottom and (D) side views. Instabilities are present during debonding from the rigid probe. Side view images (B) show stretching of adhesive. For the soft probe (C) the interface is stable, side views (D) show probe deformation. Note the different scale and magnification between the side and bottom views; the arrows represent the same dimension.
of the probe (\(w\)) increases. For low \(\epsilon\) viscous forces do not cause probe deformation. We previously found that the dimensionless central deformation (\(\tilde{w}=w/b\)) of a spherical probe scales with \((6\epsilon)^{0.4}\).[63]
A plot of \(\epsilon\) as a function of debonding velocity (\(v\)) shows a clear demarcation between stable and unstable interfaces (**Fig. 4B**). The transition to a stable interface occurs across different materials systems and experimental parameters: probe modulus, radius, detachment velocity, and film thickness. The transition between an unstable and stable interface occurs around \(\epsilon=1\), when the elastic forces in the probe begin to dominate over the viscous forces in the adhesive film. The transition to a stable interface as the elasticity parameter increases supports the hypothesis that elastohydrodynamic deformation of the probes suppress the fingering instabilities.
As the velocity increases the adhesive strength increases, and stabilization of the interface shifts to higher \(\epsilon\) (**Fig. 4B**). We compare the effect of the debonding velocity on the probe deformation and on the pressure within the film. An increase in \(v\) will increase the pressure within the adhesive film, which has a destabilizing tendency for the interface. However, increasing the velocity also increases the probe deformation, which we hypothesize stabilizes the interface. Non-dimensionalization of the lubrication equation leads to a characteristic pressure in the fluid, \(p^{*}=\eta vR/b^{2}\).[63] For a viscoelastic film, \(p^{*}=\eta^{*}vR/b^{2}\), therefore \(p^{*}\propto v^{(1-m)}\), with \(\eta^{*}\propto v^{-m}\) capturing the dependence of the complex viscosity on velocity. For our material here \(m=0.725\), giving \(p^{*}\propto v^{0.28}\). Moreover, the dimensionless central probe deformation scales as \(\tilde{w}\sim v^{0.4(1-m)}\), and specifically for our material system \(\tilde{w}\sim v^{0.11}\). Therefore, as \(v\) increases the pressure within the film (\(p^{*}\sim v^{0.28}\)) increases faster than the deformation of the probe (\(\tilde{w}\sim v^{0.11}\)). The faster increase in pressure within the film as \(v\) increases would necessitate larger probe deformations to stabilize the interface, thus a higher elasticity parameter is needed for stabilization.
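The sketch below evaluates the elasticity parameter of **Eqn. (3)** and the velocity exponents quoted above. The material parameters are placeholders (in particular the complex viscosity is taken as an assumed constant, whereas in the experiments \(\eta^{*}\) depends on rate), so the numbers only illustrate how \(\epsilon\) scales with the probe modulus.

```python
def elasticity_parameter(eta_star, v, R, E_probe, b):
    """Eq. (3): epsilon = eta* v R^1.5 / (E*_probe b^2.5)."""
    return eta_star * v * R ** 1.5 / (E_probe * b ** 2.5)

eta_star = 1.0e3   # complex viscosity, Pa.s (assumed)
R = 6.0e-3         # probe radius, m
b = 25.0e-6        # adhesive film thickness, m
v = 50.0e-6        # debonding velocity, m/s

for E_probe in (0.18e6, 1.8e6, 3.0e6):       # probe Young's moduli, Pa
    eps = elasticity_parameter(eta_star, v, R, E_probe, b)
    print(f"E_probe = {E_probe/1e6:4.2f} MPa -> epsilon = {eps:8.2f}")

m = 0.725                                     # power-law exponent of eta* ~ v^(-m)
print(f"p*  ~ v^{1 - m:.3f}   (characteristic film pressure)")
print(f"w/b ~ v^{0.4*(1 - m):.3f}   (dimensionless central probe deformation)")
```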
We study the relationship between elastohydrodynamic deformation and adhesive film pressure by modeling debonding between a soft probe and a rigid surface submerged in a Newtonian fluid.[31] In the model the
Figure 4: (A) Elastoadhesive length normalized by the contact radius vs effective surface energy for all probes. (B) Elasticity parameter (\(\epsilon\)) vs debonding velocity (\(v\)). The transition from unstable to stable interface is observed around \(\epsilon=1\) (black dotted line). Data include adhesives with \(b=25\), \(50\), and \(100\)\(\mu\)m and \(R\) between 4.5 mm and 14 mm, and show unstable (blue) and stable (pink) interfaces.
Figure 5: Dimensionless pressure (\(p^{*}=\eta vR/b^{2}\)) versus dimensionless radial position (\(R_{H}=\sqrt{2Rb}\)) obtained from modeling the detachment of soft (\(E_{\text{probe}}\)=0.32 MPa) and stiff (\(E_{\text{probe}}\)=3 MPa) PDMS probes of R = 6 mm at 50 \(\mu\)m/s and \(\eta\)=1000 Pa.s, b=20 \(\mu\)m. Retraction of the soft probe leads to lower fluid pressure and appearance of stagnation point delineating drainage and infusion regions.
fluid viscosity is comparable to the complex viscosity of the adhesive. This model is a highly simplified version of our experiments, in that the adhesive is treated as a viscous fluid without an air-adhesive interface present. We extract the pressure profile during detachment for both rigid and soft probes and obtain lower fluid pressure with the soft probe, **Fig. 5**, which would have a stabilizing effect. We also observe that the elastohydrodynamic probe deformation leads to a non-monotonic pressure drop within the fluid. In contrast, the pressure distribution is monotonic during the detachment from a rigid probe. Moreover, deformation of the soft probe leads to a negative pressure gradient at the center point, causing fluid _drainage_ from the center during detachment, while further away from the center the pressure drop is positive, leading to the expected fluid _infusion_. Between the drainage and infusion regions there is a stagnation point where the pressure gradient is zero. The stagnation point moves towards the center of the probe during retraction (**Fig. 5**). Because of incompressibility, the surfaces initially move closer at the center point during detachment. The combination of lower pressure and a stagnation point could suppress the Saffman-Taylor instabilities during the detachment from a soft probe, and will be the subject of future studies.
In summary, the detachment of a viscoelastic adhesive from soft surfaces suppresses the onset of Saffman-Taylor instabilities. While elasticity has been shown previously to impact Saffman-Taylor instabilities, we show here the connection with adhesion. Controlling the mode of failure during debonding between soft materials could impact adhesion (and pain) with skin. We attribute stabilization of the interface to elastohydrodynamic deformation of the probe caused by viscoelasticity. The elasticity parameter can serve as a guide for interfacial stability. A simple model shows that replacing a rigid probe with a soft one leads to a decrease in the pressure drop and the appearance of a stagnation point within the film, both of which could lead to interface stabilization. Further studies are necessary to better understand the detachment process between two soft materials and the stabilization of the interface.
_Acknowledgements:_ This work was supported by 3M and by the National Science Foundation (NSF-CMMI 1728082). Y.W. also acknowledges support from the National Natural Science Foundation of China (Grant No. 51804319).
|
2309.06437 | Non-constant ground configurations in the disordered ferromagnet | The disordered ferromagnet is a disordered version of the ferromagnetic Ising
model in which the coupling constants are non-negative quenched random. A
ground configuration is an infinite-volume configuration whose energy cannot be
reduced by finite modifications. It is a long-standing challenge to ascertain
whether the disordered ferromagnet on the $\mathbb{Z}^D$ lattice admits
non-constant ground configurations. We answer this affirmatively in dimensions
$D\ge 4$, when the coupling constants are sampled independently from a
sufficiently concentrated distribution. The obtained ground configurations are
further shown to be translation-covariant with respect to $\mathbb{Z}^{D-1}$
translations of the disorder.
Our result is proved by showing that the finite-volume interface formed by
Dobrushin boundary conditions is localized, and converges to an infinite-volume
interface. This may be expressed in purely combinatorial terms, as a result on
the fluctuations of certain minimal cutsets in the lattice $\mathbb{Z}^D$
endowed with independent edge capacities. | Michal Bassan, Shoni Gilboa, Ron Peled | 2023-09-12T17:56:08Z | http://arxiv.org/abs/2309.06437v2 | # Non-constant ground configurations in the disordered ferromagnet
###### Abstract.
The disordered ferromagnet is a disordered version of the ferromagnetic Ising model in which the coupling constants are non-negative quenched random. A ground configuration is an infinite-volume configuration whose energy cannot be reduced by finite modifications. It is a long-standing challenge to ascertain whether the disordered ferromagnet on the \(\mathbb{Z}^{D}\) lattice admits non-constant ground configurations. We answer this affirmatively in dimensions \(D\geq 4\), when the coupling constants are sampled independently from a sufficiently concentrated distribution. The obtained ground configurations are further shown to be translation-covariant with respect to \(\mathbb{Z}^{D-1}\) translations of the disorder.
Our result is proved by showing that the finite-volume interface formed by Dobrushin boundary conditions is localized, and converges to an infinite-volume interface. This may be expressed in purely combinatorial terms, as a result on the fluctuations of certain minimal cutsets in the lattice \(\mathbb{Z}^{D}\) endowed with independent edge capacities.
## 1. Introduction
### Disordered ferromagnet
The Ising model is among the most basic models of statistical physics. On the hypercubic lattice \(\mathbb{Z}^{D}\), it is described by the formal Hamiltonian
\[H^{\text{Ising}}(\sigma):=-\sum_{\{x,y\}\in E(\mathbb{Z}^{D})}\sigma_{x}\sigma _{y} \tag{1.1}\]
on spin _configurations_\(\sigma\colon\mathbb{Z}^{D}\to\{-1,1\}\), where we write \(E(\mathbb{Z}^{D})\) for the edge set of \(\mathbb{Z}^{D}\).
In this paper we study _the disordered ferromagnet_ (or _ferromagnetic random-bond Ising model_), a version of the Ising model described by the formal Hamiltonian
\[H^{\eta}(\sigma):=-\sum_{\{x,y\}\in E(\mathbb{Z}^{D})}\eta_{\{x,y\}}\sigma_{x }\sigma_{y}, \tag{1.2}\]
in which the coupling field \(\eta=(\eta_{e})_{e\in E(\mathbb{Z}^{D})}\) is a _non-negative_ quenched random field. We refer to \(\eta\) as the (quenched) _disorder_ and restrict throughout to the case that it is an _independent_ field. We first consider the _isotropic_ case, in which each \(\eta_{e}\) is sampled independently from the same probability distribution \(\nu\), termed _the disorder distribution_.
### Ground configurations
Our primary interest is in the set of _ground configurations_, or zero-temperature configurations, of the disordered ferromagnet1. Precisely, a configuration \(\sigma\) is said to be a ground configuration for the coupling field \(\eta\) if it holds that \(H^{\eta}(\sigma)\leq H^{\eta}(\sigma^{\prime})\) for every configuration \(\sigma^{\prime}\) which differs from \(\sigma\) in _finitely_ many places. To make sense of the definition note that although \(H^{\eta}(\sigma)\) is ill-defined, as a non-convergent infinite sum, the difference \(H^{\eta}(\sigma)-H^{\eta}(\sigma^{\prime})\) is well defined whenever \(\sigma\) and \(\sigma^{\prime}\) differ in finitely many places.
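Concretely, if \(\sigma^{\prime}\) is obtained from \(\sigma\) by flipping the spins on a finite set \(A\), then \(H^{\eta}(\sigma^{\prime})-H^{\eta}(\sigma)=2\sum\eta_{\{x,y\}}\sigma_{x}\sigma_{y}\), the sum running over edges with exactly one endpoint in \(A\). The following minimal sketch evaluates this difference on a small box with free boundary and an assumed disorder distribution (both choices are illustrative only).

```python
import itertools
import random

random.seed(0)
L = 3                                    # a small box {0,...,L-1}^3, free boundary, for illustration
sites = list(itertools.product(range(L), repeat=3))
edges = [(x, y) for x in sites for y in sites
         if x < y and sum(abs(a - b) for a, b in zip(x, y)) == 1]
eta = {e: random.uniform(0.5, 1.5) for e in edges}          # an assumed disorder distribution

def energy_difference(sigma, flipped):
    """H(sigma with spins on `flipped` reversed) - H(sigma): only edges with exactly
    one endpoint in the flipped set contribute, so the difference is finite and well defined."""
    return 2 * sum(eta[(x, y)] * sigma[x] * sigma[y]
                   for (x, y) in edges if (x in flipped) != (y in flipped))

sigma_plus = {x: 1 for x in sites}                           # the constant +1 configuration
A = {(1, 1, 1), (1, 1, 2)}                                   # a finite modification
print("energy cost of flipping A:", round(energy_difference(sigma_plus, A), 3))
# non-negative for every finite A since eta >= 0
```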
As the coupling constants are non-negative, it is clear that the constant configurations, \(\sigma\equiv+1\) and \(\sigma\equiv-1\), are ground configurations of the disordered ferromagnet. We study the following basic challenge:
**Question 1.1**.: _Does the disordered ferromagnet admit non-constant ground configurations?_
Ergodicity implies that existence of non-constant ground configurations is an event of probability \(0\) or \(1\) for each dimension and disorder distribution \(\nu\). The case of one dimension (\(D=1\)) is simple: non-constant ground configurations exist if and only if \(\nu\) has an atom at the bottom of its support. However, the question has remained open in all higher dimensions. Much attention has been given to the two-dimensional case (\(D=2\)), where it is shown that existence of non-constant ground configurations is equivalent to the existence of infinite bigeodesics in a dual first-passage percolation model (see, e.g., [20, Page 8]). Such bigeodesics are believed not to exist under mild assumptions on the disorder distribution, whence the answer to Question 1.1 is expected to be negative in \(D=2\). However, so far this is only proved under assumptions on the model which are still unverified [1], or for other, related, models possessing a special integrable structure [1, 19, 20, 18]. The higher-dimensional case, and the closely-related question of localization of domain walls of the disordered ferromagnet (see Question 1.6 below), were studied in the physics and mathematics literature by several authors, including Huse-Henley [14], Bovier-Frohlich-Glaus [11], Fisher [15], Bovier-Picco [16], Bovier-Kulske [1, 17], Wehr [20] and Wehr-Wasielak [20]. These studies predict, from non-rigorous methods [14, 15, 16] and from rigorous studies of simplified interface models [16, 17] (see also Section 1.1), an affirmative answer to Question 1.1 in dimensions \(D\geq 4\) for sufficiently concentrated disorder distributions \(\nu\) (see Section 9 for a discussion of other settings). In this work we confirm this prediction.
**Existence result.** To state our results, we need to give a precise meaning to the idea that the disorder distribution \(\nu\) is sufficiently concentrated. To this end, we define a notion of "width" of the distribution, applicable to either of the following two classes of disorder distributions:
1. Compact support: If the distribution \(\nu\) has compact support, we set \[\operatorname{diam}(\nu):=\min\{b-a:\,b\geq a\text{ and }\nu([a,b])=1\}\] (1.3) and otherwise we set \(\operatorname{diam}(\nu):=\infty\).
2. Lipschitz image of a Gaussian: If there exists a Lipschitz continuous \(f:\mathbb{R}\to[0,\infty)\) such that \(\nu=f(N(0,1))\) (i.e., \(\nu\) is the push-forward through \(f\) of the standard normal distribution) then set \[\operatorname{Lip}(\nu):=\inf\{\operatorname{Lip}(f):\,f:\mathbb{R}\to[0, \infty)\text{ is Lipschitz and }\nu=f(N(0,1))\},\] (1.4) with \(\operatorname{Lip}(f):=\sup_{t\neq s}\frac{|f(t)-f(s)|}{|t-s|}\) being the Lipschitz constant of \(f\). Otherwise, set \(\operatorname{Lip}(\nu):=\infty\).
We then define the "width" of the disorder distribution \(\nu\) by
\[\operatorname{wid}(\nu):=\min\{\operatorname{diam}(\nu),\operatorname{Lip}( \nu)\}. \tag{1.5}\]
We further restrict attention to disorder distributions whose support is bounded away from zero and to this end we denote by
\[\min(\operatorname{supp}(\nu)):=\max\{\alpha\colon\nu([\alpha,\infty))=1\} \tag{1.6}\]
the smallest point of the support. Lastly, to avoid issues with uniqueness of ground configurations in _finite volume_, we assume that \(\nu\) has no atoms.
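As a small numerical illustration of the quantities \(\operatorname{diam}\), \(\operatorname{Lip}\), \(\operatorname{wid}\) and \(\min(\operatorname{supp})\) defined above, the sketch below treats two example distributions; both are arbitrary choices made only for the sketch.

```python
import numpy as np

# Example 1: nu uniform on [a, b] has compact support, so wid(nu) <= diam(nu) = b - a.
a, b = 5.0, 6.0
print(f"uniform on [{a}, {b}]: diam = {b - a}, min(supp) = {a}, wid/min(supp) <= {(b - a) / a:.3f}")

# Example 2: nu = f(N(0,1)) with f(t) = a + |t| (a shifted half-Gaussian).
# f is 1-Lipschitz, so Lip(nu) <= 1 and hence wid(nu) <= 1, although diam(nu) is infinite.
rng = np.random.default_rng(0)
samples = a + np.abs(rng.standard_normal(200_000))
print(f"shifted half-Gaussian: empirical min = {samples.min():.4f} (min(supp) = {a}),"
      f" empirical max = {samples.max():.2f} (support unbounded), wid/min(supp) <= {1.0 / a:.3f}")

# The ratio wid(nu)/min(supp(nu)) is the quantity controlled by condition (1.7) below;
# the constant c there is not specified, so only the left-hand side is evaluated here.
```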
The following is our main result.
**Theorem 1.2**.: _There exists \(c>0\) such that the following holds in dimensions \(D\geq 4\). Consider the (isotropic) disordered ferromagnet with disorder distribution \(\nu\). If \(\min(\operatorname{supp}(\nu))>0\), \(\nu\) has no atoms and_
\[\frac{\operatorname{wid}(\nu)}{\min(\operatorname{supp}(\nu))}\leq c\frac{ \sqrt{D}}{\log D} \tag{1.7}\]
_then the disordered ferromagnet admits non-constant ground configurations._
We make two remarks regarding the theorem:
1. Condition (1.7) is our notion of \(\nu\) being sufficiently concentrated. It is invariant to dilations of \(\nu\), as it should be since multiplying all coupling constants \((\eta_{e})\) in the Hamiltonian (1.2) by a positive constant does not change the set of ground configurations of the disordered ferromagnet (or its Gibbs states). The condition becomes easier to satisfy as the dimension \(D\) increases. In particular, the theorem shows that for any non-atomic disorder distribution \(\nu\) satisfying \(\min(\operatorname{supp}(\nu))>0\) and \(\operatorname{wid}(\nu)<\infty\) there exists \(D_{0}(\nu)\geq 4\) such that the disordered ferromagnet with disorder distribution \(\nu\) admits non-constant ground configurations in all dimensions \(D\geq D_{0}(\nu)\). Condition (1.7) also becomes easier to satisfy if a positive constant is added to \(\nu\). More precisely, we see that for any non-atomic distribution \(\mu\) supported in \([0,\infty)\) and satisfying \(\operatorname{wid}(\mu)<\infty\) and any \(D_{0}\geq 4\) there exists \(C_{0}(\mu,D_{0})\geq 0\) such that the following holds: for all \(C\geq C_{0}(\mu,D_{0})\), the disordered ferromagnet with disorder distribution \(\nu=\mu+C\) admits non-constant ground configurations in all dimensions \(D\geq D_{0}\) (where \(\mu+C\) is the push-forward of \(\mu\) by the translation \(x\mapsto x+C\)). As an example, there exists \(C>0\) such that the disordered ferromagnet whose disorder distribution is uniform on the interval \([C,C+1]\) admits non-constant ground configurations in all dimensions \(D\geq 4\).
2. For \(\operatorname{wid}(\nu)\) to be finite, the disorder distribution needs to be either of compact support or a Lipschitz image of a Gaussian. The latter possibility allows some distributions of unbounded support such as the positive part of a Gaussian (plus a constant, so that \(\min(\operatorname{supp}(\nu))>0\)), and can also lead to a smaller value of \(\operatorname{wid}(\nu)\) for some distributions of compact support.
**Covariant ground configurations.** Once the _existence_ of non-constant ground configurations has been established, it is natural to turn to more refined properties. In the non-disordered setup, a key role is played by the notion of _invariant_ Gibbs states (e.g., translation-invariant Gibbs states). The corresponding notion in the disordered setup is that of _covariant_ states (going back at least to the pioneering work of Aizenman-Wehr [1], who further introduced covariant _metastates_). We apply this notion to ground configurations as follows: Let \(G\) be a group of automorphisms of the lattice \(\mathbb{Z}^{D}\) (each \(g\in G\) is composed of translations, rotations and reflections). We naturally define
\[g(\eta)_{\{x,y\}}:=\eta_{g\{x,y\}}=\eta_{\{gx,gy\}}\quad\text{and}\quad g( \sigma)_{x}:=\sigma_{gx} \tag{1.8}\]
for coupling fields \(\eta\) and configurations \(\sigma\). Let \(\mathcal{C}\subset[0,\infty)^{E(\mathbb{Z}^{D})}\) be a measurable set of coupling fields which is \(G\)-invariant in the sense that \(g(\eta)\in\mathcal{C}\) for each automorphism \(g\in G\)
and coupling field \(\eta\in\mathcal{C}\). A \(G\)_-covariant ground configuration_ defined on \(\mathcal{C}\) is a _measurable_ function \(T:\mathcal{C}\to\{-1,1\}^{\mathbb{Z}^{D}}\) which satisfies the following properties for all \(\eta\in\mathcal{C}\):
1. \(T(\eta)\) is a ground configuration for the disordered ferromagnet with coupling field \(\eta\).
2. \(T(g(\eta))=g(T(\eta))\) for each automorphism \(g\in G\).
If, moreover, \(T(\eta)\) is non-constant for all \(\eta\in\mathcal{C}\) we say that \(T\) is a _non-constant_\(G\)-covariant ground configuration defined on \(\mathcal{C}\).
When a disorder distribution \(\nu\) has been specified (in the isotropic setup), we may refer to a \(G\)-covariant ground configuration without reference to its domain \(\mathcal{C}\). It is then understood that the \((\eta_{e})\) are sampled independently from \(\nu\) and that the \(G\)-covariant ground configuration is defined on some \(G\)-invariant \(\mathcal{C}\) satisfying that \(\mathbb{P}(\eta\in\mathcal{C})=1\). The analogous comment applies to the anisotropic setup discussed below, when the two disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) have been specified.
Wehr-Wasielak [20] prove, in all dimensions \(D\geq 1\), that when the disorder distribution \(\nu\) is non-atomic and has finite mean (or, more generally, sufficiently light tail) then there are no non-constant \(\mathbb{Z}^{D}\)-translation-covariant ground configurations (\(\mathbb{Z}^{D}\)-translation-covariant means that the group \(G\) is the translation group of \(\mathbb{Z}^{D}\)). More generally, they prove that \(\mathbb{Z}^{D}\)-translation-covariant ground metastates (i.e., \(G\)-covariant _probability distributions_ on ground configurations) must be supported on the constant configurations.
Our next result shows that non-constant \(G\)-covariant ground configurations may exist already when \(G\) is one rank lower than the full translation group of \(\mathbb{Z}^{D}\).
**Theorem 1.3**.: _Let \(G^{D-1}\) be the group of automorphisms of \(\mathbb{Z}^{D}\) which preserve the last coordinate, i.e.,_
\[G^{D-1}:=\{g\text{ automorphism of }\mathbb{Z}^{D}\colon(gx)_{D}=x_{D}\text{ for all }x=(x_{1},\ldots,x_{D})\}. \tag{1.9}\]
_Under the assumptions of Theorem 1.2 (for a sufficiently small \(c>0\)), there exists a non-constant \(G^{D-1}\)-covariant ground configuration._
### The anisotropic disordered ferromagnet
Our proof naturally extends to an _anisotropic_ setup, in which there is a distinguished lattice axis and the coupling constants of edges in the direction of that axis are sampled from a different disorder distribution. We next describe this setup (distinguishing the \(D\)th axis).
It is sometimes convenient to identify edges of \(\mathbb{Z}^{D}\) with their dual plaquettes, i.e., to identify \(\{x,y\}\in E(\mathbb{Z}^{D})\) with the \((D-1)\)-dimensional plaquette separating the unit cubes centered at \(x\) and \(y\). With this identification in mind we partition the plaquettes into those which are parallel to the hyperplane spanned by the first \(D-1\) coordinate axes and those which are perpendicular to it. Thus we define
\[E^{\parallel}(\mathbb{Z}^{D}) :=\{\{x,y\}\in E(\mathbb{Z}^{D})\colon x-y\in\{-e_{D},e_{D}\}\}, \tag{1.10}\] \[E^{\perp}(\mathbb{Z}^{D}) :=\{\{x,y\}\in E(\mathbb{Z}^{D})\colon x-y\notin\{-e_{D},e_{D}\}\},\]
where \(e_{1},e_{2},\ldots,e_{D}\) are the standard unit vectors in \(\mathbb{Z}^{D}\).
By the _anisotropic disordered ferromagnet_ we mean that the disorder \(\eta\) is sampled independently from two disorder distributions, \(\nu^{\parallel}\) and \(\nu^{\perp}\). Precisely, \((\eta_{e})_{e\in E(\mathbb{Z}^{D})}\) are independent, with \(\eta_{e}\) sampled from \(\nu^{\parallel}\) when \(e\in E^{\parallel}(\mathbb{Z}^{D})\), and sampled from \(\nu^{\perp}\) when \(e\in E^{\perp}(\mathbb{Z}^{D})\). The isotropic setup is recovered when \(\nu^{\parallel}=\nu^{\perp}\).
Our standard assumptions on the disorder distributions are
\[\begin{split}&\min(\operatorname{supp}(\nu^{\parallel}))>0,\quad \operatorname{wid}(\nu^{\parallel})<\infty\quad\text{and $\nu^{\parallel}$ has no atoms},\\ &\min(\operatorname{supp}(\nu^{\perp}))>0,\quad\operatorname{wid} (\nu^{\perp})<\infty.\end{split} \tag{1.11}\]
We do not assume that \(\nu^{\perp}\) has no atoms (and, in fact, the case that \(\nu^{\perp}\) is supported on a single point is of interest as it leads to the disordered Solid-On-Solid model; see Remark 1.12).
In the anisotropic setup, condition (1.7) of Theorem 1.2 is replaced by condition (1.14) below, which is based on the following quantity:
\[\kappa(\nu^{\parallel},\nu^{\perp},d):=\left(\frac{1}{\underline{\alpha}^{ \parallel}\underline{\alpha}^{\perp}}+\frac{1}{d(\underline{\alpha}^{\perp}) ^{2}}\right)\operatorname{wid}(\nu^{\parallel})^{2}+\frac{1}{(\underline{ \alpha}^{\perp})^{2}}\operatorname{wid}(\nu^{\perp})^{2} \tag{1.12}\]
where, for brevity, we denote
\[\underline{\alpha}^{\parallel}:=\min(\operatorname{supp}(\nu^{\parallel})) \quad\text{and}\quad\underline{\alpha}^{\perp}:=\min(\operatorname{supp}(\nu^ {\perp})). \tag{1.13}\]
**Theorem 1.4**.: _There exists \(c_{0}>0\) such that the following holds in dimensions \(D\geq 4\). In the anisotropic disordered ferromagnet, suppose the disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfy (1.11) and_
\[\kappa(\nu^{\parallel},\nu^{\perp},D-1)\left(1+\frac{\underline{\alpha}^{ \perp}}{\underline{\alpha}^{\parallel}}\right)\leq c_{0}\frac{D}{(\log D)^{2}}. \tag{1.14}\]
_Then the anisotropic disordered ferromagnet admits non-constant ground configurations. Moreover, there exists a non-constant \(G^{D-1}\)-covariant ground configuration, where \(G^{D-1}\) is given by (1.9)._
Theorem 1.2 and Theorem 1.3 arise as the special case of Theorem 1.4 in which \(\nu^{\parallel}=\nu^{\perp}\). We thus focus in the sequel on the anisotropic setup.
Similarly to condition (1.7), we make the following remark regarding condition (1.14). For any pair of disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfying \(\min(\operatorname{supp}(\nu^{\parallel}))>0\), \(\min(\operatorname{supp}(\nu^{\perp}))>0\), \(\operatorname{wid}(\nu^{\parallel})<\infty\) and \(\operatorname{wid}(\nu^{\perp})<\infty\), condition (1.14) will be satisfied in either sufficiently high dimensions \(D\), or in any fixed dimension \(D\geq 4\) provided \(\nu^{\perp}\) and \(\nu^{\parallel}\) are replaced by \(\nu^{\perp}+C\) and \(\nu^{\parallel}+C\), respectively, for a sufficiently large \(C\).
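To make condition (1.14) concrete, the following sketch evaluates \(\kappa\) from (1.12) and the left-hand side of (1.14) for an assumed isotropic example in which both disorder distributions are uniform on \([C,C+1]\) (so both widths equal \(1\) and both minima of the support equal \(C\)). The constant \(c_{0}\) on the right-hand side is not specified, so only the shapes of the two sides are compared.

```python
import math

def kappa(wid_par, wid_perp, alpha_par, alpha_perp, d):
    """Eq. (1.12): kappa(nu_par, nu_perp, d)."""
    return ((1.0 / (alpha_par * alpha_perp) + 1.0 / (d * alpha_perp ** 2)) * wid_par ** 2
            + wid_perp ** 2 / alpha_perp ** 2)

def lhs_condition_1_14(wid_par, wid_perp, alpha_par, alpha_perp, D):
    """Left-hand side of condition (1.14) (the constant c0 on the right is not specified)."""
    return kappa(wid_par, wid_perp, alpha_par, alpha_perp, D - 1) * (1.0 + alpha_perp / alpha_par)

# isotropic example: both disorder distributions uniform on [C, C+1], so wid = 1 and min(supp) = C
for D in (4, 10, 100):
    for C in (1.0, 10.0):
        lhs = lhs_condition_1_14(1.0, 1.0, C, C, D)
        print(f"D = {D:3d}, C = {C:5.1f}: LHS of (1.14) = {lhs:8.4f},  D/(log D)^2 = {D / math.log(D)**2:7.2f}")
```

The left-hand side shrinks as \(C\) grows while the right-hand side grows with \(D\), in line with the remark above that the condition becomes easier to satisfy in higher dimensions or after adding a large constant to the disorder.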
**Dobrushin boundary conditions.** Our proof of Theorem 1.4 proceeds through an explicit construction of a non-constant ground configuration. Specifically, we will show that the infinite-volume limit of the model with _Dobrushin boundary conditions_ leads to such a ground configuration. In this section we explain the relevant result, following required notation.
We first generalize the notion of ground configuration to allow boundary conditions. Let \(\Delta\subset\mathbb{Z}^{D}\) and \(\rho\colon\mathbb{Z}^{D}\to\{-1,1\}\). The configuration space in the domain \(\Delta\) with boundary conditions \(\rho\) is
\[\Omega^{\Delta,\rho}:=\{\sigma\colon\mathbb{Z}^{D}\to\{-1,1\}\colon\sigma_{x} =\rho_{x}\text{ for }x\notin\Delta\}. \tag{1.15}\]
A _ground configuration in \(\Omega^{\Delta,\rho}\)_ (for the coupling field \(\eta\)) is any \(\sigma\in\Omega^{\Delta,\rho}\) with the property that \(H^{\eta}(\sigma)\leq H^{\eta}(\sigma^{\prime})\) for every configuration \(\sigma^{\prime}\in\Omega^{\Delta,\rho}\) which differs from \(\sigma\) in finitely many places (noting, again, that the difference \(H^{\eta}(\sigma)-H^{\eta}(\sigma^{\prime})\) is then well defined). It is possible for multiple ground configurations to exist even when \(\Delta\) is finite, though this will not be the case when \(\eta\) is generic in a suitable sense (see Section 4.1.1).
We proceed to discuss ground configurations with Dobrushin boundary conditions. Assume henceforth that the dimension \(D\geq 2\) and introduce the convenient notation
\[d:=D-1. \tag{1.16}\]
We often tacitly use the convention to denote vertices of \(\mathbb{Z}^{D}\) by a pair \((v,k)\) with \(v\in\mathbb{Z}^{d}\) and \(k\in\mathbb{Z}\). By Dobrushin boundary conditions we mean \(\rho^{\mathrm{Dob}}\colon\mathbb{Z}^{D}\to\{-1,1\}\) given by
\[\rho^{\mathrm{Dob}}_{(v,k)}:=\mathrm{sign}(k-1/2) \tag{1.17}\]
where sign denotes the sign function.
The following simple lemma shows that there is a unique ground configuration with Dobrushin boundary conditions on infinite cylinders of the form \(\Lambda\times\mathbb{Z}\) for \(\Lambda\subset\mathbb{Z}^{d}\) finite, under suitable conditions on the disorder distributions.
**Lemma 1.5**.: _In the anisotropic disordered ferromagnet, suppose the disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfy (1.11). For each finite \(\Lambda\subset\mathbb{Z}^{d}\) there exists almost surely a unique ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\), that we denote \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\). Moreover, \(\sigma_{x}^{\eta,\Lambda,\mathrm{Dob}}=\rho^{\mathrm{Dob}}_{x}\) for all but finitely many \(x\in\mathbb{Z}^{D}\)._
The ground configuration \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) necessarily contains an interface separating the \(+1\) boundary values imposed at the "top" of the cylinder \(\Lambda\times\mathbb{Z}\) from the \(-1\) boundary values imposed at the "bottom" of the cylinder. To derive the existence of non-constant ground configurations in the whole of \(\mathbb{Z}^{D}\) from these semi-infinite-volume ground configurations, the fundamental issue to be understood is whether this interface remains localized in the sense of the following question.
**Question 1.6**.: _Is the interface under Dobrushin boundary conditions localized, uniformly in the finite volume \(\Lambda\)? More precisely, is it the case that for each \(\varepsilon>0\) there exists a finite \(\Delta\subset\mathbb{Z}^{D}\) such that_
\[\mathbb{P}(\sigma^{\eta,\Lambda,\mathrm{Dob}}\text{ is constant on }\Delta)< \varepsilon\quad\text{for all finite }\Lambda\subset\mathbb{Z}^{d}. \tag{1.18}\]
For the set \(\Delta\) in the question, one may have in mind, e.g., the set \(\{(0,\dots,0,j)\colon-k\leq j\leq k\}\) with \(k\) large.
A positive answer to Question 1.6 implies a positive answer to Question 1.1. Indeed, by compactness, the distribution of the pair \((\eta,\sigma^{\eta,\Lambda,\mathrm{Dob}})\) admits sub-sequential limits along any sequence of finite volumes \((\Lambda_{n})\) increasing to \(\mathbb{Z}^{d}\). Any such limiting distribution is supported on pairs \((\eta^{\prime},\sigma)\) with \(\eta^{\prime}\) having the distribution of \(\eta\) and \(\sigma\) an infinite-volume ground configuration for \(\eta^{\prime}\). A positive answer to the question ensures that \(\sigma\) is almost surely non-constant.
The answer to Question 1.6 is known to be negative for \(D=2\) in the isotropic setup under mild assumptions on the disorder distribution \(\nu\); this is essentially the Benjamini-Kalai-Schramm midpoint problem [1], resolved conditionally by Damron-Hanson [10], unconditionally by Ahlberg-Hoffman [1] and quantitatively by Dembin, Elboim and the third author [1]. The following theorem, our main result on Dobrushin interfaces, proves that the answer is positive in dimensions \(D\geq 4\) for sufficiently concentrated disorder distributions. Further discussion on Question 1.6, including the possibility of a roughening transition in the disorder concentration, is in Section 9.
**Theorem 1.7** (Localization of Dobrushin interface).: _There exist \(c_{0},c>0\) such that the following holds in dimensions \(D\geq 4\). In the anisotropic disordered ferromagnet, suppose the disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfy (1.11) and condition (1.14) holds (with the constant \(c_{0}\)). Then for all finite \(\Lambda\subset\mathbb{Z}^{d}\) and all \((v,k)\in\mathbb{Z}^{D}\),_
\[\mathbb{P}\left(\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(v,k)}\neq\rho^{\mathrm{ Dob}}_{(v,k)}\right)\leq\exp\left(-\frac{c}{d^{2}\kappa}|k|^{\frac{d-2}{d-1}}\right) \tag{1.19}\]
_with \(\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d)\) defined in (1.12). Moreover, for small \(k\) we have an improved dependence on dimension: if \(|k|<2^{d}\) then_
\[\mathbb{P}\left(\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(v,k)}\neq\rho^{\mathrm{Dob }}_{(v,k)}\right)\leq\exp\left(-\frac{c}{\kappa}|k|\right). \tag{1.20}\]
We add that the theorem resolves a version of [10, Open question 1] (see Section 1.1.5). We also remark that the power of \(|k|\) in (1.19) is probably not optimal and may be increased with further optimization of the proof.
**Remark 1.8** (Combinatorial interpretation).: _Endow the edges of \(\mathbb{Z}^{D}\) with independent, non-negative weights from a distribution \(\nu\). Let \(\Lambda\subset\mathbb{Z}^{D-1}\) finite. We study the minimal (edge) cutset in \(\Lambda\times\mathbb{Z}\) separating the parts of the boundary of \(\Lambda\times\mathbb{Z}\) above and below the plane \(\Lambda\times\{0\}\). More precisely, writing_
\[\begin{split} B^{+}&:=\{(v,k)\in\Lambda^{c}\times \mathbb{Z}\colon k\geq 0\},\\ B^{-}&:=\{(v,k)\in\Lambda^{c}\times\mathbb{Z} \colon k<0\},\end{split} \tag{1.21}\]
_we study the minimal cutset separating \(B^{+}\) and \(B^{-}\) (the cutset may only differ from the flat plane above \(\Lambda\)). Our result is proved in dimensions \(D\geq 4\) when \(\nu\) satisfies (1.7) with a small \(c>0\). It shows that the minimal cutset is localized close to the plane \(\Lambda\times\{0\}\), in the sense that for any \(v\in\Lambda\) the probability that the cutset contains an edge incident to \((v,k)\) decays as a stretched exponential in \(|k|\). This holds uniformly in the finite set \(\Lambda\)._
_More generally, the edge weights may be sampled independently from distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) as explained after (1.10), and the result is proved under (1.11) and under condition (1.14) with a small \(c_{0}>0\)._
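The following self-contained sketch illustrates this combinatorial formulation numerically (it is not the construction used in the proofs). It builds a small box \(\Lambda\times\{-H,\dots,H-1\}\) in \(\mathbb{Z}^{3}\) (the theorems concern \(D\geq 4\); three dimensions are used here only to keep the computation small), assigns independent edge weights from an assumed concentrated distribution, merges the boundary vertices of \(B^{+}\) and \(B^{-}\) into a source and a sink, and computes the minimal cut with networkx.

```python
import itertools
import random
import networkx as nx

random.seed(0)

L, H = 4, 6          # base Lambda = {-L,...,L}^2, heights k in {-H,...,H-1}; D = 3 here
inside = {(v1, v2, k)
          for v1, v2 in itertools.product(range(-L, L + 1), repeat=2)
          for k in range(-H, H)}

def add_capacity(G, a, b, w):
    """Add capacity w to the directed edge (a, b), merging parallel edges."""
    if G.has_edge(a, b):
        G[a][b]["capacity"] += w
    else:
        G.add_edge(a, b, capacity=w)

G = nx.DiGraph()
steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
for (v1, v2, k) in inside:
    for (d1, d2, d3) in steps:
        y = (v1 + d1, v2 + d2, k + d3)
        if y in inside:
            if not G.has_edge((v1, v2, k), y):       # add each lattice edge once, in both directions
                w = random.uniform(0.9, 1.1)         # edge weight from nu (an assumed concentrated law)
                add_capacity(G, (v1, v2, k), y, w)
                add_capacity(G, y, (v1, v2, k), w)
        else:
            terminal = "s" if y[2] >= 0 else "t"     # merge the outside vertex into B+ ("s") or B- ("t")
            w = random.uniform(0.9, 1.1)
            add_capacity(G, terminal, (v1, v2, k), w)
            add_capacity(G, (v1, v2, k), terminal, w)

cut_value, (S, T) = nx.minimum_cut(G, "s", "t")
cut_edges = [(x, y) for x in S for y in G.successors(x) if y in T]
heights = sorted({abs(z[2]) for edge in cut_edges for z in edge if z not in ("s", "t")})
print("minimal cut value:", round(cut_value, 3))
print("|k| of vertices touched by the cut:", heights)
```

For weights this concentrated, the reported \(|k|\) values typically stay close to \(0\), i.e. the minimal cutset hugs the plane \(\Lambda\times\{0\}\), which is the behaviour quantified by Theorem 1.7.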
As explained above, Theorem 1.7 already suffices to deduce the existence of non-constant ground configurations. To go further and prove the existence of a non-constant _covariant_ ground configuration we employ the next result, which proves the almost sure convergence of the semi-infinite-volume ground configurations with Dobrushin boundary conditions to an infinite-volume limit.
**Theorem 1.9** (Convergence).: _Under the assumptions of Theorem 1.7 (for a sufficiently small \(c_{0}>0\)) there exists a configuration \(\sigma^{\eta,\mathrm{Dob}}\) such that for every fixed sequence \((\Lambda_{n})\) of finite subsets of \(\mathbb{Z}^{d}\) satisfying that \(\Lambda_{n}\supset\{-n,-n+1,\ldots,n\}^{d}\) for each \(n\), almost surely,_
_for each \(v\in\mathbb{Z}^{d}\) there exists \(n_{0}\) such that \(\sigma^{\eta,\Lambda_{n},\mathrm{Dob}}|_{\{v\}\times\mathbb{Z}}\equiv\sigma^ {\eta,\mathrm{Dob}}|_{\{v\}\times\mathbb{Z}}\) for all \(n\geq n_{0}.\)_
(1.22)
The following is deduced from Theorem 1.7 and Theorem 1.9.
**Corollary 1.10**.: _There exists \(c>0\) such that the following holds under the assumptions of Theorem 1.7 (for a sufficiently small \(c_{0}>0\)). The configuration \(\sigma^{\eta,\mathrm{Dob}}\) of Theorem 1.9 (possibly modified on a set of zero probability) is a non-constant \(G^{D-1}\)-covariant ground configuration, where \(G^{D-1}\) is given by (1.9). In addition, for all \((v,k)\in\mathbb{Z}^{D}\),_
\[\mathbb{P}\left(\sigma^{\eta,\mathrm{Dob}}_{(v,k)}\neq\rho^{\mathrm{Dob}}_{(v,k)}\right)\leq\exp\left(-\frac{c}{d^{2}\kappa}|k|^{\frac{d-2}{d-1}}\right) \tag{1.23}\]
_with \(\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d)\) defined in (1.12). Moreover, for small \(k\) we have an improved dependence on dimension: if \(|k|<2^{d}\) then_
\[\mathbb{P}\left(\sigma^{\eta,\mathrm{Dob}}_{(v,k)}\neq\rho^{\mathrm{Dob}}_{(v,k)}\right)\leq\exp\left(-\frac{c}{\kappa}|k|\right). \tag{1.24}\]
Theorem 1.4 is an immediate consequence of the last corollary.
Our techniques further allow to quantify the rate of convergence in Theorem 1.9 and to bound the rate of correlation decay in the infinite-volume configuration \(\sigma^{\eta,\mathrm{Dob}}\). We record these in our final result (this result will not be needed elsewhere in the paper).
**Theorem 1.11**.: _There exist \(C,c>0\) such that the following holds under the assumptions of Theorem 1.7 (for a sufficiently small \(c_{0}>0\)). Let_
\[c(\nu^{\parallel},\nu^{\perp},d):=\frac{c}{\kappa d^{2}}\left(\min\left\{ \frac{\underline{\alpha}^{\parallel}}{\underline{\alpha}^{\perp}},1\right\}\right)^{\frac{d-2}{d-1}} \tag{1.25}\]
_using the notation (1.12) (with \(\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d)\)) and (1.13). Let also \(\Lambda(k):=\{-k,\ldots,k\}^{d}\) for integer \(k\geq 0\)._
1. Rate of convergence to infinite-volume limit_: Let_ \(L_{1}>L_{0}\geq 0\) _integer. Let_ \(\Lambda\subset\mathbb{Z}^{d}\) _be a finite subset containing_ \(\Lambda(L_{1})\)_. Then_ \[\mathbb{P}\left(\sigma^{\eta,\mathrm{Dob}}|_{\Lambda(L_{0})\times\mathbb{Z}} \not\equiv\sigma^{\eta,\Lambda,\mathrm{Dob}}|_{\Lambda(L_{0})\times\mathbb{Z} }\right)\leq C\exp\left(-c(\nu^{\parallel},\nu^{\perp},d)\left(L_{1}-L_{0} \right)^{\frac{d-2}{d}}\right).\] (1.26)
2. Correlation decay in infinite-volume limit_: Let_ \(u,v\in\mathbb{Z}^{d}\) _and_ \(L\geq 0\) _integer, and suppose_ \(\|u-v\|_{\infty}>2L\)_. Let_ \(f,g:\{-1,1\}^{\Lambda(L)\times\mathbb{Z}}\to[-1,1]\) _be measurable. Then_ \[\mathrm{Cov}(f(\sigma^{\eta,\mathrm{Dob}}|_{(u+\Lambda(L))\times \mathbb{Z}}),g(\sigma^{\eta,\mathrm{Dob}}|_{(v+\Lambda(L))\times\mathbb{Z}}))\\ \leq C\exp\left(-c(\nu^{\parallel},\nu^{\perp},d)\left(\|u-v\|_{ \infty}-2L\right)^{\frac{d-2}{d}}\right)\] (1.27) _where_ \(\mathrm{Cov}(X,Y)\) _denotes the covariance of the random variables_ \(X,Y\)_._
3. Tail triviality in infinite-volume limit_: The process_ \((\eta,\sigma^{\eta,\mathrm{Dob}})\) _is_ \(G^{d}\)_-invariant. Moreover, define the_ \(\mathbb{Z}^{d}\)_-tail sigma algebra of_ \((\eta,\sigma^{\eta,\mathrm{Dob}})\) _as the intersection of the sigma algebras_ \((\mathcal{T}_{n})\)_, where_ \(\mathcal{T}_{n}\) _is generated by_ \(\sigma^{\eta,\mathrm{Dob}}|_{(\mathbb{Z}^{d}\setminus\Lambda(n))\times\mathbb{Z}}\) _and by_ \((\eta_{e})\) _for the edges_ \(e=\{(u,k),(v,\ell)\}\) _with_ \(\{u,v\}\cap(\mathbb{Z}^{d}\setminus\Lambda(n))\neq\emptyset\)_. Then the_ \(\mathbb{Z}^{d}\)_-tail sigma algebra of_ \((\eta,\sigma^{\eta,\mathrm{Dob}})\) _is trivial. In particular,_ \((\eta,\sigma^{\eta,\mathrm{Dob}})\) _is ergodic with respect to the group of translations in the first_ \(d\) _coordinates._
Theorem 1.7 is proved in Section 4.3, Theorem 1.9, Corollary 1.10 and Theorem 1.11 are proved in Section 8 and Lemma 1.5 is proved in Appendix B. The next section discusses related works. An overview of our proof is provided in Section 1.2. Section 9 provides further discussion and a selection of open problems and conjectures.
### Background
#### 1.1.1. Localization predictions
The domain walls of the disordered Ising ferromagnet were studied by Huse-Henley [10], Bovier-Frohlich-Glaus [1] and Fisher [13] using methods which are not mathematically rigorous. They predicted that the interface with Dobrushin boundary conditions is rough in dimensions \(2\leq D\leq 3\), is localized in dimensions \(D\geq 5\), and, for sufficiently concentrated disorder, is also localized in dimension \(D=4\).
#### 1.1.2. Disordered Solid-On-Solid model
The following simplified model for the interface under Dobrushin boundary conditions is considered by [11, 12]: The interface is described by a (height) function \(\varphi:\mathbb{Z}^{d}\to\mathbb{Z}\), whose energy is given by the formal "disordered Solid-On-Solid (SOS)" Hamiltonian
\[H^{\text{SOS},\zeta}(\varphi):=\sum_{\{u,v\}\in E(\mathbb{Z}^{d})}|\varphi_{u}- \varphi_{v}|+\sum_{v\in\mathbb{Z}^{d}}\zeta_{v,\varphi_{v}} \tag{1.28}\]
where \(\zeta:\mathbb{Z}^{d}\times\mathbb{Z}\to\mathbb{R}\) is an environment describing the quenched disorder. This model is obtained from the disordered ferromagnet with two approximations: (i) It is assumed that the interface in \(D=d+1\) dimensions has _no overhangs_, i.e., it may be described by a height function above a \(d\)-dimensional base, (ii) all the coupling constants corresponding to perpendicular plaquettes (i.e., all the \(\eta_{\{u,v\}}\) for \(\{u,v\}\in E^{\perp}(\mathbb{Z}^{D})\)) are set equal (with the normalization of (1.28) they are set equal to \(1/2\) while \(\zeta_{v,k}:=2\eta_{\{v,k\},\{v,k+1\}}\)).
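A minimal zero-temperature illustration of (1.28) is sketched below: it brute-forces the ground configuration of the disordered SOS Hamiltonian on a tiny one-dimensional base with flat (height-\(0\)) boundary conditions, truncated heights and an assumed Gaussian disorder; all of these choices are assumptions made only to keep the enumeration small.

```python
import itertools
import random

random.seed(1)

base = range(4)                 # a tiny base Lambda = {0,...,3} in d = 1, flat (height 0) boundary
K = 2                           # truncate heights to {-K,...,K} so the enumeration stays small
heights = range(-K, K + 1)
zeta = {(v, k): random.gauss(0.0, 0.3) for v in base for k in heights}   # assumed Gaussian disorder

def energy(phi):
    """Finite-volume version of (1.28) with phi fixed to 0 outside the base."""
    full = [0] + list(phi) + [0]
    gradient = sum(abs(full[i + 1] - full[i]) for i in range(len(full) - 1))
    disorder = sum(zeta[(v, phi[v])] for v in base)
    return gradient + disorder

ground = min(itertools.product(heights, repeat=len(base)), key=energy)
print("ground configuration:", ground, " energy:", round(energy(ground), 3))
```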
**Remark 1.12**.: _In fact, as part of the analysis of this paper, we prove the (possibly surprising) fact that at zero temperature the no overhangs approximation (i) is actually a consequence of the equal perpendicular couplings approximation (ii) (see Lemma 3.1). Thus, our main results for the anisotropic disordered ferromagnet cover also the disordered SOS model (1.28), as the special case in which the disorder distribution \(\nu^{\perp}\) is a delta measure at \(1/2\) (however, in this special case our proof may be greatly simplified due to the no overhangs property)._
A mathematically-rigorous study of the disordered SOS model (1.28) was carried out by Bovier-Kulske [1, 1], following an earlier analysis by Bovier-Picco [1] of a hierarchical version of the model (see also [1, 1]). It was shown in [1] that in each dimension \(d\geq 3\), at low temperature (including zero temperature), when the \((\zeta_{v,\cdot})_{v\in\mathbb{Z}^{d}}\) are independent and identically distributed, the sequence \(k\mapsto\zeta_{v,k}\) is stationary for each \(v\) (this is more general than being independent!) and the \(\zeta_{v,k}\) are sufficiently concentrated, the finite-volume Gibbs measures of the Hamiltonian (1.28) converge, on a non-random sequence of volumes, to a limiting infinite-volume Gibbs measure, \(\zeta\)-almost-surely. Some control of the fluctuations of the infinite-volume Gibbs measure, at least at zero temperature, is also provided [1, Proposition 3.6]. These results for the disordered SOS model (1.28) thus have the flavor of our Theorem 1.7 and Theorem 1.9, though they, on the one hand, apply also at low positive temperature and allow for more general disorder distributions and, on the other hand, do not quantify the dependence on the dimension \(d\) (i.e., their sufficient concentration requirement may become more stringent as \(d\) increases).
The behavior of the disordered SOS model (1.28) in the low dimensions \(d=1,2\) was studied in [1] (using a method of Aizenman-Wehr [1]), who proved a result showing a form of delocalization in these dimensions. Specifically, they prove that, at all finite non-zero temperatures, when the \((\zeta_{v,k})\) are independently sampled from a distribution with positive variance which either has no isolated atoms or has compact support, the model does not admit translation-covariant and coupling-covariant metastates. Here, a metastate is a measurable mapping from \(\zeta\) to probability distributions over (infinite-volume) Gibbs measures of the model, and the coupling covariance requirement is that, for each finite \(\Lambda\subset\mathbb{Z}^{d}\), the metastate changes in a natural way under modification of \((\zeta_{v,k})_{v\in\Lambda,k\in\mathbb{Z}}\).
#### 1.1.3. Long-range order in the random-field Ising model
The localization proof of [10] in dimensions \(d\geq 3\) is closely tied to earlier developments on the problem of long-range order in the random-field Ising model (see (1.29) below). Imry-Ma [17] predicted that at low temperatures and weak disorder in dimensions \(d\geq 3\), the random-field Ising model retains the ferromagnetic ordered phase of the pure Ising model (and that this does not occur when \(d=2\)). The prediction for \(d=3\) was initially challenged in the physics literature (e.g., [11]), but received support in works of Chalker [12] and Fisher-Frohlich-Spencer [13] and was finally confirmed in the breakthrough works of Imbrie [14, 15] and Bricmont-Kupiainen [10, 11]. The proof of [10] adapts the proof technique of [10]. Recently, a short proof of the existence of an ordered phase in the random-field Ising model was found by Ding-Zhuang [11]. In this paper, we use an adaptation of the Ding-Zhuang argument as one of the ingredients in our proof of localization of the Dobrushin interface in the disordered Ising ferromagnet (see Section 1.2 below).
#### 1.1.4. Law of large numbers and large deviations of the ground energy
In dimensions \(D>2\), following initial work by Kesten [14] (and [1, 1] in the case of \(\{0,1\}\) coupling constants), there have been significant advances in the understanding of the law of large numbers and large deviations of the ground energy of the disordered ferromagnet (or the maximal flow in a dual network) in various settings; see also the study of the limit shape in first-passage percolation, by Basdevant-Gouere-Theret [1] (for \(\{0,1\}\) passage times) and by Dembin-Elboim-Peled [1, Theorem 1.5].
#### 1.1.7. Number of ground configurations
Wehr [20] proved that the number of ground configurations of the disordered ferromagnet is two or infinity. The result applies for coupling fields sampled independently from a non-atomic disorder distribution with finite mean. It thus follows from our main result, Theorem 1.2, that there are infinitely many ground configurations under the assumptions there (see also Section 9.3).
#### 1.1.8. Translation-covariant ground metastates
As previously mentioned, Wehr-Wasielak [20] proved that \(\mathbb{Z}^{D}\)-translation-covariant ground metastates must be supported on the constant configurations when the disorder distribution \(\nu\) is non-atomic and has finite mean (or, more generally, has sufficiently light tail). This result is applied in the discussion in Section 9.3.
#### 1.1.9. The Dobrushin interface in other settings
Our localization result, Theorem 1.7, extends the seminal work of Dobrushin [13] to the setting of the zero-temperature disordered ferromagnet. Dobrushin's result has previously been extended to various (non-disordered) settings, of which we mention the Widom-Rowlinson model [1, 2], lattice gauge theories [1] (see also Section 9.5), the Falicov-Kimball model [14], percolation and the random-cluster model [1, 15] and the study of fine properties of the Dobrushin interface of the Ising model [1]. Alternative approaches for showing the existence of non-translation-invariant Gibbs states include the correlation inequality approach of van Beijeren [20] and the restricted-reflection-positivity approach of Shlosman-Vignaud [21]. These alternative approaches do not seem to be applicable in our disordered setting.
### Overview of the proof
In this section we overview the proof of the localization of the Dobrushin interface stated in Theorem 1.7.
The basic idea is to synthesize Dobrushin's approach [13] for proving the localization of the Dobrushin interface in the _pure_ (i.e., non-disordered) Ising model with the simple method for proving long-range order in the random-field Ising model presented by Ding-Zhuang [13]. As it turns out, difficulties arise in this synthesis which necessitate the development of additional tools.
#### 1.2.1. The random-field Ising model
The random-field Ising model (RFIM) is the model on \(\sigma:\mathbb{Z}^{d}\to\{-1,1\}\) given by the formal Hamiltonian
\[H^{\mathrm{RFIM},\zeta}(\sigma):=-\sum_{\{u,v\}\in E(\mathbb{Z}^{d})}\sigma_{u }\sigma_{v}-\lambda\sum_{v}\zeta_{v}\sigma_{v} \tag{1.29}\]
where \((\zeta_{v})\) are independently sampled from the standard Gaussian distribution (more general distributions may be allowed) and \(\lambda>0\) denotes the random-field strength. Imbrie [17, 18] established long-range order in the RFIM in dimensions \(d\geq 3\) at zero temperature and small \(\lambda\) (and Bricmont-Kupiainen [1, 16, 15] proved the analogous fact at low, positive temperatures). It is instructive to begin our overview by describing Ding-Zhuang's [13] short approach to this (while [13] present their argument at low, positive temperatures, below we describe its zero-temperature version). Let \(\sigma^{\zeta,L}\) be the ground configuration of the RFIM in \(\{-L,\dots,L\}^{d}\) with \(+1\) boundary conditions. Let us show that it is unlikely that there exists some \(A\subset\mathbb{Z}^{d}\), connected with connected complement and containing the origin,
such that \(\sigma^{\zeta,L}\equiv-1\) (\(\sigma^{\zeta,L}\equiv+1\)) on the interior (exterior) vertex boundary of \(A\). Suppose \(A\) is such a set. Define a modified configuration and random field by
\[\sigma_{v}^{\zeta,L,A}:=\begin{cases}-\sigma_{v}^{\zeta,L}&v\in A \\ \sigma_{v}^{\zeta,L}&v\notin A\end{cases}, \tag{1.30}\] \[\zeta_{v}^{A}:=\begin{cases}-\zeta_{v}&v\in A\\ \zeta_{v}&v\notin A\end{cases}.\]
The discrete \(\pm 1\) symmetry of the RFIM then leads to the energy gap
\[H^{\mathrm{RFIM},\zeta}(\sigma^{\zeta,L})-H^{\mathrm{RFIM},\zeta^{A}}(\sigma^ {\zeta,L,A})\geq 2|\partial A| \tag{1.31}\]
where \(\partial A\) is the edge boundary of \(A\). Indeed, the random-field terms are unchanged, since \(\zeta^{A}_{v}\sigma^{\zeta,L,A}_{v}=\zeta_{v}\sigma^{\zeta,L}_{v}\) for every \(v\); the interaction terms of edges with both endpoints in \(A\), or both outside \(A\), are likewise unchanged; and each of the \(|\partial A|\) edges crossing \(\partial A\) changes from unsatisfied (recall that \(\sigma^{\zeta,L}\equiv-1\) on the interior and \(\equiv+1\) on the exterior vertex boundary of \(A\)) to satisfied, lowering the energy by \(2\). This implies that also
\[\mathrm{GE}^{\mathrm{RFIM},\zeta,L}-\mathrm{GE}^{\mathrm{RFIM},\zeta^{A},L} \geq 2|\partial A| \tag{1.32}\]
where \(\mathrm{GE}^{\mathrm{RFIM},\zeta,L}:=H^{\mathrm{RFIM},\zeta}(\sigma^{\zeta,L})\) denotes the energy of the ground configuration in the random field \(\zeta\). The argument will be (eventually) concluded by proving that, for each \(\ell\),
\[\mathbb{P}\left(\begin{matrix}\exists A\subset\mathbb{Z}^{d}\text{ connected with connected complement, }0\in A\text{ and }|\partial A|=\ell,\\ |\operatorname{GE}^{\mathrm{RFIM},\zeta,L}-\operatorname{GE}^{\mathrm{RFIM},\zeta^{A},L}|\geq 2|\partial A|\end{matrix}\right)\leq C_{d}\exp\left(-c_{d}\frac{\ell^{\frac{d-2}{d-1}}}{\lambda^{2}}\right) \tag{1.33}\]
(with \(C_{d},c_{d}>0\) depending only on \(d\)). To understand (1.33) better, let us first explain a version of it (see (1.36) below) for a fixed _deterministic_ set \(A\subset\mathbb{Z}^{d}\). First, observe that \(\mathrm{GE}^{\mathrm{RFIM},\zeta,L}\) satisfies the conditional concentration inequality (see Theorem 2.1 below)
\[\mathbb{P}\left(\big{|}\operatorname{GE}^{\mathrm{RFIM},\zeta,L}-\mathbb{E}( \mathrm{GE}^{\mathrm{RFIM},\zeta,L}|\,\zeta|_{A^{c}})\big{|}\geq t\,|\,\zeta|_ {A^{c}}\right)\leq C\exp\left(-c\frac{t^{2}}{\lambda^{2}|A|}\right) \tag{1.34}\]
(with \(C,c>0\) absolute constants). Next, note that \(\zeta^{A}\) and \(\zeta\) have the same distribution, even conditioned on \(\zeta|_{A^{c}}\), whence the same is true for \(\mathrm{GE}^{\mathrm{RFIM},\zeta^{A},L}\) and \(\mathrm{GE}^{\mathrm{RFIM},\zeta,L}\). Consequently, the _difference of ground energies_ satisfies the same concentration inequality (with different constants),
\[\mathbb{P}\left(\big{|}\operatorname{GE}^{\mathrm{RFIM},\zeta,L}-\mathrm{GE} ^{\mathrm{RFIM},\zeta^{A},L}\big{|}\geq t\right)\leq C\exp\left(-c\frac{t^{2} }{\lambda^{2}|A|}\right). \tag{1.35}\]
Thus, using the isoperimetric inequality \(|A|\leq C_{d}|\partial A|^{d/(d-1)}\),
\[\mathbb{P}(\mathrm{GE}^{\mathrm{RFIM},\zeta,L}-\mathrm{GE}^{\mathrm{RFIM}, \zeta^{A},L}\geq 2|\partial A|)\leq C\exp\left(-c\frac{|\partial A|^{2}}{ \lambda^{2}|A|}\right)\leq C\exp\left(-c_{d}\frac{|\partial A|^{\frac{d-2}{d- 1}}}{\lambda^{2}}\right). \tag{1.36}\]
Such an estimate, however, does not suffice to establish (1.33) via a union bound, since the number of subsets \(0\in A\subset\mathbb{Z}^{d}\), connected with connected complement, which have \(|\partial A|=\ell\) is at least \(c_{d}\exp(C_{d}\ell)\) (see [1, Theorem 6 and Theorem 7] and Appendix A). Instead, the estimate (1.33) is derived from the concentration bound (1.35) using a coarse-graining technique (or chaining argument) introduced by Fisher-Frohlich-Spencer [10] in a closely-related context. To this end one defines \(A_{N}\), the \(N\)-coarse-grained version of \(A\subset\mathbb{Z}^{d}\), as the union of all cubes \(B\subset\mathbb{Z}^{d}\), of the form \(v+\{0,1,\ldots,N-1\}^{d}\) with \(v\in N\mathbb{Z}^{d}\), which satisfy
\(|A\cap B|\geq\frac{1}{2}|B|\). Then, one writes the chaining expansion
\[\mathrm{GE}^{\mathrm{RFIM},\zeta,L}-\mathrm{GE}^{\mathrm{RFIM},\zeta^{A},L}=\sum_{k=0}^{K-1}\left(\mathrm{GE}^{\mathrm{RFIM},\zeta^{A_{2^{k+1}}},L}-\mathrm{GE}^{\mathrm{RFIM},\zeta^{A_{2^{k}}},L}\right) \tag{1.37}\]
where \(K\) is chosen sufficiently large that \(A_{2^{K}}=\emptyset\) (so that \(\zeta^{A_{2^{K}}}=\zeta\)), and noting that \(A_{2^{0}}=A_{1}=A\). A version of the concentration inequality (1.35) is available (with the same proof) for any two finite \(A^{\prime},A^{\prime\prime}\subset\mathbb{Z}^{d}\),
\[\mathbb{P}\left(\big{|}\,\mathrm{GE}^{\mathrm{RFIM},\zeta^{A^{\prime}},L}- \mathrm{GE}^{\mathrm{RFIM},\zeta^{A^{\prime\prime}},L}\,\big{|}\geq t\right) \leq C\exp\left(-c\frac{t^{2}}{\lambda^{2}|A^{\prime}\Delta A^{\prime\prime}|} \right). \tag{1.38}\]
The idea of the coarse-graining technique is to apply the concentration bound (1.38) to each of the terms on the right-hand side of (1.37) (with suitable \(t_{k}\) summing to \(2|\partial A|\)), using a union bound over the possible \(A_{2^{k}}\) and bounds for \(|A_{2^{k}}\Delta A_{2^{k+1}}|\), for \(0\leq k\leq K-1\). The gain over the direct application (1.36) of (1.35) lies in the smaller denominator in the right-hand side of the concentration inequality (1.38) compared to (1.35), and the fact that the number of possibilities for \(A_{2^{k}}\) is greatly reduced as \(k\) increases (roughly, \(|\partial A_{N}|\approx|\partial A|\) so that \(A_{N}\) may be regarded as a set with surface volume \(|\partial A|/N^{d-1}\) after shrinking the lattice \(\mathbb{Z}^{d}\) by a factor \(N\). This is complicated, however, by the fact that \(A_{N}\) need not be connected or have connected complement).
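For concreteness, the following short Python sketch computes the \(N\)-coarse-graining \(A_{N}\) of a finite set exactly as defined above; the function name and the data format (sets of coordinate tuples) are ours and purely illustrative.

```python
from itertools import product


def coarse_grain(A, N, d):
    """Return the N-coarse-graining A_N of a finite set A of points of Z^d:
    the union of all cubes v + {0,...,N-1}^d, with v in N*Z^d, that contain
    at least half of their points in A."""
    A = set(map(tuple, A))
    corners = {tuple(N * (x // N) for x in p) for p in A}  # cubes meeting A
    A_N = set()
    for v in corners:
        cube = [tuple(v[i] + o[i] for i in range(d))
                for o in product(range(N), repeat=d)]
        if 2 * sum(p in A for p in cube) >= len(cube):
            A_N.update(cube)
    return A_N


# In d = 2: a single point is erased by 2-coarse-graining, a full 2x2 block is kept.
assert coarse_grain({(0, 0)}, 2, 2) == set()
assert coarse_grain({(0, 0), (0, 1), (1, 0), (1, 1)}, 2, 2) == {(0, 0), (0, 1), (1, 0), (1, 1)}
```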
#### 1.2.2. The disordered Solid-On-Solid model
It is instructive to first try and adapt the above approach to the disordered SOS model (1.28), before discussing the disordered ferromagnet. The goal there is to recover a version of the result of [1], showing that in dimensions \(d\geq 3\) when, say, the disorder (\(\zeta_{v,k}\)) is given by independent Gaussians with _small variance_, then there is localization of the ground configuration \(\varphi^{\zeta,L}\) in \(\{-L,\ldots,L\}^{d}\) with zero boundary values. To this end, it suffices to show that it is unlikely that there exists an integer \(m\geq 0\) and some \(A\subset\mathbb{Z}^{d}\), connected with connected complement and containing the origin, such that \(\varphi\geq m+1\) (\(\varphi\leq m\)) on the interior (exterior) vertex boundary of \(A\). We have checked (but do not provide full details here) that a proof may be carried out very similarly to the RFIM case with the main difference being that the discrete \(\pm 1\) symmetry of the RFIM is now replaced by the discrete translation symmetry of adding an integer constant to \(\varphi\). Thus, instead of (1.30), a new configuration and disorder are defined by
\[\varphi^{\zeta,L,A}:=\varphi^{\zeta,L}-1_{A}, \tag{1.39}\] \[\zeta^{A}_{(v,k)}:=\begin{cases}\zeta_{(v,k+1)}&v\in A\\ \zeta_{(v,k)}&v\notin A\end{cases}\]
(where \(1_{A}\) is the indicator function of \(A\)), leading to the energy gap
\[H^{\mathrm{SOS},\zeta}(\varphi^{\zeta,L})-H^{\mathrm{SOS},\zeta^{A}}(\varphi^ {\zeta,L,A})\geq|\partial A|. \tag{1.40}\]
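Let us briefly indicate the computation behind (1.40). Write \(\varphi=\varphi^{\zeta,L}\) and recall that \(\varphi\geq m+1\) on the interior vertex boundary of \(A\) while \(\varphi\leq m\) on its exterior vertex boundary. The disorder terms are unchanged: for \(v\in A\), \(\zeta^{A}_{(v,\varphi_{v}-1)}=\zeta_{(v,\varphi_{v})}\), while for \(v\notin A\) neither the height nor the disorder changes. The gradient terms \(|\varphi_{u}-\varphi_{v}|\) of edges with both endpoints in \(A\), or both outside \(A\), are unchanged as well. Finally, for each of the \(|\partial A|\) edges \(\{u,v\}\) with \(u\in A\) and \(v\notin A\) we have \(\varphi_{u}\geq m+1>m\geq\varphi_{v}\), so that \(|(\varphi_{u}-1)-\varphi_{v}|=|\varphi_{u}-\varphi_{v}|-1\); summing over these edges gives (1.40).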
While we do not enter into further detail, we remind that the disordered SOS model may be seen as a special case of the anisotropic disordered ferromagnet; see Remark 1.12.
The above sketch for the disordered SOS model may also be adapted to low, positive temperatures, similarly to the argument of [10]. However, such an extension for the disordered ferromagnet requires additional ideas (see Section 9.2 for further discussion).
#### 1.2.3. The disordered ferromagnet
We proceed to overview our approach to proving Theorem 1.7 - localization of the Dobrushin interface in the disordered ferromagnet. While the approach adapts several of the ideas appearing above, it is significantly more complicated, essentially due to the fact that the Dobrushin interface may have overhangs (i.e., have several parallel interface plaquettes in the same "column"). Below we describe the obstacles that arise and our methods for overcoming them.
We work in dimension \(D=d+1\geq 4\) under the assumptions of Theorem 1.7. For finite \(\Lambda\subset\mathbb{Z}^{d}\), we write \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) for the ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) as given by Lemma 1.5, and we let \(\mathrm{GE}^{\Lambda}(\eta)\) be its energy (i.e., the ground energy) in the coupling field \(\eta\) (see (4.6) below for a precise definition). Our goal is to show that, for \((v_{0},k_{0})\in\mathbb{Z}^{D}\), the event
\[\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(v_{0},k_{0})}\neq\rho^{\mathrm{Dob}}_{(v _{0},k_{0})} \tag{1.41}\]
is unlikely (with \(\rho^{\mathrm{Dob}}\) defined in (1.17)).
##### 1.2.3.1. Shifts and energy gap
We aim to obtain an energy gap after a transformation of the configuration and the disorder based on a discrete symmetry, in a similar way to (1.39) and (1.40). The symmetry used is the translation of the \(\mathbb{Z}^{D}\) lattice along its last coordinate, but its use is more complicated than in the disordered SOS model. The amount to translate by is encoded by a function \(\tau:\mathbb{Z}^{d}\to\mathbb{Z}\) having finite \(\mathrm{supp}(\tau):=\{v\in\mathbb{Z}^{d}\colon\tau(v)\neq 0\}\); we call any such function a _shift_.
The shifted disorder \(\eta^{\tau}\) is defined as follows: We fix, once and for all, an arbitrary function \(\iota:E(\mathbb{Z}^{d})\to\mathbb{Z}^{d}\) that chooses an endpoint for each edge (i.e., \(\iota(e)\in e\)). Then
\[\eta^{\tau}_{e}:=\begin{cases}\eta_{e+(0,\tau(u))}&e=\{(u,k),(u,k+1)\}\in E^{ \parallel}(\mathbb{Z}^{D}),\\ \eta_{e+(0,\tau(\iota(\{u,v\})))}&e=\{(u,k),(v,\ell)\}\in E^{\perp}(\mathbb{Z }^{D}),\end{cases} \tag{1.42}\]
where \(\{x,y\}+z=\{x+z,y+z\}\) for \(x,y,z\in\mathbb{Z}^{D}\) (i.e., the "column of disorders" above a base vertex \(u\) is shifted by \(\tau(u)\), and the "column" above a base edge \(\{u,v\}\in E(\mathbb{Z}^{d})\) is shifted by \(\tau(\iota(\{u,v\}))\); see also (4.7) and (4.8)). Two useful features of this definition are that \(\iota(\{u,v\})\) is unimportant when \(\tau(u)=\tau(v)\) and that \((\eta^{\tau_{1}})^{\tau_{2}}=\eta^{\tau_{1}+\tau_{2}}\) for \(\tau_{1},\tau_{2}\) shifts.
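For instance, the second feature is immediate from (1.42): for a parallel edge,
\[(\eta^{\tau_{1}})^{\tau_{2}}_{\{(u,k),(u,k+1)\}}=\eta^{\tau_{1}}_{\{(u,k+\tau_{2}(u)),(u,k+1+\tau_{2}(u))\}}=\eta_{\{(u,k+\tau_{1}(u)+\tau_{2}(u)),(u,k+1+\tau_{1}(u)+\tau_{2}(u))\}}=\eta^{\tau_{1}+\tau_{2}}_{\{(u,k),(u,k+1)\}},\]
and the same computation applies to perpendicular edges with \(u\) replaced by \(\iota(\{u,v\})\).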
The action on the configuration \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) is more complicated, since a simple shift would not suffice to eliminate overhangs. Instead, our definition involves an additional subset \(\tilde{A}\subset\mathbb{Z}^{d}\) (we take \(\tilde{A}\) to be the projection to \(\mathbb{Z}^{d}\) of the overhangs and "interface walls" that we would like to remove; see Section 1.2.3.2 below for our precise definition) and we define, for \((u,k)\in\mathbb{Z}^{D}\),
\[\sigma^{\eta,\Lambda,\mathrm{Dob},\tau,\tilde{A}}_{(u,k)}:=\begin{cases}\sigma ^{\eta,\Lambda,\mathrm{Dob}}_{(u,k+\tau(u))}&u\notin\tilde{A},\\ \rho^{\mathrm{Dob}}_{(u,k)}&u\in\tilde{A}.\end{cases} \tag{1.43}\]
The energy gap obtained from this definition is the difference
\[\mathrm{GE}^{\Lambda}(\eta)-\mathrm{GE}^{\Lambda}(\eta^{\tau})\geq H^{\eta}( \sigma^{\eta,\Lambda,\mathrm{Dob}})-H^{\eta^{\tau}}(\sigma^{\eta,\Lambda, \mathrm{Dob},\tau,\tilde{A}}). \tag{1.44}\]
We choose \(\tau\) and \(\tilde{A}\) so that the right-hand side consists exactly of (twice) the coupling constants corresponding to the overhangs and walls of \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) above \(\tilde{A}\) (more precisely, regarding the overhangs, for each \(u\in\tilde{A}\) such that \(\{u\}\times\mathbb{Z}\) has multiple parallel interface plaquettes we gain the coupling constants of all these plaquettes except the one between \((u,\tau(u))\) and
\((u,\tau(u)+1)\)). This is implied by the following compatibility relations:
\[\text{If }\{u,v\}\in E(\mathbb{Z}^{d})\text{ and }\{u,v\}\not\subset\tilde{A}\text{ then }\tau(u)=\tau(v). \tag{1.45}\]
\[\text{If }u\in\tilde{A},\ v\notin\tilde{A}\text{ and }\{u,v\}\in E(\mathbb{Z}^{d})\text{ then }\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(u,k+\tau(u))}=\rho^{\mathrm{Dob}}_{(u,k)}\text{ for }k\in\mathbb{Z}. \tag{1.46}\]
\[\text{If }u\in\tilde{A}\text{ then }\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(u,\tau(u))}=-\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(u,\tau(u)+1)}\text{ (our construction also gives }\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(u,\tau(u))}=-1\text{)}. \tag{1.47}\]
A key role in our proof of Theorem 1.7 is thus played by defining \(\tau\) and \(\tilde{A}\) as above so that: (i) a sufficient energy gap is generated when (1.41) holds, and (ii) the shift \(\tau\) is taken from a small enough class (that we call _admissible shifts_; see Section 1.2.3.5 below) for which we may develop suitable enumeration theorems for the required union bounds (there is no need to also enumerate over \(\tilde{A}\) as it does not appear on the left-hand side of (1.44)).
#### 1.2.3.2. Definition of \(\tau\) and \(\tilde{A}\)
Let \(E\subset\Lambda\) (initially \(E=\{v_{0}\}\) for the \(v_{0}\) of (1.41). However, later parts in our argument necessitate consideration of more general \(E\)). We aim to define \(\tilde{A}\) as the "projection to \(\mathbb{Z}^{d}\) of the places with overhangs and interface walls which surround \(E\)" and to define \(\tau\) in a compatible and admissible manner. Our definitions are motivated by the ideas of Dobrushin [10], to which we add a new result (Lemma 3.1) in order to define \(\tau\) as an admissible shift. In fact, the absence of bubbles (_finite_ connected components of spins of one sign) in our zero-temperature setup allows us to simplify the approach of [10] and we present a self-contained treatment in Section 7 (with some inspiration from [23, 20]). This also yields an improved dependence on the dimension \(d\). A brief description of our construction follows.
First, we define a function \(I:\mathbb{Z}^{d}\to\mathbb{Z}\cup\{\text{``layered''}\}\), which partitions \(\mathbb{Z}^{d}\) into different regions according to the height of the Dobrushin interface, as follows:
1. \(I(v)=k\) if \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) has a _unique_ sign change in \(\{v\}\times\mathbb{Z}\), with \(\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(v,k)}=-1\) and \(\sigma^{\eta,\Lambda,\mathrm{Dob}}_{(v,k+1)}=1\),
2. \(I(v)=\text{``layered''}\) if \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) has _multiple_ sign changes in \(\{v\}\times\mathbb{Z}\).
Define the set \(V_{\sigma^{\eta,\Lambda,\mathrm{Dob}}}\subset\mathbb{Z}^{d}\) (the "projected interface vertices") as those \(v\) satisfying that there exists an edge \(\{u,v\}\in E(\mathbb{Z}^{d})\) with either \(I(u)\neq I(v)\) or \(I(u)=I(v)=\text{``layered''}\) (i.e., all layered vertices and their neighbors and all non-layered vertices having a neighbor with a different value of \(I\)). We then define \(\tilde{A}\) to be the union of those connected components of \(V_{\sigma^{\eta,\Lambda,\mathrm{Dob}}}\) which surround \(E\) (i.e., those connected components \(C\) for which some vertex of \(E\) lies in a finite connected component of \(\mathbb{Z}^{d}\setminus C\)).
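As a concrete illustration of the definition of \(I\), the following short Python sketch classifies a single column; the function name, arguments and data format are ours and purely illustrative, and the sketch assumes the column is given on a finite window of heights containing all of its sign changes.

```python
def classify_column(signs, k_min, k_max):
    """Given the spins sigma_(v,k) for k_min <= k <= k_max -- a window of
    consecutive heights containing all sign changes of the column, with the
    configuration equal to -1 below the window and +1 above it -- return the
    height k of the unique sign change if there is exactly one, and the
    string "layered" otherwise."""
    changes = [k for k in range(k_min, k_max) if signs[k] != signs[k + 1]]
    return changes[0] if len(changes) == 1 else "layered"


# A column -1,-1,+1,-1,+1,+1 (heights 0..5) is layered; -1,-1,+1,+1 has I(v) = 1.
assert classify_column({0: -1, 1: -1, 2: 1, 3: -1, 4: 1, 5: 1}, 0, 5) == "layered"
assert classify_column({0: -1, 1: -1, 2: 1, 3: 1}, 0, 3) == 1
```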
Second, we define a "pre-shift" \(\tau_{0}:\mathbb{Z}^{d}\to\mathbb{Z}\cup\{\text{``layered''}\}\) as follows: For \(v\in\tilde{A}\) we set \(\tau_{0}(v)=I(v)\). For \(v\notin\tilde{A}\), we let \(B_{v}\) be the connected component of \(v\) in \(\mathbb{Z}^{d}\setminus\tilde{A}\) and observe that \(I\) is necessarily some constant integer on the external vertex boundary of \(B_{v}\); then we set \(\tau_{0}(v)\) equal to this constant (necessarily \(\tau_{0}(v)=0\) if \(B_{v}\) is infinite).
Third, the requisite shift \(\tau\) is formed from \(\tau_{0}\) by setting \(\tau(v)=\tau_{0}(v)\) whenever \(\tau_{0}(v)\in\mathbb{Z}\) and choosing values \(\tau(v)\in\mathbb{Z}\) at those \(v\) where \(\tau_{0}(v)=\text{``layered''}\) (such \(v\) are necessarily in \(\tilde{A}\)). While our choice is limited by the compatibility relation (1.47), this still leaves significant freedom; the main limiting factor is our requirement that \(\tau\) be an admissible shift. To choose the values we use our Lemma 3.1, which gives a mechanism for modifying the configuration \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) on each connected component of layered vertices into a configuration \(\sigma^{\prime}\) with the properties: (i) \(\sigma^{\prime}\) has no overhangs, (ii) if \(\sigma^{\prime}_{(v,k)}=-\sigma^{\prime}_{(v,k+1)}\) at some \((v,k)\) then the same holds for \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) at \((v,k)\), and (iii) \(\sigma^{\prime}\) has at most as many perpendicular interface plaquettes as \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\). Choosing \(\tau\) on layered vertices to be the height of the unique sign change in \(\sigma^{\prime}\) is shown to yield the requisite admissible shift.
#### 1.2.3.3. Chaining and concentration
The above discussion implies that on the event (1.41) there exists an admissible shift \(\tau\) inducing an energy gap in (1.44) (and this gap is large if \(k_{0}\) of (1.41) is large). Consequently, it remains to prove Theorem 4.3 below, which states that it is unlikely that there exists any admissible shift producing an energy gap which is large in absolute value. To this end, motivated by the chaining expansion (1.37) of the RFIM, our first step is to write
\[\mathrm{GE}^{\Lambda}(\eta)-\mathrm{GE}^{\Lambda}(\eta^{\tau})=\sum_{k=0}^{K-1}\left(\mathrm{GE}^{\Lambda}(\eta^{\tau_{2^{k+1}}})-\mathrm{GE}^{\Lambda}(\eta^{\tau_{2^{k}}})\right) \tag{1.48}\]
where \(\tau_{N}\) represents some notion of \(N\)-coarse-graining of \(\tau\), with \(\tau_{2^{0}}=\tau_{1}=\tau\) and with \(K\) large enough that \(\tau_{2^{K}}\equiv 0\) (so that \(\eta^{\tau_{2^{K}}}=\eta\)). We choose to define \(\tau_{N}:\mathbb{Z}^{d}\to\mathbb{Z}\) as a function which is constant on cubes of the form \(v+\{0,1,\ldots,N-1\}^{d}\) with \(v\in N\mathbb{Z}^{d}\), and equal on each such cube \(B\) to the average of \(\tau\) on \(B\) rounded to the closest integer (arbitrarily rounding \(k+1/2\) to \(k\) for integer \(k\)). Significant effort is then devoted in Section 6.2 and Section 6.3 to develop an enumeration theory (reminiscent of [10]) for the number of possibilities for \(\tau_{N}\) according to the complexity of \(\tau\) (complexity is discussed in Section 1.2.3.5 below). The proof also introduces an extra "fine grained" shift \(\tau_{I}\), for \(I\subset[d]=\{1,\ldots,d\}\), which "lies between" \(\tau\) and \(\tau_{2}\) and is obtained by averaging and rounding \(\tau\) on boxes of the form \(v+\{0,1\}^{I}\times\{0\}^{[d]\setminus I}\). This extra ingredient allows our assumptions on the disorder distributions ((1.7) and (1.14)) to become less restrictive as the dimension \(d\) increases.
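To make the averaging-and-rounding step concrete, the following short Python sketch computes \(\tau_{N}\) from \(\tau\) as just described; the function name and data format (dictionaries mapping vertices to integers) are ours and purely illustrative.

```python
import math
from fractions import Fraction
from itertools import product


def coarse_grain_shift(tau, N, d):
    """Return the N-coarse-graining tau_N of a shift tau: Z^d -> Z with finite
    support, passed as a dict {vertex: value} (zero off the dict).  tau_N is
    constant on each cube v + {0,...,N-1}^d with v in N*Z^d, equal there to the
    average of tau over the cube rounded to the closest integer, with k + 1/2
    rounded to k.  The result is returned in the same dict format."""
    corners = {tuple(N * (x // N) for x in p) for p, h in tau.items() if h != 0}
    tau_N = {}
    for v in corners:
        cube = [tuple(v[i] + o[i] for i in range(d))
                for o in product(range(N), repeat=d)]
        avg = Fraction(sum(tau.get(p, 0) for p in cube), len(cube))
        val = math.ceil(avg - Fraction(1, 2))  # rounds k + 1/2 down to k
        if val != 0:
            for p in cube:
                tau_N[p] = val
    return tau_N
```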
The next step following (1.48) is to obtain a concentration inequality for the ground energy differences appearing in its right-hand side, similar to the concentration inequality (1.38) of the RFIM. Here, however, lies a major hurdle in our analysis, as the available inequality is significantly weaker than the one available for the RFIM or the disordered SOS model. Let us describe the inequality that we have. Let \(\tau_{1},\tau_{2}\) be shift functions. We introduce a version of the ground energy in which we minimize over a restricted set of configurations: For \(A\subset\mathbb{Z}^{d}\) and \(b^{\parallel},b^{\perp}\geq 0\), let \(\mathrm{GE}^{\Lambda,A,(b^{\parallel},b^{\perp})}(\eta)\) be the minimal energy in the coupling field \(\eta\) among configurations in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) which have at most \(b^{\parallel}\) parallel plaquettes and at most \(b^{\perp}\) perpendicular plaquettes above \(A\) in the Dobrushin interface (see (6.3) for a precise definition). Then (see Lemma 6.1)
\[\mathbb{P}\left(\left|\,\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau_{1}-\tau_{2}),(b^{\parallel},b^{\perp})}(\eta^{\tau_{1}})-\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau_{1}-\tau_{2}),(b^{\parallel},b^{\perp})}(\eta^{\tau_{2}})\right|\geq t\right)\\ \leq C\exp\left(-c\frac{t^{2}}{\mathrm{wid}(\nu^{\parallel})^{2}b^{\parallel}+\mathrm{wid}(\nu^{\perp})^{2}b^{\perp}}\right), \tag{1.49}\]
so that the concentration estimate deteriorates as \(b^{\parallel}\) and \(b^{\perp}\) grow. Thus, in order to apply (1.49) to the \(k\)th term in (1.48) (and successfully use a union bound over the possible \(\eta^{\tau_{2^{k}}}\) and \(\eta^{\tau_{2^{k+1}}}\)) we need that for sufficiently small \(b^{\parallel}_{k}(s)\) and \(b^{\perp}_{k}(s)\) (depending on \(k\) and the energy gap \(s\); see Lemma 6.2),
\[\mathrm{GE}^{\Lambda}(\eta^{\tau_{2^{k}}})=\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau_{2^{k+1}}-\tau_{2^{k}}),(b^{\parallel}_{k}(s),b^{\perp}_{k}(s))}(\eta^{\tau_{2^{k}}}), \tag{1.50}\] \[\mathrm{GE}^{\Lambda}(\eta^{\tau_{2^{k+1}}})=\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau_{2^{k+1}}-\tau_{2^{k}}),(b^{\parallel}_{k}(s),b^{\perp}_{k}(s))}(\eta^{\tau_{2^{k+1}}}).\]
However, we are not guaranteed that these equalities hold!
#### 1.2.3.4. The maximal energy gap
It remains to deal with the case that one of the equalities (1.50) is violated. Our strategy is to show that in this case there is a new admissible shift \(\tau^{\prime}\) inducing a significantly larger absolute energy gap \(|\operatorname{GE}^{\Lambda}(\eta)-\operatorname{GE}^{\Lambda}(\eta^{\tau^{\prime}})|\) than the shift \(\tau\). The argument then proceeds by focusing on the admissible shift with the _maximal_ energy gap and deducing that for that shift all the equalities (1.50) hold.
To this end, suppose, e.g., that the first equality in (1.50) is violated. Set \(E:=\operatorname{supp}(\tau_{2^{k+1}}-\tau_{2^{k}})\). By definition, this means that \(\sigma^{\eta^{\tau_{2^{k}}},\Lambda,\operatorname{Dob}}\) either has more than \(b_{k}^{\parallel}(s)\) parallel interface plaquettes above \(E\) or has more than \(b_{k}^{\perp}(s)\) perpendicular interface plaquettes above \(E\). We may thus use the construction of Section 1.2.3.2, with \(\sigma^{\eta^{\tau_{2^{k}}},\Lambda,\operatorname{Dob}}\) and \(E\), to define \(\tau^{\prime}\) and \(\tilde{A}^{\prime}\) inducing a large energy gap \(\operatorname{GE}^{\Lambda}(\eta^{\tau_{2^{k}}})-\operatorname{GE}^{\Lambda} ((\eta^{\tau_{2^{k}}})^{\tau^{\prime}})\). If \(b_{k}^{\parallel}(s)\) and \(b_{k}^{\perp}(s)\) are not too small (see Lemma 6.2 for their value) then the new gap will indeed be much greater than the old one, as we require. One difficulty, however, is that the new gap is induced for the shifted disorder \(\eta^{\tau_{2^{k}}}\) rather than for the original disorder \(\eta\). This is simply resolved though, since
\[\begin{split}&\operatorname{GE}^{\Lambda}(\eta^{\tau_{2^{k}}})- \operatorname{GE}^{\Lambda}((\eta^{\tau_{2^{k}}})^{\tau^{\prime}})= \operatorname{GE}^{\Lambda}(\eta^{\tau_{2^{k}}})-\operatorname{GE}^{\Lambda} (\eta^{\tau_{2^{k}}+\tau^{\prime}})\\ &=\left(\operatorname{GE}^{\Lambda}(\eta^{\tau_{2^{k}}})- \operatorname{GE}^{\Lambda}(\eta)\right)-\left(\operatorname{GE}^{\Lambda}( \eta^{\tau_{2^{k}}+\tau^{\prime}})-\operatorname{GE}^{\Lambda}(\eta)\right) \end{split} \tag{1.51}\]
so that a large energy gap induced for the shifted disorder \(\eta^{\tau_{2^{k}}}\) implies a large energy gap in absolute value for the original disorder (induced either by the shift \(\tau_{2^{k}}\) or by the shift \(\tau_{2^{k}}+\tau^{\prime}\)).
#### 1.2.3.5. Admissible shifts
The above argument may give rise to shift functions with a complicated structure. Initially, given the input (1.41), we construct a relatively simple shift \(\tau\) (and set \(\tilde{A}\)) in order to remove the interface walls surrounding the vertex \(v_{0}\). However, as explained in Section 1.2.3.4 above, we may need to replace the shift \(\tau\) by the shifts \(\tau_{2^{k}}\) or \(\tau_{2^{k}}+\tau^{\prime}\) appearing in (1.51), and upon iterating this procedure the shifts may become more and more complicated. We thus need to define a class of shifts which, on the one hand, is broad enough to be closed under such operations and, on the other hand, is narrow enough to enable efficient enumeration (of the number of possibilities for the shift and its coarse grainings), allowing the union bounds in the chaining argument to go through. This is our motivation for defining in Section 4.1.3 the class of _admissible_ shifts, which depends on the coupling field \(\eta\). We measure the complexity of a shift \(\tau\) by its total variation \(\operatorname{TV}(\tau)\) (i.e., the \(\ell_{1}\)-norm of its gradient) and by a quantity \(R(\tau)\) that we call _trip entropy_, which is the minimal length of a path visiting all level components of \(\tau\) (i.e., visiting all connected components of level sets of \(\tau\)). Admissible shifts are then defined as those that induce an energy gap for the coupling field \(\eta\) that is sufficiently large compared to the complexity of the shift. This definition turns out to strike the requisite balance between broadness and narrowness.
## 2. Notation, conventions and concentration results
We use the convention \(\mathbb{N}:=\{1,2,\ldots\}\). For \(k\in\mathbb{N}\), we let \([k]:=\{1,2,\ldots,k\}\) and for any set \(A\), let
\[\binom{A}{k}:=\{I\subseteq A\colon|I|=k\}\]
be the family of subsets of size \(k\) of \(A\).
For \(x\in\mathbb{R}^{m}\) and \(p\geq 1\) we let \(\|x\|_{p}=(\sum_{i=1}^{m}|x_{i}|^{p})^{1/p}\) be the standard \(p\)-norm.
Unless explicitly stated otherwise, all "geometric" notions in \(\mathbb{Z}^{d}\) are with respect to the \(\ell_{1}\) metric. In particular, the (closed) ball of radius \(r\geq 0\) around \(a\in\mathbb{Z}^{d}\) is
\[\mathcal{B}_{r}(a):=\{v\in\mathbb{Z}^{d}\colon\|v-a\|_{1}\leq r\},\]
the diameter of a bounded set \(A\subset\mathbb{Z}^{d}\) is \(\operatorname{diam}(A)=\max_{u_{1},u_{2}\in A}\|u_{1}-u_{2}\|_{1}\), the distance from \(\omega\in\mathbb{Z}^{d}\) to a non-empty set \(A\subset\mathbb{Z}^{d}\) is \(\operatorname{dist}(\omega,A)=\min_{u\in A}\|\omega-u\|_{1}\) and the distance between two non-empty sets \(A,B\subset\mathbb{Z}^{d}\) is \(\operatorname{dist}(A,B)=\min_{u\in A,\,v\in B}\|u-v\|_{1}\); we say that \(u,v\in\mathbb{Z}^{d}\) are adjacent, and denote it by \(u\sim v\), if \(\|u-v\|_{1}=1\); let \(E(\mathbb{Z}^{d}):=\{\{u,v\}\in\binom{\mathbb{Z}^{d}}{2}\colon u\sim v\}\); the _edge boundary_ of a set \(A\subset\mathbb{Z}^{d}\) is
\[\partial A:=\{(u,v)\colon u\in A,\,v\in\mathbb{Z}^{d}\setminus A,\,u\sim v\}\]
its _inner vertex boundary_ is
\[\partial^{\mathrm{in}}A:=\{u\in A\colon\exists v\in\mathbb{Z}^{d}\setminus A \text{ such that }u\sim v\},\]
and its _outer vertex boundary_ is
\[\partial^{\mathrm{out}}A:=\{v\in\mathbb{Z}^{d}\setminus A\colon\exists u\in A \text{ such that }u\sim v\}.\]
Denote by \(\pi\) the projection from \(\mathbb{Z}^{d+1}\) to \(\mathbb{Z}^{d}\) defined by \(\pi(x_{1},\dots,x_{d},x_{d+1})=(x_{1},\dots,x_{d})\).
The proofs of our main results require a concentration inequality for the minimal energy of configurations of the disordered ferromagnet. According to whether the disorder distributions have compact support or are Lipschitz functions of a Gaussian, one of the following two inequalities will be used.
A function \(f:D\to\mathbb{R}\), defined on a subset \(D\subset\mathbb{R}^{n}\), is said to be Lipschitz with constant \(L>0\) if
\[|f(x)-f(y)|\leq L\|x-y\|_{2},\qquad x,y\in D. \tag{2.1}\]
**Theorem 2.1** (Gaussian concentration inequality; see, e.g. [1, Theorem 5.6]).:
_Let \(g_{1},\dots,g_{n}\) be independent standard Gaussian random variables. Suppose \(f\colon\mathbb{R}^{n}\to\mathbb{R}\) is Lipschitz with constant \(L\). Set \(X:=f(g_{1},\dots,g_{n})\). Then \(\mathbb{E}(|X|)<\infty\) and for each \(t>0\),_
\[\mathbb{P}(|X-\mathbb{E}(X)|\geq t)\leq 2e^{-\frac{t^{2}}{2L^{2}}}.\]
The theorem is part of the Gaussian concentration phenomenon as initiated by Paul Levy, Christer Borell, Tsirelson-Ibragimov-Sudakov and Maurey-Pisier.
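As a basic illustration, for a linear function \(f(x)=\sum_{i=1}^{n}a_{i}x_{i}\) one may take \(L=\|a\|_{2}\), and the theorem gives
\[\mathbb{P}\left(\big{|}\sum_{i=1}^{n}a_{i}g_{i}\big{|}\geq t\right)\leq 2e^{-\frac{t^{2}}{2\|a\|_{2}^{2}}},\]
which matches the tail behaviour of \(\sum_{i=1}^{n}a_{i}g_{i}\sim N(0,\|a\|_{2}^{2})\).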
A function \(f:\mathbb{R}^{n}\to\mathbb{R}\) is called _quasi-convex_ if \(\{x\in\mathbb{R}^{n}\colon f(x)\leq s\}\) is a convex set for every \(s\in\mathbb{R}\).
**Theorem 2.2** ([1, Theorem 7.12], going back to Johnson-Schechtman [15], following Talagrand [16]).: _Let \(z_{1},...,z_{n}\) be independent random variables taking values in the interval \([0,1]\) and let \(f:[0,1]^{n}\to\mathbb{R}\) be a quasi-convex function which is also Lipschitz with constant \(1\). Set \(X:=f(z_{1},\dots,z_{n})\). Then, for each \(t>0\),_
\[\mathbb{P}(|X-\mathrm{med}(X)|\geq t)\leq 4e^{-\frac{t^{2}}{4}} \tag{2.2}\]
_where \(\mathrm{med}(X)\) is any median of \(X\)._
We remark that it is standard (and simple; see [16, p. 142]) that (2.2) implies the same conclusion with the median replaced by the (necessarily finite) expectation, in the form
\[\mathbb{P}(|X-\mathbb{E}(X)|\geq t)\leq Ce^{-ct^{2}} \tag{2.3}\]
for some universal constants \(C,c>0\).
For our later use, it is convenient to deduce a unified result from the previous two theorems, applicable to distributions of finite width. For a random variable \(W\) we set
\[\operatorname{wid}(W):=\operatorname{wid}(\mathcal{L}(W)) \tag{2.4}\]
where \(\mathcal{L}(W)\) is the distribution of \(W\) (and \(\operatorname{wid}(\mathcal{L}(W))\) is defined by (1.5)).
**Corollary 2.3**.: _There exist \(C,c>0\) such that the following holds. Let \(W_{1},\ldots,W_{n}\) be independent random variables with \(0<\operatorname{wid}(W_{i})<\infty\) for all \(i\). Suppose \(f\colon\mathbb{R}^{n}\to\mathbb{R}\) is a quasi-convex function which is Lipschitz with constant \(L>0\) in the sense of (2.1). Set_
\[X:=f\left(\frac{W_{1}}{\operatorname{wid}(W_{1})},\ldots,\frac{W_{n}}{ \operatorname{wid}(W_{n})}\right). \tag{2.5}\]
_Then \(\mathbb{E}(|X|)<\infty\) and for each \(t>0\),_
\[\mathbb{P}(|X-\mathbb{E}(X)|\geq t)\leq Ce^{-c\frac{t^{2}}{L^{2}}}. \tag{2.6}\]
We remark regarding the restriction \(\operatorname{wid}(W_{i})>0\) that a distribution \(\nu\) with \(\operatorname{wid}(\nu)=0\) is supported on a single point. Indeed, this is clear if \(\operatorname{wid}(\nu)=\operatorname{diam}(\nu)\), while if \(\operatorname{wid}(\nu)=\operatorname{Lip}(\nu)\) then one may either argue directly or deduce the fact from Theorem 2.1.
Proof of Corollary 2.3.: Let us assume, without loss of generality, that \(0\leq k\leq n\) is such that \(\operatorname{wid}(W_{i})=\operatorname{Lip}(\mathcal{L}(W_{i}))\) for \(1\leq i\leq k\) while \(\operatorname{wid}(W_{i})=\operatorname{diam}(\mathcal{L}(W_{i}))\) for \(k+1\leq i\leq n\). By subtracting suitable constants from the \(W_{i}\) with \(k+1\leq i\leq n\) we may further assume, without loss of generality, that each such \(W_{i}\) is supported on an interval of the form \([0,a_{i}]\) with \(\operatorname{diam}(W_{i})=a_{i}\). This implies that \(W_{i}/\operatorname{wid}(W_{i})\in[0,1]\) for \(k+1\leq i\leq n\), as will be required for using Theorem 2.2.
It suffices to prove that for any \(t>0\) we have
\[\mathbb{P}(|X-\mathbb{E}(X\,|\,W_{1},\ldots,W_{k})|\geq t\,|\,W_{1},\ldots,W_ {k})\leq Ce^{-c\frac{t^{2}}{L^{2}}} \tag{2.7}\]
almost surely, and
\[\mathbb{P}(|\mathbb{E}(X|W_{1},\ldots,W_{k})-\mathbb{E}(X)|\geq t)\leq Ce^{- c\frac{t^{2}}{L^{2}}}. \tag{2.8}\]
Inequality (2.7) follows from Theorem 2.2, in the form (2.3). To see this, first note that \(f/L\) is a quasi-convex function which is Lipschitz with constant \(1\). Conclude that, for any fixed values of \(x_{1},\ldots,x_{k}\in\mathbb{R}\), the restricted function \(x_{k+1},\ldots,x_{n}\mapsto f(x_{1},\ldots,x_{k},x_{k+1},\ldots,x_{n})/L\) satisfies the same properties, and finally recall that \(W_{i}/\operatorname{wid}(W_{i})\in[0,1]\) for \(k+1\leq i\leq n\).
We proceed to deduce inequality (2.8). Observe first that the average of a Lipschitz function with respect to some of its variables is still a Lipschitz function, with the same constant, of the remaining variables. In particular, the function
\[\tilde{f}(x_{1},\ldots,x_{k}):=\mathbb{E}\left(f\left(x_{1},\ldots,x_{k}, \frac{W_{k+1}}{\operatorname{wid}(W_{k+1})},\ldots,\frac{W_{n}}{\operatorname {wid}(W_{n})}\right)\right) \tag{2.9}\]
is Lipschitz with constant \(L\). Fix \(\varepsilon>0\). Let \(g_{1},\ldots,g_{k}\) be independent standard Gaussian random variables. Write, for \(1\leq i\leq k\), \(W_{i}=h_{i}(g_{i})\) where \(h_{i}:\mathbb{R}\to\mathbb{R}\) satisfies \(\operatorname{Lip}(h_{i})\leq\operatorname{Lip}(\mathcal{L}(W_{i}))(1+\varepsilon)\). It follows that \((y_{1},\ldots,y_{k})\mapsto\tilde{f}\left(\frac{h_{1}(y_{1})}{\operatorname{wid}(W_{1})},\ldots,\frac{h_{k}(y_{k})}{\operatorname{wid}(W_{k})}\right)\) is a Lipschitz function with constant \(L(1+\varepsilon)\). Inequality (2.8) then follows from Theorem 2.1, taking into account that \(\varepsilon\) is arbitrary.
## 3. Disorders which are constant on perpendicular plaquettes
Say that an Ising configuration \(\sigma\colon\mathbb{Z}^{d+1}\to\{-1,1\}\) is _interfacial_ if for every \(v\in\mathbb{Z}^{d}\)
\[\lim_{k\to-\infty}\sigma_{(v,k)}=-1\text{ and }\lim_{k\to\infty}\sigma_{(v,k)}=1. \tag{3.1}\]
A configuration \(\sigma\) is said to have _no overhangs_ if it is interfacial and for every \(v\in\mathbb{Z}^{d}\), there is a _unique_\(k\) for which \(\sigma_{(v,k)}=-\sigma_{(v,k+1)}\).
Recall the definition of \(\Omega^{\Delta,\rho}\) and the definition of a ground configuration in \(\Omega^{\Delta,\rho}\) from (1.15). We use these here with \(\Delta=\Lambda\times\mathbb{Z}\) for a finite \(\Lambda\subset\mathbb{Z}^{d}\) and a \(\rho\) with no overhangs. Note that a ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho}\) may not exist for a general coupling field \(\eta\). However, such a ground configuration, which is moreover interfacial, will exist if \(\inf_{e\in E(\mathbb{Z}^{D})}\eta_{e}>0\) (see a related discussion after Observation 4.1).
**Lemma 3.1**.: _Let \(\Lambda\subset\mathbb{Z}^{d}\) be finite and let \(\rho\colon\mathbb{Z}^{d+1}\to\{-1,1\}\) have no overhangs. Suppose the coupling field \(\eta\colon E(\mathbb{Z}^{d+1})\to[0,\infty)\) satisfies that \(\eta\) is constant on \(E^{\perp}(\mathbb{Z}^{d+1})\). Then for each interfacial configuration \(\sigma\in\Omega^{\Lambda\times\mathbb{Z},\rho}\) there exists \(\sigma^{\prime}\in\Omega^{\Lambda\times\mathbb{Z},\rho}\) with no overhangs such that \(H^{\eta}(\sigma^{\prime})\leq H^{\eta}(\sigma)\) and whenever \(\{x,x+e_{d+1}\}\in E(\mathbb{Z}^{d+1})\) is such that \(\sigma^{\prime}_{x}=-1\) and \(\sigma^{\prime}_{x+e_{d+1}}=1\) then also \(\sigma_{x}=-1\) and \(\sigma_{x+e_{d+1}}=1\)._
_Consequently, if \(\eta\) is such that there exists an interfacial ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho}\), then there also exists a ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho}\) which has no overhangs._
We note that in the terminology of (3.2) below, the lemma asserts that the sign changes of \(\sigma^{\prime}\) (having no overhangs) are contained in the odd sign changes of \(\sigma\).
The proof of the lemma uses the following preliminary definitions and proposition.
Fix \(\Lambda\subset\mathbb{Z}^{d}\) finite and a configuration \(\rho\) with no overhangs. We make the following definitions for an interfacial configuration \(\sigma\in\Omega^{\Lambda\times\mathbb{Z},\rho}\):
1. The next definitions capture a notion of "odd" and "even" sign changes in \(\sigma\), \[\begin{split}\text{OSC}(\sigma)&:=\{\{x,x+e_{d+1} \}\in E(\mathbb{Z}^{d+1})\colon\sigma_{x}=-1,\sigma_{x+e_{d+1}}=1\},\\ \text{ESC}(\sigma)&:=\{\{x,x+e_{d+1}\}\in E(\mathbb{ Z}^{d+1})\colon\sigma_{x}=1,\sigma_{x+e_{d+1}}=-1\}.\end{split}\] (3.2) Note that as \(\rho\) has no overhangs and \(\sigma\) is interfacial, then * for each \(v\in\mathbb{Z}^{d}\) there are finitely many \(k\) for which \(\{(v,k),(v,k+1)\}\in\text{OSC}(\sigma)\), with a unique such \(k\) when \(v\in\mathbb{Z}^{d}\setminus\Lambda\). * for each \(v\in\mathbb{Z}^{d}\), the number of \(\{(v,k),(v,k+1)\}\in\text{ESC}(\sigma)\) equals the number of \(\{(v,k),(v,k+1)\}\in\text{OSC}(\sigma)\) minus \(1\). In particular, if \(\{(v,k),(v,k+1)\}\in\text{ESC}(\sigma)\) then \(v\in\Lambda\).
2. Let \(\text{NESC}(\sigma)\) be the number of "adjacent even sign changes" in \(\sigma\), defined as the number of pairs \(\{\{(u,k),(u,k+1)\},\{(v,\ell),(v,\ell+1)\}\}\subset\text{ESC}(\sigma)\) satisfying that \(\{u,v\}\in E(\mathbb{Z}^{d})\) and \(k=\ell\).
3. Define the number of perpendicular domain wall plaquettes above \(\Lambda\) to be \[D^{\Lambda}(\sigma):=|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\} \cap(\Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma_{x}\neq\sigma_{y}\}|.\]
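For instance, a column in which \(\sigma_{(v,k)}=-1\) for \(k\leq 0\), \(\sigma_{(v,1)}=1\), \(\sigma_{(v,2)}=-1\) and \(\sigma_{(v,k)}=1\) for \(k\geq 3\) has two odd sign changes, \(\{(v,0),(v,1)\}\) and \(\{(v,2),(v,3)\}\), and one even sign change, \(\{(v,1),(v,2)\}\), in accordance with the second bullet point above (so that necessarily \(v\in\Lambda\)).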
Finally, we define a partial order on interfacial configurations in \(\Omega^{\Lambda\times\mathbb{Z},\rho}\) as follows: Say that \(\sigma^{\prime}<\sigma\) if
\[D^{\Lambda}(\sigma^{\prime})\leq D^{\Lambda}(\sigma)\]
and, either
\[\text{OSC}(\sigma^{\prime})\subsetneq\text{OSC}(\sigma)\]
or
\[\text{OSC}(\sigma^{\prime})=\text{OSC}(\sigma)\text{ and }\text{NESC}(\sigma^{\prime})>\text{NESC}(\sigma).\]
The following proposition is the key step in proving Lemma 3.1.
**Proposition 3.2**.: _Let \(\sigma\in\Omega^{\Lambda\times\mathbb{Z},\rho}\) be interfacial. If \(\text{ESC}(\sigma)\neq\emptyset\) then there exists \(\sigma^{\prime}\in\Omega^{\Lambda\times\mathbb{Z},\rho}\) such that \(\sigma^{\prime}<\sigma\) (in particular, \(\sigma^{\prime}\) is interfacial)._
Proof.: Fix a \(\sigma\in\Omega^{\Lambda\times\mathbb{Z},\rho}\) with \(\text{ESC}(\sigma)\neq\emptyset\). Fix some \(\{(v_{0},k_{0}),(v_{0},k_{0}+1)\}\in\text{ESC}(\sigma)\). Consider the set of all positions which are directly below even sign changes at height \(k_{0}\),
\[\Delta:=\{v\in\Lambda\colon\{(v,k_{0}),(v,k_{0}+1)\}\in\text{ESC}(\sigma)\}.\]
For a given height \(k\), define the sum of the configuration surrounding \(\Delta\) at height \(k\),
\[S(k):=\sum_{v\in\partial^{\text{out}}\Delta}\sigma_{(v,k)}.\]
The definition of \(\Delta\) implies that \(S(k_{0})\leq S(k_{0}+1)\). Thus, either \(S(k_{0})\leq 0\) or \(S(k_{0}+1)\geq 0\) (or both). Let us assume without loss of generality that \(S(k_{0})\leq 0\) as the other case can be treated analogously.
Define \(k_{1}\leq k_{0}\) to be the smallest integer with the following property: For all \(k_{1}\leq k\leq k_{0}\) it holds that
\[\sigma_{(v,k)}=\sigma_{(v,k_{0})}=1\quad\text{for }v\in\Delta,\]
\[\sigma_{(v,k)}\leq\sigma_{(v,k_{0})}\quad\text{for }v\in\partial^{\text{out}}\Delta.\]
The definition implies, in particular, that
\[S(k)\leq S(k_{0})\leq 0 \tag{3.3}\]
for all \(k_{1}\leq k\leq k_{0}\). Finally, define a configuration \(\sigma^{\prime}\) as follows
\[\sigma^{\prime}_{(v,k)}=\begin{cases}-1&v\in\Delta,k_{1}\leq k\leq k_{0},\\ \sigma_{(v,k)}&\text{otherwise}.\end{cases}\]
The inequality (3.3) implies that \(D^{\Lambda}(\sigma^{\prime})\leq D^{\Lambda}(\sigma)\). Moreover, the definition of \(k_{1}\) implies that either \(\text{OSC}(\sigma^{\prime})\subsetneq\text{OSC}(\sigma)\) or \(\text{OSC}(\sigma^{\prime})=\text{OSC}(\sigma)\) and \(\text{NESC}(\sigma^{\prime})>\text{NESC}(\sigma)\). Thus, \(\sigma^{\prime}<\sigma\), as we wanted to prove.
A repeated use of Proposition 3.2 yields the following corollary.
**Corollary 3.3**.: _For every interfacial \(\sigma\in\Omega^{\Lambda\times\mathbb{Z},\rho}\), there is an interfacial configuration \(\sigma^{\prime}\in\Omega^{\Lambda\times\mathbb{Z},\rho}\) that has a unique sign change above every vertex (i.e., \(\sigma^{\prime}\) has no overhangs), such that every sign change of \(\sigma^{\prime}\) is also a sign change of \(\sigma\) (at the same vertex and height), and \(D^{\Lambda}(\sigma^{\prime})\leq D^{\Lambda}(\sigma)\), i.e.,_
\[|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap( \Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma^{\prime}_{x}\neq\sigma^{ \prime}_{y}\}|\\ \leq|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap( \Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma_{x}\neq\sigma_{y}\}|. \tag{3.4}\]
Proof.: If \(\text{ESC}(\sigma)=\emptyset\) then \(\sigma\) has no overhangs and we are done. Otherwise, apply Proposition 3.2 iteratively to produce a sequence \(\sigma_{m}<\sigma_{m-1}<\cdots<\sigma_{0}=\sigma\) of configurations in \(\Omega^{\Lambda\times\mathbb{Z},\rho}\), with \(\text{ESC}(\sigma_{m})=\emptyset\) (the iterations necessarily terminate at some finite \(m\geq 1\) since the number of odd sign changes above each \(v\in\mathbb{Z}^{d}\) cannot increase and the number of even sign changes above each \(v\in\Lambda\) is no larger than the number of odd sign changes above \(v\)). Then, \(\sigma_{m}\) has no overhangs, and by the definition of the partial order, \(\text{OSC}(\sigma_{m})\subset\text{OSC}(\sigma)\) and \(D^{\Lambda}(\sigma_{m})\leq D^{\Lambda}(\sigma)\).
Lemma 3.1 immediately follows from Corollary 3.3.
Proof of Lemma 3.1.: Let \(\eta\colon E(\mathbb{Z}^{d+1})\to[0,\infty)\) satisfy the properties in the lemma. Let \(\sigma\) be an interfacial configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho}\). Let \(\sigma^{\prime}\) be the configuration guaranteed by Corollary 3.3. Since \(\eta\) is constant on \(E^{\perp}(\mathbb{Z}^{d+1})\), it follows from (3.4) that \(H^{\eta}(\sigma^{\prime})\leq H^{\eta}(\sigma)\).
## 4. Stability of the ground energy under shifts of the disorder and a deduction of Theorem 1.7
In this section we present our main technical result, Theorem 4.3 below, which bounds the probability that certain "admissible shifts of the disorder" lead to a significant change in the energy of the ground configuration under Dobrushin boundary conditions. Our main localization theorem, Theorem 1.7, follows by combining Theorem 4.3 with the fact, stated in Lemma 4.4 below, that admissible shifts inducing large energy changes necessarily exist whenever the interface in the ground configuration deviates (in prescribed locations) from the flat interface. Theorem 4.3 will also be instrumental in the proof of Theorem 1.9 (presented in Section 8) on the convergence of the semi-infinite-volume ground configurations in the infinite-volume limit.
We begin in Section 4.1 with required definitions, continue in Section 4.2 with the statement of our main technical result and finally deduce Theorem 1.7 in Section 4.3.
### Preliminaries
This section contains the required definitions of ground energies, shifts and their action on the disorder, and admissibility of shifts.
#### 4.1.1. Coupling fields, energies and ground configurations
**Generic coupling fields.** We often work with coupling fields \(\eta\) whose values on all edges are uniformly bounded away from \(0\). In addition, in order to ensure uniqueness of finite-volume ground configurations we ask that the coupling field \(\eta\) satisfies the assumption
\[\sum_{i=1}^{k}s_{i}\eta_{f_{i}}\neq 0,\quad k\in\mathbb{N},\ \{s_{i}\}_{i=1}^{k}\subseteq\{-1,1\},\ \{f_{i}\}_{i=1}^{k}\subset E(\mathbb{Z}^{d+1})\text{ distinct and }\{f_{i}\}_{i=1}^{k}\nsubseteq E^{\perp}(\mathbb{Z}^{d+1}). \tag{4.1}\]
This is captured with the following notation: Given \(\alpha^{\|},\alpha^{\perp}\in(0,\infty)\) let
\[\mathcal{D}(\alpha^{\|},\alpha^{\perp}):=\left\{\eta\colon E(\mathbb{Z}^{d+1})\to(0,\infty)\colon\begin{subarray}{c}\eta_{e}\in(\alpha^{\|},\infty)\text{ for }e\in E^{\|}(\mathbb{Z}^{d+1}),\\ \eta_{e}\in(\alpha^{\perp},\infty)\text{ for }e\in E^{\perp}(\mathbb{Z}^{d+1}),\\ \eta\text{ satisfies (4.1)}\end{subarray}\right\}\]
and set \(\mathcal{D}:=\bigcup_{\alpha^{\|},\alpha^{\perp}>0}\mathcal{D}(\alpha^{\|},\alpha^{\perp})\).
Now, given also a coupling field \(\eta\colon E(\mathbb{Z}^{d+1})\to[0,\infty]\) and a finite \(\Lambda\subset\mathbb{Z}^{d}\) define the Hamiltonian
\[\mathcal{H}^{\eta,\Lambda}(\sigma):=\sum_{\begin{subarray}{c}\{x,y\}\in E( \mathbb{Z}^{d+1})\\ \{x,y\}^{\cap}(\Lambda\times\mathbb{Z})\neq\emptyset\end{subarray}}\eta_{\{x,y \}}(1-\sigma_{x}\sigma_{y})=2\sum_{\begin{subarray}{c}\{x,y\}\in E(\mathbb{Z}^ {d+1})\\ \{x,y\}\cap(\Lambda\times\mathbb{Z})\neq\emptyset\end{subarray}}\eta_{\{x,y \}}1_{\sigma_{x}\neq\sigma_{y}} \tag{4.5}\]
and note that it is well defined on \(\Omega^{\Lambda,\mathrm{Dob}}\).
From the following observation it follows that the minimizers of the Hamiltonian \(\mathcal{H}^{\eta,\Lambda}\) in \(\Omega^{\Lambda,\mathrm{Dob}}\) coincide with the ground configurations discussed in Lemma 1.5. It is proved in appendix B for completeness.
**Observation 4.1**.: _Let \(\sigma,\sigma^{\prime}\in\Omega^{\Lambda,\mathrm{Dob}}\), and \(\eta:E(\mathbb{Z}^{D})\to[0,\infty)\). The following holds_
\[H^{\eta}(\sigma)-H^{\eta}(\sigma^{\prime})=\mathcal{H}^{\eta,\Lambda}(\sigma) -\mathcal{H}^{\eta,\Lambda}(\sigma^{\prime}).\]
We note that when \(\eta\in\mathcal{D}\) then there is a unique minimizer of \(\mathcal{H}^{\eta,\Lambda}\) in \(\Omega^{\Lambda,\mathrm{Dob}}\). Indeed, there are only finitely many configurations in \(\Omega^{\Lambda,\mathrm{Dob}}\) whose energy is lower than \(\mathcal{H}^{\eta,\Lambda}(\rho^{\mathrm{Dob}})\), and no two of them have equal energy by (4.1). With a slight abuse of notation, we will denote this unique minimizer by \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\), noting that it coincides with the minimizer of Lemma 1.5 under the assumptions there. We will use the terminology _ground energy_ to refer to the energy of the minimizing configuration. We thus define, for each \(\eta\in\mathcal{D}\) and finite \(\Lambda\subset\mathbb{Z}^{d}\),
\[\mathrm{GE}^{\Lambda}(\eta):=\mathcal{H}^{\eta,\Lambda}(\sigma^{\eta,\Lambda, \mathrm{Dob}}). \tag{4.6}\]
#### 4.1.2. Shifts of the coupling field
**Shifts and shifted coupling fields.** We use the term _shift_ to denote any function \(\tau\colon\mathbb{Z}^{d}\to\mathbb{Z}\) which equals zero except at finitely many vertices. We denote the (finite) support of \(\tau\) by
\[\mathrm{supp}(\tau):=\{v\in\mathbb{Z}^{d}\colon\tau(v)\neq 0\}.\]
We occasionally refer to the \(\ell_{1}\) norm of a shift, \(\|\tau\|_{1}:=\sum_{v\in\mathbb{Z}^{d}}|\tau_{v}|\). The set of all shifts will be denoted by \(\mathcal{S}\).
We define an operation of shifts on coupling fields \(\eta\): first fix an arbitrary choice function \(\iota:E(\mathbb{Z}^{d})\to\mathbb{Z}^{d}\) that chooses for each edge one of its endpoints, i.e., \(\iota(e)\in e\) for every \(e\in E(\mathbb{Z}^{d})\); the shifted coupling field \(\eta^{\tau}\) is defined by shifting the "column of disorders" above a base vertex \(u\) by \(\tau(u)\), and a similar shift up for "columns" above any base edge \(\{u,v\}\) such that \(\iota(\{u,v\})=u\). Precisely, given a shift \(\tau\) and a disorder \(\eta\colon E(\mathbb{Z}^{d+1})\to[0,\infty)\), define \(\eta^{\tau}\colon E(\mathbb{Z}^{d+1})\to[0,\infty)\) as follows: for every \(u\in\mathbb{Z}^{d}\) and \(k\in\mathbb{Z}\),
\[\eta^{\tau}_{\{(u,k),(u,k+1)\}}:=\eta_{\{(u,k+\tau(u)),(u,k+1+\tau(u))\}}, \tag{4.7}\]
and for every \(\{u,v\}\in E(\mathbb{Z}^{d})\) and \(k\in\mathbb{Z}\),
\[\eta^{\tau}_{\{(u,k),(v,k)\}}:=\eta_{\{(u,k+\tau(\iota(\{u,v\}))),(v,k+\tau( \iota(\{u,v\})))\}}. \tag{4.8}\]
Note that if \(\tau(u)=\tau(v)\) for adjacent \(u,v\in\mathbb{Z}^{d}\), then for every \(k\in\mathbb{Z}\),
\[\eta^{\tau}_{\{(u,k),(v,k)\}}=\eta_{\{(u,k+\tau(u)),(v,k+\tau(u))\}}=\eta_{\{( u,k+\tau(v)),(v,k+\tau(v))\}}. \tag{4.9}\]
**Changes to the ground energy.** Of central importance in our arguments will be the change in ground energy induced by shifts of the coupling field. This is captured by the following definition. For each \(\eta\in\mathcal{D}\), finite \(\Lambda\subset\mathbb{Z}^{d}\) and shifts \(\tau,\tau^{\prime}\) we set
\[G^{\eta,\Lambda}(\tau,\tau^{\prime}):=\mathrm{GE}^{\Lambda}(\eta^{\tau^{\prime }})-\mathrm{GE}^{\Lambda}(\eta^{\tau}). \tag{4.10}\]
We also abbreviate
\[G^{\eta,\Lambda}(\tau):=G^{\eta,\Lambda}(\tau,0)=\operatorname{GE}^{\Lambda}( \eta)-\operatorname{GE}^{\Lambda}(\eta^{\tau}). \tag{4.11}\]
With these definitions, for any shifts \(\tau_{1},\dots,\tau_{k}\) we have the telescopic sum
\[G^{\eta,\Lambda}(\tau_{1})=\sum_{i=1}^{k-1}G^{\eta,\Lambda}(\tau_{i},\tau_{i+1 })+G^{\eta,\Lambda}(\tau_{k}). \tag{4.12}\]
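Indeed, by (4.10) and (4.11) the right-hand side of (4.12) telescopes:
\[\sum_{i=1}^{k-1}\left(\mathrm{GE}^{\Lambda}(\eta^{\tau_{i+1}})-\mathrm{GE}^{\Lambda}(\eta^{\tau_{i}})\right)+\left(\mathrm{GE}^{\Lambda}(\eta)-\mathrm{GE}^{\Lambda}(\eta^{\tau_{k}})\right)=\mathrm{GE}^{\Lambda}(\eta)-\mathrm{GE}^{\Lambda}(\eta^{\tau_{1}})=G^{\eta,\Lambda}(\tau_{1}).\]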
#### 4.1.3. Enumeration of shifts, admissible shifts and the maximal energetic change
The counting of various classes of shifts plays an important role in our arguments (the shifts play a role somewhat analogous to that of contours in the classical Peierls argument). To facilitate it, we need a way to succinctly describe shifts. To this end, the following notations regarding a shift \(\tau\) are handy:
* The _total variation_ of \(\tau\) is defined as \[\operatorname{TV}\left(\tau\right):=\sum_{\{u,v\}\in E(\mathbb{Z}^{d})}|\tau( u)-\tau(v)|.\]
* A _level component_ of a shift \(\tau\) is a connected set on which \(\tau\) is constant and which is not strictly contained in another set with this property (i.e., a connected component of \(\tau^{-1}(k)\) for some \(k\)). Denote the collection of all level components of \(\tau\) by \(\mathcal{LC}(\tau)\).
* A finite sequence \((v_{i})_{i\geq 0}\) of points in \(\mathbb{Z}^{d}\) with \(v_{0}=0\) is a _root sequence_ for a collection \(\mathcal{F}\) of sets in \(\mathbb{Z}^{d}\) if there is a point of \(\{v_{i}\}_{i\geq 0}\) in every set in \(\mathcal{F}\). We further define the _trip entropy_\(R(\tau)\) of the shift \(\tau\) as \[R(\tau):=\min\left\{\sum_{i\geq 1}\|v_{i}-v_{i-1}\|_{1}\colon(v_{i})_{i\geq 0} \text{ is a root sequence for }\mathcal{LC}(\tau)\right\}.\] Similarly, define the trip entropy \(R(E)\) of a set \(E\subseteq\mathbb{Z}^{d}\) as \[R(E):=\min\left\{\sum_{i\geq 1}\|v_{i}-v_{i-1}\|_{1}\colon\begin{subarray}{c }(v_{i})_{i\geq 0}\text{ is a root sequence for the collection}\\ \text{ of connected components of }E\end{subarray}\right\}.\]
These definitions are put to use in estimating the number of shifts in Proposition 6.3.
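As an illustration of these quantities, the following Python sketch (assuming a shift given as a finitely supported dictionary on \(\mathbb{Z}^{d}\), zero elsewhere) computes the total variation and the level components at nonzero levels; the trip entropy \(R(\tau)\) involves a minimisation over root sequences and is not computed here.

```python
# Minimal sketch: total variation and nonzero level components of a shift tau
# given as a dict from Z^d vertices (tuples) to integers, zero elsewhere.

def neighbours(v):
    d = len(v)
    for i in range(d):
        for s in (1, -1):
            yield tuple(v[j] + (s if j == i else 0) for j in range(d))

def total_variation(tau):
    """TV(tau) = sum over edges {u,v} of |tau(u) - tau(v)|."""
    tv, seen = 0, set()
    for u in tau:                      # only edges touching supp(tau) can contribute
        for v in neighbours(u):
            e = frozenset((u, v))
            if e in seen:
                continue
            seen.add(e)
            tv += abs(tau.get(u, 0) - tau.get(v, 0))
    return tv

def nonzero_level_components(tau):
    """Connected components of tau^{-1}(k), k != 0 (the infinite zero level is omitted)."""
    support = {v for v, k in tau.items() if k != 0}
    comps, unvisited = [], set(support)
    while unvisited:
        root = unvisited.pop()
        comp, stack = {root}, [root]
        while stack:
            u = stack.pop()
            for v in neighbours(u):
                if v in unvisited and tau[v] == tau[root]:
                    unvisited.remove(v)
                    comp.add(v)
                    stack.append(v)
        comps.append(comp)
    return comps

if __name__ == "__main__":
    tau = {(0, 0): 1, (1, 0): 1, (3, 3): 2}   # d = 2 example
    print(total_variation(tau))                # prints 14
    print(nonzero_level_components(tau))       # two level components
```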
We next define a restricted class of shifts, depending on \(\eta\), that we term _admissible shifts_ (while restricted, the set of admissible shifts is still defined in a broad enough fashion to contain all the shifts arising in our proof). Very roughly, the class consists of those shifts whose action on the coupling field induces an energetic change to the ground energy (as defined in (4.11)) which is large enough to compensate for the number of shifts in the class. A first estimate on the number of shifts with given parameters is provided by Proposition 6.3 below; however, this estimate will need to be refined further in our argument, where we will also need to control the number of coarse grainings (and fine grainings) of our shifts. The need to account also for these more refined counting problems lies at the heart of our choice for the definition of root sequence above and the precise definition of admissible shifts below.
Given a coupling field \(\eta\colon E(\mathbb{Z}^{d+1})\to[0,\infty)\), finite \(\Lambda\subset\mathbb{Z}^{d}\) and positive \(\alpha^{\parallel},\alpha^{\perp}\), the class of \((\alpha^{\parallel},\alpha^{\perp})\)-admissible shifts is defined by
\[\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp}):=\left\{\tau \in\mathcal{S}\colon|G^{\eta,\Lambda}(\tau)|\geq\max\left\{\frac{\alpha^{ \perp}}{2}\operatorname{TV}(\tau),\min\{\alpha^{\parallel},\alpha^{\perp}\} \frac{d}{200}R(\tau)\right\}\right\}.\]
Lastly, we introduce notation for the maximal change in the ground energy induced by an \((\alpha^{\parallel},\alpha^{\perp})\)-admissible shift,
\[\operatorname{MG}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp}):=\sup_{ \tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})}|G^{\eta,\Lambda}(\tau)|. \tag{4.13}\]
Our proof will make use of the fact that \(\operatorname{MG}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})<\infty\), almost surely, under suitable assumptions on the disorder distributions. This is implied by the following lemma, proved in appendix B.
**Lemma 4.2**.: _In the anisotropic disordered ferromagnet, suppose the disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfy (1.11). Then, for any finite \(\Lambda\subset\mathbb{Z}^{d}\) and positive \(\alpha^{\parallel},\alpha^{\perp}\) we have that \(|\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})|<\infty\) almost surely._
### Stability of the ground energy
We proceed to state our main technical result, Theorem 4.3 below. It gives a quantitative bound on the probability that there exists an admissible shift whose action on the disorder yields a large change in the ground energy.
**Theorem 4.3**.: _There exist constants \(c_{0},c,C>0\) such that the following holds. In the anisotropic disordered ferromagnet, suppose the disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfy (1.11). Let \(\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d)\) be as in definition (1.12). Let \(\underline{\alpha}^{\parallel}\) and \(\underline{\alpha}^{\perp}\) be the minimums of the supports of \(\nu^{\parallel}\) and \(\nu^{\perp}\), as in (1.13). Let \(D=d+1\geq 4\) and suppose that condition (1.14) holds (with the constant \(c_{0}\)). Then the following holds for all finite \(\Lambda\subset\mathbb{Z}^{d}\) and \(t>0\),_
\[\mathbb{P}\left(\operatorname{MG}^{\eta,\Lambda}(\underline{\alpha}^{ \parallel},\underline{\alpha}^{\perp})\geq t\right)\leq C\exp\left(-\frac{c}{ \kappa d^{2}}\left(\frac{t}{\underline{\alpha}^{\perp}}\right)^{\frac{d-2}{d-1 }}\right). \tag{4.14}\]
_Moreover, for small \(t\) we have an improved dependence on dimension: if \(t<\underline{\alpha}^{\perp}2^{d}\) then_
\[\mathbb{P}\left(\operatorname{MG}^{\eta,\Lambda}(\underline{\alpha}^{ \parallel},\underline{\alpha}^{\perp})\geq t\right)\leq C\exp\left(-\frac{ct}{ \kappa\underline{\alpha}^{\perp}}\right). \tag{4.15}\]
The theorem will be proven at the end of subsection 5.2.
### Deduction of Theorem 1.7
The following deterministic lemma shows that if the interface of the ground configuration is not flat around the origin then there necessarily exists an admissible shift whose action on the coupling field induces a large change in the ground energy.
**Lemma 4.4**.: _Let \(\eta\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\) for some \(\alpha^{\parallel},\alpha^{\perp}>0\) and let \(\Lambda\subset\mathbb{Z}^{d}\) be a finite subset. If \(\sigma^{\eta,\Lambda,\operatorname{Dob}}_{(0,k)}\neq\rho^{\operatorname{Dob}}_ {(0,k)}\) then there exists a shift \(\tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})\) for which_
\[G^{\eta,\Lambda}(\tau)\geq 2|k|\alpha^{\perp}.\]
The lemma is proved in Section 7.5.
Theorem 1.7 follows as a direct consequence of Lemma 4.4 and Theorem 4.3. First, it suffices to establish the inequality (1.19) at the vertex \(v=0\), since the choice of \(\Lambda\) in Theorem 1.7 is arbitrary. Then, inequality (1.19) with \(v=0\) follows directly by combining Lemma 4.4 with (4.14).
## 5. Coarse and fine grainings of shifts and their use in proving the stability of the ground energy
In this section we take the first step towards proving Theorem 4.3, describing a form of "chaining" argument on the set of admissible shifts which is used to control their effect on the ground energy. The notion of coarse grainings of shifts which lies at the heart of our chaining argument is modelled after a similar graining method for sets which was introduced by Fisher-Frohlich-Spencer [10] in their discussion of the domain walls in the random-field Ising model.
### Coarse and fine grainings of shifts
The chaining argument is based on the notions of coarse and fine grainings of shifts that we now describe.
Given a partition \(\mathcal{P}\) of \(\mathbb{Z}^{d}\) into finite sets and a shift \(\tau\), we write \(\tau_{\mathcal{P}}\) for the shift obtained by averaging the value of \(\tau\) on each partition element of \(\mathcal{P}\) and rounding to the closest integer. Precisely, we set
\[\tau_{\mathcal{P}}(v):=\left[\frac{1}{|P(v)|}\sum_{u\in P(v)}\tau(u)\right] \tag{5.1}\]
where we write \(P(v)\) for the unique partition element of \(\mathcal{P}\) containing \(v\), and where \([a]\) is the rounding of \(a\) to the nearest integer, with the convention \(\left[k+\frac{1}{2}\right]=k\) for \(k\in\mathbb{Z}\).
We make use of two special cases of the above definition:
* Coarse graining: Given an integer \(N\geq 1\), we use the notation \(\tau_{N}:=\tau_{\mathcal{P}_{N}}\) (as in (5.1)), where \(\mathcal{P}_{N}\) is the following partition into discrete cubes of side length \(N\), \[\mathcal{P}_{N}=\{Q_{N}(v)\}_{v\in N\mathbb{Z}^{d}}\quad\text{ where}\quad Q_{N}(v):=v+\{0,1,\ldots,N-1\}^{d}.\]
* Fine graining: Given a subset of the coordinates \(I\subset[d]\), we use the notation \(\tau_{I}:=\tau_{\mathcal{P}_{I}}\) (as in (5.1)), where \(\mathcal{P}_{I}\) is the following partition into discrete boxes with side length \(2\) in the directions in \(I\) and side length \(1\) in the directions in \([d]\setminus I\), \[\mathcal{P}_{I}=\{Q_{I}(v)\}_{v\in(2\mathbb{Z})^{I}\times\mathbb{Z}^{[d] \setminus I}}\quad\text{where}\quad Q_{I}(v):=v+\{0,1\}^{I}\times\{0\}^{[d] \setminus I}.\]
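The following Python sketch illustrates the coarse graining \(\tau_{N}\) of (5.1) on the cubes meeting the support of the shift, with the rounding convention \([k+\frac{1}{2}]=k\); the fine graining \(\tau_{I}\) is entirely analogous, with boxes of side length \(2\) in the directions of \(I\). The representation of \(\tau\) as a finitely supported dictionary is an assumption of the sketch.

```python
# Minimal sketch of the coarse graining tau_N from (5.1): average tau over each
# cube Q_N(v), v in N*Z^d, and round to the nearest integer, ties rounded down.
import math
from fractions import Fraction
from itertools import product

def round_half_down(a):
    """Nearest integer to a, with the convention [k + 1/2] = k."""
    return math.ceil(a - Fraction(1, 2))

def coarse_grain(tau, N, d):
    """Return tau_N on the cubes meeting supp(tau), keyed by the cube base point.

    tau : dict mapping Z^d vertices (tuples) to integers, zero elsewhere.
    """
    cubes = {tuple(N * (c // N) for c in v) for v in tau}   # cubes meeting supp(tau)
    tau_N = {}
    for base in cubes:
        cube = product(*(range(b, b + N) for b in base))
        avg = Fraction(sum(tau.get(v, 0) for v in cube), N ** d)
        tau_N[base] = round_half_down(avg)
    return tau_N

if __name__ == "__main__":
    # d = 2, N = 2: the cube {0,1}^2 has average (1+1+0+0)/4 = 1/2,
    # which the convention rounds down to 0.
    tau = {(0, 0): 1, (0, 1): 1}
    print(coarse_grain(tau, 2, 2))   # {(0, 0): 0}
```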
### The chaining argument
We work under the assumptions of Theorem 4.3. Precisely, let the disorder \(\eta\) be sampled as in the anisotropic disordered ferromagnet, with the disorder distributions \(\nu^{\parallel}\) and \(\nu^{\perp}\) satisfying (1.11). Let \(D=d+1\geq 4\) and suppose that condition (1.14) holds with the constant \(c_{0}>0\) chosen sufficiently small for the arguments below.
Fix a finite \(\Lambda\subset\mathbb{Z}^{d}\). For brevity, we will write \(G\) for \(G^{\eta,\Lambda}\), \(\mathcal{AS}\) for \(\mathcal{AS}^{\eta,\Lambda}(\underline{\alpha}^{\parallel},\underline{\alpha}^ {\perp})\) (recall (1.13)), _admissible_ for \((\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp})\)-_admissible_ and MG for \(\operatorname{MG}^{\eta,\Lambda}(\underline{\alpha}^{\parallel},\underline{ \alpha}^{\perp})\). First, since \(\operatorname{MG}<\infty\) almost surely by Lemma 4.2, we have
\[\mathbb{P}(\operatorname{MG}\geq t)=\sum_{k=0}^{\infty}\mathbb{P}\left( \operatorname{MG}\in[t2^{k},t2^{k+1})\right).\]
Next, for any \(s>0\), integer \(K\geq 1\), integer \(1\leq r\leq d\), any positive \((\gamma_{j})_{j\in[K]\cup\{(0,r),(r,1)\}}\) with \(\gamma_{(0,r)}+\gamma_{(r,1)}+\sum_{1\leq k\leq K}\gamma_{k}\leq 1\) and any function \(I_{\tau}\) which assigns a subset of \([d]\) of size \(r\) to each shift \(\tau\), we have the chaining argument (noting that the supremum is realized in (4.13)
due to Lemma 4.2, and also recalling (4.12))
\[\mathbb{P}(\,\mathrm{MG}\in[s,2s))=\mathbb{P}\left(\{\mathrm{MG}\leq 2s \}\cap\left\{\max_{\tau\in\mathcal{AS}}|G(\tau)|\geq s\right\}\right)\] \[= \mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\left\{\max_{\tau\in \mathcal{AS}}|G(\tau,\tau_{I_{\tau}})+G(\tau_{I_{\tau}},\tau_{2})+\sum_{k=1}^{K -1}G(\tau_{2^{k}},\tau_{2^{k+1}})+G(\tau_{2^{K}})|\geq s\right\}\right)\] \[\leq \mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\left\{\max_{\tau\in \mathcal{AS}}|G(\tau,\tau_{I_{\tau}})|\geq\gamma_{(0,|I_{\tau}|)}s\right\}\right) \tag{5.2a}\] \[+\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\left\{\max_{\tau\in \mathcal{AS}}|G(\tau_{I_{\tau}},\tau_{2})|\geq\gamma_{(|I_{\tau}|,1)}s\right\}\right)\] (5.2b) \[+\sum_{k=1}^{K-1}\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap \left\{\max_{\tau\in\mathcal{AS}}|G(\tau_{2^{k}},\tau_{2^{k+1}})|\geq\gamma_{ k}s\right\}\right)\] (5.2c) \[+\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\left\{\max_{\tau\in \mathcal{AS}}|G(\tau_{2^{K}})|\geq\gamma_{K}s\right\}\right). \tag{5.2d}\]
The following notion will become useful for bounding the terms (5.2a) and (5.2b) from above. A set of indices \(I\subseteq[d]\) will be called _compatible_ with a shift \(\tau\) if it holds that
\[\mathrm{TV}(\tau_{I})\leq 20(2|I|+1)\,\mathrm{TV}(\tau)\qquad\text{and} \qquad\|\tau_{I}-\tau\|_{1}\leq\frac{4|I|}{d}\,\mathrm{TV}(\tau).\]
Denote
\[\mathrm{comp}(\tau):=\left\{I\subset[d]\colon I\text{ is compatible with }\tau\right\}.\]
The following proposition is proved in Section 6.2.2.
**Proposition 5.1**.: _Let \(\tau\) be a shift. For each \(0\leq r\leq d\) there exists \(I\in\mathrm{comp}(\tau)\) with \(|I|=r\)._
It is clear that sufficiently coarse grainings of a shift yield the identically zero shift. The following proposition, proved in Section 6.2.1, quantifies this statement.
**Proposition 5.2**.: _Let \(\tau\) be a shift. For each integer \(N>\sqrt[d]{2}\left(\frac{\mathrm{TV}(\tau)}{2d}\right)^{\frac{1}{d-1}}\) it holds that \(\tau_{N}\equiv 0\)._
The next lemma, whose proof will be the focus of Section 6, allows us to estimate the expressions (5.2a), (5.2b) and (5.2c).
**Lemma 5.3**.: _Define \(\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d)\) as in (1.12):_
\[\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d):=\left(\frac{1}{\underline{\alpha}^{\parallel}\underline{\alpha}^{\perp}}+\frac{1}{d(\underline{\alpha}^{\perp})^{2}}\right)\mathrm{wid}(\nu^{\parallel})^{2}+\frac{1}{(\underline{\alpha}^{\perp})^{2}}\,\mathrm{wid}(\nu^{\perp})^{2}.\]
_There exist universal constants \(C,c>0\) such that the following hold for every \(s>0\)._
1. _For any_ \(1\leq r\leq d\)_, any map_ \(\tau\mapsto I_{\tau}\) _assigning to each shift_ \(\tau\) _a compatible set_ \(I_{\tau}\in\mathrm{comp}(\tau)\) _with_ \(|I_{\tau}|=r\)_, and_ \(Cr\kappa\frac{\log d}{d}\left(1+\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}}\right)\leq\Gamma\leq 1\)_,_ \[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\left\{\exists\tau\in\mathcal{AS}\colon|G(\tau,\tau_{I_{\tau}})|>\sqrt{\Gamma}s\right\}\right)\leq C\exp\left(-c\frac{\Gamma}{\kappa\underline{\alpha}^{\perp}r}s\right).\]
2. _For any_ \(1\leq r\leq d\)_, any map_ \(\tau\mapsto I_{\tau}\) _assigning to each shift_ \(\tau\) _a compatible set_ \(I_{\tau}\in\operatorname{comp}(\tau)\) _with_ \(|I_{\tau}|=r\)_, and_ \(C\kappa\frac{dr}{2^{r}}\left(r+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}dr}+1\right)\right)\leq\Gamma\leq 1\)_,_ \[\mathbb{P}\left(\{\operatorname{MG}\leq 2s\}\cap\left\{\exists\tau\in\mathcal{AS}\colon|G(\tau_{I_{\tau}},\tau_{2})|>\sqrt{\Gamma}s\right\}\right)\leq C\exp\left(-c\frac{\Gamma}{\kappa\underline{\alpha}^{\perp}d}s\right).\]
3. _For any_ \(k\geq 1\) _and_ \(C\kappa\frac{d^{3}}{2^{k(d-2)}}\left(dk+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}d^{2}}+1\right)\right)\leq\Gamma\leq 1\)_,_ \[\mathbb{P}\left(\{\operatorname{MG}\leq 2s\}\cap\left\{\exists\tau\in\mathcal{AS}\colon|G(\tau_{2^{k}},\tau_{2^{k+1}})|\geq\sqrt{\Gamma}s\right\}\right)\leq C\exp\left(-c\frac{\Gamma}{\kappa\underline{\alpha}^{\perp}d^{2}2^{k}}s\right).\]
For small \(s\), we may obtain an improved dependence on the dimension \(d\), using the following lemma.
**Lemma 5.4**.: _There exist universal constants \(C,c>0\) such that the following holds. Assume \(0<s<\underline{\alpha}^{\perp}4^{d}\), and let \(\kappa=\kappa(\nu^{\parallel},\nu^{\perp},d)\) be as in (1.12). Then_
\[\mathbb{P}\left(\{\operatorname{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{A} \mathcal{S}\colon|G(\tau)|>s\}\right)\leq C\exp\left(-\frac{c}{\kappa \underline{\alpha}^{\perp}}s\right). \tag{5.3}\]
Proof of Theorem 4.3.: Throughout the proof we will use \(C\) and \(c\) to denote positive absolute constants; the values of these constants will be allowed to change from line to line, even within the same calculation, with the value of \(C\) increasing and the value of \(c\) decreasing.
Set \(K:=\lceil\frac{1}{d-1}\log_{2}\left(\frac{4s}{\underline{\alpha}^{\perp}d} \right)\rceil+1\). By Proposition 5.2 and the definition of admissibility, the term (5.2d) vanishes for any choice of \(\gamma_{K}\).
Set \(\gamma_{(0,r)}=\gamma_{(r,1)}:=\frac{1}{4}\) and \(\gamma_{k}:=\gamma\,2^{-\frac{1}{4}\min\{k,K-k\}}\) for any \(1\leq k\leq K-1\), where \(\gamma=\left(2\sum_{k=1}^{K-1}2^{-\frac{1}{4}\min\{k,K-k\}}\right)^{-1}\). Set \(r:=\lceil\min\{10\log_{2}d,\frac{d}{2}\}\rceil\) and fix a map \(\tau\mapsto I_{\tau}\) assigning to each shift \(\tau\) a compatible set \(I_{\tau}\in\operatorname{comp}(\tau)\) of size \(r\), which exists by Proposition 5.1. Notice that \(\gamma_{(0,r)}+\gamma_{(r,1)}+\sum_{k=1}^{K-1}\gamma_{k}=1\) and that \(1\leq r\leq d\), so one may use the chaining argument (5.2).
Recall that \(\kappa\left(1+\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}}\right)\leq c_{0}\frac{d+1}{\log^{2}(d+1)}\), by assumption (1.14) on the disorder distributions. It is easy to verify that for \(c_{0}\) sufficiently small, this enables us to use the first part of Lemma 5.3 to bound the term (5.2a), the second part of Lemma 5.3 to bound the term (5.2b), and the third part of Lemma 5.3 to bound the term (5.2c) (recall that the term (5.2d) vanishes).
This yields that for every positive \(s\),
\[\mathbb{P}(\operatorname{MG}\in[s,2s))\leq C\exp\left(-c\frac{s}{\kappa\underline{\alpha}^{\perp}r}\right)+C \exp\left(-c\frac{s}{\kappa\underline{\alpha}^{\perp}d}\right)+C\sum_{k=1}^{ \lceil\frac{K}{2}\rceil}\exp\left(-c\frac{s}{\kappa\underline{\alpha}^{\perp} d^{2}2^{\frac{3}{2}k}}\right)\] \[+C\sum_{k=\lceil\frac{K}{2}\rceil+1}^{K-1}\exp\left(-c\frac{s}{ \kappa\underline{\alpha}^{\perp}d^{2}2^{K}}\right)\]
Now, noticing that the \(K-\lceil\frac{K}{2}\rceil-1\) last summands are asymptotically dominant and that \(2^{K}\leq 4\left(\frac{4s}{\underline{\alpha}^{\perp}d}\right)^{\frac{1}{d-1}}\), one gets the bound
\[\mathbb{P}(\operatorname{MG}\in[s,2s))\leq C\exp\left(-\frac{c}{\kappa d^{2}} \left(\frac{s}{\underline{\alpha}^{\perp}}\right)^{\frac{d-2}{d-1}}\right)\]
for every positive \(s\). Hence,
\[\mathbb{P}(\mathrm{MG}\geq t) =\sum_{i=0}^{\infty}\mathbb{P}(\mathrm{MG}\in[2^{i}t,2^{i+1}t))\leq \sum_{i=0}^{\infty}C\exp\left(-\frac{c}{\kappa d^{2}}\left(\frac{2^{i}t}{ \underline{\alpha}^{\perp}}\right)^{\frac{d-2}{d-1}}\right)\] \[\leq C\exp\left(-\frac{c}{\kappa d^{2}}\left(\frac{t}{\underline {\alpha}^{\perp}}\right)^{\frac{d-2}{d-1}}\right).\]
For \(t<\underline{\alpha}^{\perp}2^{d}\), by Lemma 5.4 and (4.14),
\[\mathbb{P}(\mathrm{MG}\geq t) =\sum_{i=0}^{d-1}\mathbb{P}\left(\mathrm{MG}\in\left[2^{i}t,2^{i+ 1}t\right)\right)+\mathbb{P}\left(\mathrm{MG}\geq 2^{d}t\right)\] \[\leq\sum_{i=0}^{d-1}C\exp\left(-\frac{c}{\kappa\underline{\alpha }^{\perp}}2^{i}t\right)+C\exp\left(-\frac{c}{\kappa d^{2}}\left(\frac{2^{d}t}{ \underline{\alpha}^{\perp}}\right)^{\frac{d-2}{d-1}}\right)\leq C\exp\left(- \frac{ct}{\kappa\underline{\alpha}^{\perp}}\right).\ \square\]
## 6. Concentration of ground-energy differences between consecutive grainings
The goal of this section is to prove Proposition 5.1, Proposition 5.2, Lemma 5.3 and Lemma 5.4. The proofs of Lemma 5.3 and Lemma 5.4 are achieved via the following pivotal statements.
**Interface layering.** In the following lemmas, the concept of interface layering plays a significant role. By such layering, we mean the number of interface plaquettes (in a ground configuration with Dobrushin boundary conditions) lying above a given position in the base plane (i.e., having the same projection). We use the following definitions: The _parallel layering_ of a configuration \(\sigma\in\Omega^{\Lambda,\mathrm{Dob}}\) over a set \(A\subset\Lambda\subset\mathbb{Z}^{d}\) is defined as
\[\mathcal{L}_{A}^{\parallel}(\sigma):=\left|\left\{\left\{x,x+e_{d+1}\right\} \in E^{\parallel}(\mathbb{Z}^{d+1})\colon\pi\left(x\right)\in A,\,\sigma_{x} \neq\sigma_{x+e_{d+1}}\right\}\right|. \tag{6.1}\]
The _perpendicular layering_ of \(\sigma\) over \(A\) is defined as
\[\mathcal{L}_{A}^{\perp}(\sigma):=\left|\left\{\left\{x,y\right\}\in E^{\perp} (\mathbb{Z}^{d+1})\colon\pi\left(x\right)\in A,\,\pi\left(y\right)\in A,\, \sigma_{x}\neq\sigma_{y}\right\}\right|. \tag{6.2}\]
With these definitions in mind one may think of \(\mathcal{L}_{A}^{\perp}(\sigma)+\mathcal{L}_{A}^{\parallel}(\sigma)-|A|\) as the number of excess plaquettes in the interface created by the minimal energy configuration above \(A\), compared to the interface of the configuration \(\rho^{\mathrm{Dob}}\).
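For concreteness, the following Python sketch computes the parallel and perpendicular layering (6.1) and (6.2) of a configuration over a base set \(A\), under the simplifying assumption that the configuration is specified, as a dictionary of \(\pm 1\) values, on a finite box of \(\mathbb{Z}^{d+1}\) and that edges leaving the box are ignored.

```python
# Minimal sketch of the parallel and perpendicular layering (6.1)-(6.2).
# sigma : dict from vertices of Z^{d+1} (tuples (u_1,...,u_d,k)) to +-1,
#         given on a finite box; edges with an endpoint outside are ignored.

def layering(sigma, A, d):
    """Return (parallel, perpendicular) layering of sigma over the base set A."""
    A = set(A)
    par = perp = 0
    for x, sx in sigma.items():
        base, k = x[:d], x[d]
        # parallel (vertical) edge {x, x + e_{d+1}}
        up = (*base, k + 1)
        if base in A and up in sigma and sx != sigma[up]:
            par += 1
        # perpendicular (horizontal) edges, each counted once via the +e_i direction
        for i in range(d):
            nb_base = tuple(base[j] + (1 if j == i else 0) for j in range(d))
            y = (*nb_base, k)
            if base in A and nb_base in A and y in sigma and sx != sigma[y]:
                perp += 1
    return par, perp

if __name__ == "__main__":
    # d = 1: the Dobrushin ground state rho has sigma = +1 for k >= 1 and -1
    # for k <= 0, giving exactly one vertical sign change above each base vertex.
    d, heights = 1, range(-2, 3)
    rho = {(u, k): (1 if k >= 1 else -1) for u in range(-2, 3) for k in heights}
    A = [(u,) for u in range(-1, 2)]
    print(layering(rho, A, d))   # (3, 0): |A| vertical flips, no horizontal ones
```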
For \(A\subset\Lambda\subset\mathbb{Z}^{d}\) and integer \(b^{\parallel},b^{\perp}\geq 0\), define:
\[\Omega^{\Lambda,A,(b^{\parallel},b^{\perp})}:=\left\{\sigma\in\Omega^{\Lambda,\mathrm{Dob}}\colon\sum_{\begin{subarray}{c}\{x,y\}\in E^{\theta}(\mathbb{Z}^{d+1})\\ \{\pi\left(x\right),\pi\left(y\right)\}\cap A\neq\emptyset\end{subarray}}1_{\sigma_{x}\neq\sigma_{y}}\leq b^{\theta}\text{ for }\theta\in\{\parallel,\perp\}\right\},\]
as well as
\[\mathrm{GE}^{\Lambda,A,(b^{\parallel},b^{\perp})}(\eta):=\min\left\{\mathcal{H }^{\eta,\Lambda}(\sigma)\colon\sigma\in\Omega^{\Lambda,A,(b^{\parallel},b^{ \perp})}\right\}. \tag{6.3}\]
When \(\Lambda\) is fixed, we will occasionally abbreviate by omitting it. Also define
\[G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau,\tau^{\prime}):=\mathrm{GE}^{ \Lambda,\mathrm{supp}(\tau-\tau^{\prime}),(b^{\parallel},b^{\perp})}(\eta^{ \tau^{\prime}})-\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau-\tau^{\prime}),(b^{ \parallel},b^{\perp})}(\eta^{\tau}),\]
and abbreviate
\[G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau):=G^{\eta,\Lambda,(b^{\parallel},b ^{\perp})}(\tau,0)=\operatorname{GE}^{\Lambda,\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}(\eta)-\operatorname{GE}^{\Lambda,\operatorname{supp}( \tau),(b^{\parallel},b^{\perp})}(\eta^{\tau}).\]
**Concentration of ground energy differences.** First, we provide a bound on the probability of a given shift producing a large energetic gain, given some a priori bound on the "number of excessive faces in the interface" above the support of the shift.
**Lemma 6.1**.: _There exist universal \(C,c>0\) such that the following holds. Suppose the disorder distributions \(\nu^{\parallel},\nu^{\perp}\) satisfy (1.11). Then for any two shifts \(\tau,\tau^{\prime}\) and any non-negative \(b^{\parallel},b^{\perp}\) for which \(\rho^{\operatorname{Dob}}\in\Omega^{\Lambda,\operatorname{supp}(\tau-\tau^{ \prime}),(b^{\parallel},b^{\perp})}\),_
\[\mathbb{P}\left(|G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau,\tau^{\prime })|\geq t\right)\leq C\exp\left(-c\frac{t^{2}}{\operatorname{wid}(\nu^{ \parallel})^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}} \right). \tag{6.4}\]
**Layering bounds.** Lemma 6.1 provides a concentration estimate for the ground energy of a restricted set of configurations. In the following lemma, we show that at each step of the graining, the non-restricted ground energy coincides with an appropriate restricted ground energy.
**Lemma 6.2**.: _There exists a universal \(C>0\) such that the following holds. Let \(\eta\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\) for some \(\alpha^{\parallel},\alpha^{\perp}>0\). Let \(\Lambda\subset\mathbb{Z}^{d}\) be a finite subset. Let \(s>0\) such that \(\operatorname{MG}(\alpha^{\parallel},\alpha^{\perp})\leq 2s\), and let \(\tau\) be an \((\alpha^{\parallel},\alpha^{\perp})\)-admissible shift._
1. _For any_ \(\emptyset\neq I\in\operatorname{comp}(\tau)\)_,_ \[\operatorname{GE}^{\Lambda}(\#) =\operatorname{GE}^{\Lambda,\operatorname{supp}(\tau-\tau_{I}),(b ^{\parallel}_{(0,|I|)}(s),b^{\perp}_{(0,|I|)}(s))}(\#) \text{for }\ \#\in\{\eta^{\tau},\eta^{\tau_{I}}\},\] \[\operatorname{GE}^{\Lambda}(\#) =\operatorname{GE}^{\Lambda,\operatorname{supp}(\tau_{2}-\tau_{ I}),(b^{\parallel}_{(|I|,1)}(s),b^{\perp}_{(|I|,1)}(s))}(\#) \text{for }\ \#\in\{\eta^{\tau_{2}},\eta^{\tau_{I}}\},\] _where_ \[b^{\parallel}_{(0,|I|)}(s):=C\left(\frac{1}{\alpha^{\parallel}}+ \frac{1}{\alpha^{\perp}d}\right)|I|s, b^{\perp}_{(0,|I|)}(s):=\frac{C}{\alpha^{\perp}}|I|s,\] \[b^{\parallel}_{(|I|,1)}(s):=C\left(\frac{1}{\alpha^{\parallel}}+ \frac{1}{\alpha^{\perp}d}\right)ds, b^{\perp}_{(|I|,1)}(s):=\frac{C}{\alpha^{\perp}}ds.\]
2. _For any_ \(k\geq 1\)_,_ \[\operatorname{GE}^{\Lambda}(\eta^{\tau_{2^{k}}}) =\operatorname{GE}^{\Lambda,\operatorname{supp}(\tau_{2^{k}}-\tau_{2^{k+1}}),(b^{\parallel}_{k}(s),b^{\perp}_{k}(s))}(\eta^{\tau_{2^{k}}})\] \[\stackrel{{(k\geq 2)}}{{=}}\operatorname{GE}^{\Lambda,\operatorname{supp}(\tau_{2^{k-1}}-\tau_{2^{k}}),(b^{\parallel}_{k-1}(s),b^{\perp}_{k-1}(s))}(\eta^{\tau_{2^{k}}}),\] _where_ \[b^{\parallel}_{k}(s):=C\left(\frac{1}{\alpha^{\parallel}}+\frac{1}{\alpha^{\perp}d}\right)d^{2}2^{k}s, b^{\perp}_{k}(s):=\frac{C}{\alpha^{\perp}}d^{2}2^{k}s.\]
3. _For_ \(s<\alpha^{\perp}4^{d}\) _it holds that_ \[\operatorname{GE}^{\Lambda}(\#) =\operatorname{GE}^{\Lambda,\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}(\#) \text{for }\ \#\in\{\eta^{\tau},\eta\},\] _where_ \[b^{\parallel}(s):=C\left(\frac{1}{\alpha^{\parallel}}+\frac{1}{\alpha^{\perp }d}\right)s, b^{\perp}(s):=\frac{C}{\alpha^{\perp}}s.\]
**Enumeration of shifts.** Lastly, we provide a bound on the number of shifts, and of their coarse/fine grainings, whose total variation and trip entropy are bounded by given constants. This is done by the following proposition and the corollary that follows it.
**Proposition 6.3** (Counting shift functions).: _There exists \(C>0\) such that for each \(\lambda,\rho>0\),_
\[|\{\tau\in\mathcal{S}\colon\,\mathrm{TV}(\tau)\leq\lambda,\,R(\tau)\leq\rho\}|\leq\exp\left(C\min\left\{\lambda+\lambda\log\left(\frac{\rho}{\lambda}+1\right),\lambda\frac{\log d}{d}+\rho\log d\right\}\right).\]
**Corollary 6.4**.: _There exists a universal \(C>0\) such that the following holds._
1. _For each integer_ \(N\geq 2\) _and_ \(\lambda,\rho>0\)_,_ \[|\{\tau_{N}\colon\tau\in\mathcal{S},\,\mathrm{TV}(\tau)\leq \lambda,R(\tau)\leq\rho\}|\\ \leq\exp\left(C\frac{d\lambda}{N^{d-1}}\left(d\log N+\log\left( \frac{\rho}{d\lambda}+1\right)\right)\right).\] (6.5)
2. _For each integer_ \(1\leq r\leq d\)_, a mapping_ \(\tau\mapsto I_{\tau}\) _such that_ \(I_{\tau}\in\mathrm{comp}(\tau)\) _and_ \(|I_{\tau}|=r\)_, and_ \(\lambda,\rho>0\)_,_ \[|\{\tau_{I_{\tau}}\colon\tau\in\mathcal{S},\,\mathrm{TV}(\tau)\leq\lambda,\,R(\tau)\leq\rho\}|\leq\exp\left(C\frac{r\lambda}{2^{r}}\left(r+\log\left(\frac{\rho}{r\lambda}+1\right)\right)\right).\] (6.6)
Lemma 6.1 will be proved in subsection 6.4, using a concentration estimate. Lemma 6.2 will be proved in section 6.5; its proof requires establishing basic properties of grainings as well as lemmas inspired by Dobrushin's work. Proposition 6.3 and Corollary 6.4 will be proved in section 6.3 using the work of Bollobas-Balister [1] (building on Lebowitz-Mazel [12]).
### Proof of Lemmas 5.3 and 5.4
In this section we show how Lemma 6.1, Lemma 6.2, Proposition 6.3 and Corollary 6.4 imply Lemmas 5.3 and 5.4. We will continue to use the abbreviations of section 5, specifically \(G\) for \(G^{\eta,\Lambda}\), \(\mathcal{AS}\) for \(\mathcal{AS}^{\eta,\Lambda}(\underline{\alpha}^{\parallel},\underline{\alpha}^ {\perp})\) and MG for \(\mathrm{MG}^{\eta,\Lambda}(\underline{\alpha}^{\parallel},\underline{\alpha}^ {\perp})\). Throughout this section we will use \(C\) and \(c\) to denote positive absolute constants; the values of these constants will be allowed to change from line to line, even within the same calculation, with the value of \(C\) increasing and the value of \(c\) decreasing.
In the proofs of Lemmas 5.3 and 5.4 we will use the following corollary of Proposition 6.3 and Corollary 6.4.
**Corollary 6.5**.: _The following bounds hold._
1. _For every_ \(t>0\)_,_ \[|\{\tau\in\mathcal{AS}\colon|G(\tau)|\leq t\}|\leq\exp\left(C\frac{(\log d)t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right).\] (6.7)
2. _For every integer_ \(N\geq 2\) _and_ \(t>0\)_,_ \[|\{\tau_{N}\colon\tau\in\mathcal{AS},\,|G(\tau)|\leq t\}|\leq\exp\left(C\frac{d\,t}{\underline{\alpha}^{\perp}N^{d-1}}\left(d\log N+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}d^{2}}+1\right)\right)\right).\] (6.8)
3. _For every integer_ \(1\leq r\leq d\)_, a mapping_ \(\tau\mapsto I_{\tau}\) _such that_ \(I_{\tau}\in\mathrm{comp}(\tau)\) _and_ \(|I_{\tau}|=r\)_, and_ \(t>0\)_,_ \[|\{\tau_{I_{\tau}}\colon\tau\in\mathcal{AS},\,|G(\tau)|\leq t\}|\leq\exp\left(C\frac{r\,t}{\underline{\alpha}^{\perp}2^{r}}\left(r+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}dr}+1\right)\right)\right).\] (6.9)
Proof.: For every \(t>0\),
\[\{\tau\in\mathcal{AS}\colon|G(\tau)|\leq t\}\subseteq\left\{\tau\in\mathcal{S}\colon\operatorname{TV}(\tau)\leq\frac{2t}{\underline{\alpha}^{\perp}},\,R(\tau)\leq\frac{200t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right\}.\]
Hence, by Proposition 6.3,
\[|\{\tau\in\mathcal{AS}\colon|G(\tau)|\leq t\}|\leq\left|\left\{\tau\in\mathcal{S}\colon\operatorname{TV}(\tau)\leq\frac{2t}{\underline{\alpha}^{\perp}},\,R(\tau)\leq\frac{200t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right\}\right|\] \[\quad\leq\exp\left(C\min\left\{\frac{2t}{\underline{\alpha}^{\perp}}+\frac{2t}{\underline{\alpha}^{\perp}}\log\left(\frac{100\underline{\alpha}^{\perp}}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}+1\right),\frac{2t}{\underline{\alpha}^{\perp}d}\log d+\frac{200t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\log d\right\}\right)\] \[\quad\leq\exp\left(C\frac{(\log d)t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right),\]
for every integer \(N\geq 2\), by (6.5),
\[|\{\tau_{N}\colon\tau\in\mathcal{AS},\,|G(\tau)|\leq t\}| \leq\left|\left\{\tau_{N}\colon\tau\in\mathcal{S},\,\operatorname{TV}(\tau)\leq\frac{2t}{\underline{\alpha}^{\perp}},\,R(\tau)\leq\frac{200t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right\}\right|\] \[\leq\exp\left(C\frac{2d\,t}{\underline{\alpha}^{\perp}N^{d-1}}\left(d\log N+\log\left(\frac{100}{d^{2}}\left(1+\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}}\right)+1\right)\right)\right)\] \[\leq\exp\left(C\frac{d\,t}{\underline{\alpha}^{\perp}N^{d-1}}\left(d\log N+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}d^{2}}+1\right)\right)\right),\]
and for every integer \(1\leq r\leq d\) and mapping \(\tau\mapsto I_{\tau}\) such that \(I_{\tau}\in\operatorname{comp}(\tau)\) and \(|I_{\tau}|=r\), by (6.6),
\[|\{\tau_{I_{\tau}}\colon\tau\in\mathcal{AS},\,|G(\tau)|\leq t\}| \leq\left|\left\{\tau_{I_{\tau}}\colon\tau\in\mathcal{S},\,\operatorname{TV}(\tau)\leq\frac{2t}{\underline{\alpha}^{\perp}},\,R(\tau)\leq\frac{200t}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right\}\right|\] \[\leq\exp\left(C\frac{2rt}{\underline{\alpha}^{\perp}2^{r}}\left(r+\log\left(\frac{100\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}dr}+1\right)\right)\right)\] \[\leq\exp\left(C\frac{rt}{\underline{\alpha}^{\perp}2^{r}}\left(r+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}dr}+1\right)\right)\right).\qed\]
Proof of Lemma 5.3.: For the first part of the Lemma, let \(1\leq r\leq d\), \(\tau\mapsto I_{\tau}\) a map assigning to each shift \(\tau\) a compatible set \(I_{\tau}\in\operatorname{comp}(\tau)\) with \(|I_{\tau}|=r\), and \(Cr\kappa\frac{\log d}{d}\left(1+\frac{\underline{\alpha}^{\perp}}{\underline {\alpha}^{\parallel}}\right)\leq\Gamma\leq 1\). It holds that
\[\mathbb{P}\left(\{\operatorname{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau,\tau_{I_{\tau}})|>\sqrt{\Gamma}s\}\right)\] \[\quad=\mathbb{P}\left(\{\operatorname{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G^{\eta,\Lambda,(b^{\parallel}_{(0,r)}(s),b^{\perp}_{(0,r)}(s))}(\tau,\tau_{I_{\tau}})|>\sqrt{\Gamma}s\}\right)\] \[\quad\leq C\exp\left(C\frac{(\log d)s}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right)\exp\left(-c\frac{\Gamma s^{2}}{\operatorname{wid}(\nu^{\parallel})^{2}b^{\parallel}_{(0,r)}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}_{(0,r)}}\right)\] \[\quad=C\exp\left(\left(\frac{C\log d}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}-c\frac{\Gamma}{\kappa\underline{\alpha}^{\perp}r}\right)s\right),\]
with the first equality by Lemma 6.2, the first inequality by union bound, (6.7), and Lemma 6.1, and the second equality by the definition of \(b^{\parallel}_{(0,r)},b^{\perp}_{(0,r)}\) and of \(\kappa\). Now, for \(C\) sufficiently
large, if
\[\Gamma\geq Cr\kappa\frac{\log d}{d}\left(1+\frac{\underline{\alpha}^{\perp}}{ \underline{\alpha}^{\parallel}}\right)\]
then the second term in the exponent is the asymptotically dominant one and one gets
\[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau, \tau_{I_{\tau}})|>\sqrt{\Gamma}s\}\right)\leq C\exp\left(-c\frac{\Gamma s}{ \kappa\underline{\alpha}^{\perp}r}\right).\]
In an identical manner, it holds that
\[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau_{I_{\tau}},\tau_{2})|>\sqrt{\Gamma}s\}\right)\] \[=\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G^{\eta,\Lambda,(b_{(r,1)}^{\parallel}(s),b_{(r,1)}^{\perp}(s))}(\tau_{I_{\tau}},\tau_{2})|>\sqrt{\Gamma}s\}\right)\] \[\leq C\exp\left(C\frac{rs}{\underline{\alpha}^{\perp}2^{r}}\left(r+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}dr}+1\right)\right)\right)\exp\left(-c\frac{\Gamma s^{2}}{\mathrm{wid}(\nu^{\parallel})^{2}b_{(r,1)}^{\parallel}+\mathrm{wid}(\nu^{\perp})^{2}b_{(r,1)}^{\perp}}\right)\] \[=C\exp\left(\left(C\frac{r}{\underline{\alpha}^{\perp}2^{r}}\left(r+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}dr}+1\right)\right)-c\frac{\Gamma}{\kappa\underline{\alpha}^{\perp}d}\right)s\right),\]
with the first equality by Lemma 6.2, the first inequality by union bound, (6.9), and Lemma 6.1, and the second equality by the definition of \(b_{(r,1)}^{\parallel},b_{(r,1)}^{\perp}\) and of \(\kappa\). Now, for \(C\) sufficiently large, if
\[\Gamma\geq C\kappa\frac{dr}{2^{r}}\left(r+\log\left(\frac{\underline{\alpha}^{ \perp}}{\underline{\alpha}^{\parallel}dr}+1\right)\right)\]
then the second term in the exponent is the asymptotically dominant one and one gets
\[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau_{I_{\tau}},\tau_{2})|>\sqrt{\Gamma}s\}\right)\leq C\exp\left(-c\frac{\Gamma s}{\kappa\underline{\alpha}^{\perp}d}\right).\]
For the third bound, again in an identical manner
\[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau_{2^{k}},\tau_{2^{k+1}})|>\sqrt{\Gamma}s\}\right)\] \[=\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G^{\eta,\Lambda,(b_{k}^{\parallel}(s),b_{k}^{\perp}(s))}(\tau_{2^{k}},\tau_{2^{k+1}})|>\sqrt{\Gamma}s\}\right)\] \[\leq C\exp\left(C\frac{ds}{\underline{\alpha}^{\perp}2^{k(d-1)}}\left(dk+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}d^{2}}+1\right)\right)\right)\exp\left(-c\frac{\Gamma s^{2}}{\mathrm{wid}(\nu^{\parallel})^{2}b_{k}^{\parallel}+\mathrm{wid}(\nu^{\perp})^{2}b_{k}^{\perp}}\right)\] \[=C\exp\left(\left(C\frac{d}{\underline{\alpha}^{\perp}2^{k(d-1)}}\left(dk+\log\left(\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}d^{2}}+1\right)\right)-c\frac{\Gamma}{\kappa\underline{\alpha}^{\perp}d^{2}2^{k}}\right)s\right),\]
with the first equality by Lemma 6.2, the first inequality by union bound, (6.8), and Lemma 6.1, and the second equality by the definition of \(b_{k}^{\parallel},b_{k}^{\perp}\) and of \(\kappa\). For \(C>0\) sufficiently large, if
\[\Gamma\geq C\kappa\frac{d^{3}}{2^{k(d-2)}}\left(dk+\log\left(\frac{\underline{ \alpha}^{\perp}}{\underline{\alpha}^{\parallel}d^{2}}+1\right)\right)\]
then the second term in the exponent is the asymptotically dominant one and one gets
\[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau_{2^{k}},\tau_{2^{k+1}})|>\sqrt{\Gamma}s\}\right)\leq C\exp\left(-c\frac{\Gamma s}{\kappa\underline{\alpha}^{\perp}d^{2}2^{k}}\right).\]
This concludes the proof.
Proof of Lemma 5.4.: It holds that
\[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G(\tau)|>s\}\right)\] \[\quad=\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau)|>s\}\right)\] \[\quad\leq C\exp\left(C\frac{(\log d)s}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}\right)\exp\left(-c\frac{s^{2}}{\mathrm{wid}(\nu^{\parallel})^{2}b^{\parallel}+\mathrm{wid}(\nu^{\perp})^{2}b^{\perp}}\right)\] \[\quad=C\exp\left(\left(\frac{C\log d}{\min\{\underline{\alpha}^{\parallel},\underline{\alpha}^{\perp}\}d}-\frac{c}{\kappa\underline{\alpha}^{\perp}}\right)s\right),\]
with the first equality by Lemma 6.2, the first inequality by union bound, (6.7), and Lemma 6.1, and the second equality by the definition of \(b^{\parallel},b^{\perp}\) and of \(\kappa\). Since condition (1.14) guarantees that \(\kappa\left(1+\frac{\underline{\alpha}^{\perp}}{\underline{\alpha}^{\parallel}}\right)\frac{\log d}{d}\) is sufficiently small, the second term in the exponent is the asymptotically dominant one and one gets
\[\mathbb{P}\left(\{\mathrm{MG}\leq 2s\}\cap\{\exists\tau\in\mathcal{AS},|G( \tau)|>s\}\right)\leq\exp\left(-\frac{c}{\kappa\underline{\alpha}^{\perp}}s \right).\qed\]
### Basic properties of grainings of shifts
This section provides estimates on basic parameters (total variation, support size, trip-entropy, etc.) for coarse and fine grainings of shifts. These estimates will be used in the proof of several of our preliminary statements in the subsequent sections.
#### 6.2.1. Bounding the total variation and weighted difference of coarse grainings
In this section we will prove Proposition 5.2 as well as the following useful bounds.
**Proposition 6.6**.: _For every shift \(\tau\) and every positive integer \(N\),_
\[\mathrm{TV}(\tau_{N})\leq 10d\ \mathrm{TV}(\tau).\]
**Proposition 6.7**.: _For every shift \(\tau\),_
\[|\mathrm{supp}(\tau_{2}-\tau)|\leq\|\tau_{2}-\tau\|_{1}\leq 4\ \mathrm{TV}(\tau)\]
_and moreover, for every positive integer \(N\),_
\[|\mathrm{supp}(\tau_{2N}-\tau_{N})|\leq\|\tau_{2N}-\tau_{N}\|_{1}\leq(4d+9)N \ \mathrm{TV}(\tau).\]
The proofs of Propositions 5.2, 6.6 and 6.7 will rely on several lemmas.
**Lemma 6.8** (Isoperimetric inequality on \(\mathbb{Z}^{d}\)).: _Let \(A\) be a finite set of points in \(\mathbb{Z}^{d}\). Then,_
\[|\partial A|\geq 2d|A|^{1-\frac{1}{d}}. \tag{6.10}\]
_Moreover, for every \(N\times N\times\cdots\times N\)\(d\)-dimensional cube \(B\) in \(\mathbb{Z}^{d}\),_
\[|\partial A\cap(B\times B)|\geq\frac{2}{3N}\min\left\{|A\cap B|,|B\setminus A| \right\}. \tag{6.11}\]
Proof.: For any \(S\subset\mathbb{Z}^{d}\) and \(1\leq i\leq d\), let \(\pi_{i}(S)\) be the projection of \(S\) on the hyperplane spanned by \(\{e_{1},e_{2},\ldots,e_{d}\}\setminus\{e_{i}\}\).
Recall that the Loomis-Whitney inequality [10] states that for every finite set \(S\) of points in \(\mathbb{Z}^{d}\) it holds that \(\prod_{i=1}^{d}\lvert\pi_{i}(S)\rvert\geq\lvert S\rvert^{d-1}\) and hence, by the AM-GM inequality,
\[\sum_{i=1}^{d}\lvert\pi_{i}(S)\rvert\geq d\lvert S\rvert^{1-\frac{1}{d}}. \tag{6.12}\]
For every \(1\leq i\leq d\), let \(\partial_{i}A:=\{(u,v)\in\partial A\colon u-v\in\{-e_{i},e_{i}\}\}\). Obviously, \(\lvert\partial_{i}A\rvert\geq 2\lvert\pi_{i}(A)\rvert\) for every \(1\leq i\leq d\), and hence,
\[\lvert\partial A\rvert=\sum_{i=1}^{d}\lvert\partial_{i}A\rvert\geq 2\sum_{i=1}^ {d}\lvert\pi_{i}(A)\rvert\geq 2d\lvert A\rvert^{1-\frac{1}{d}}.\]
We proceed to prove (6.11). Since \(|\partial A\cap(B\times B)|=|\partial(B\setminus A)\cap(B\times B)|\), we may assume with no loss of generality that \(\lvert A\cap B\rvert\leq\frac{1}{2}\lvert B\rvert\). For every \(1\leq i\leq d\), let
\[F_{i}:=\left\{x\in\pi_{i}(A\cap B)\colon\lvert A\cap B\cap\pi_{i}^{-1}(x) \rvert=\frac{\lvert B\rvert}{\lvert\pi_{i}(B)\rvert}\right\}.\]
For every \(x\in\pi_{i}(A\cap B)\setminus F_{i}\) it holds that \(\lvert\partial_{i}A\cap(B\times B)\cap(\pi_{i}^{-1}(x)\times\pi_{i}^{-1}(x)) \rvert\geq 1\). Hence,
\[\lvert\partial_{i}A\cap(B\times B)\rvert=\sum_{x\in\pi_{i}(A\cap B)}\lvert \partial_{i}A\cap(B\times B)\cap(\pi_{i}^{-1}(x)\times\pi_{i}^{-1}(x))\rvert \geq\lvert\pi_{i}(A\cap B)\rvert-\lvert F_{i}\rvert,\]
and since
\[\lvert A\cap B\rvert\geq\sum_{x\in F_{i}}\lvert A\cap B\cap\pi_{i}^{-1}(x) \rvert=\lvert F_{i}\rvert\frac{\lvert B\rvert}{\lvert\pi_{i}(B)\rvert},\]
it follows that
\[\lvert\partial_{i}A\cap(B\times B)\rvert\geq\lvert\pi_{i}(A\cap B)\rvert- \frac{\lvert\pi_{i}(B)\rvert}{\lvert B\rvert}\lvert A\cap B\rvert.\]
Hence, by (6.12),
\[\lvert\partial A\cap(B\times B)\rvert =\sum_{i=1}^{d}\lvert\partial_{i}A\cap(B\times B)\rvert\geq\sum_{i=1}^{d}\left(\lvert\pi_{i}(A\cap B)\rvert-\frac{\lvert\pi_{i}(B)\rvert}{\lvert B\rvert}\lvert A\cap B\rvert\right)\] \[=\left(\sum_{i=1}^{d}\lvert\pi_{i}(A\cap B)\rvert\right)-\frac{d}{N}\lvert A\cap B\rvert\geq d\lvert A\cap B\rvert^{1-\frac{1}{d}}-\frac{d}{N}\lvert A\cap B\rvert\] \[=\frac{d}{N}\left(\frac{\sqrt[d]{\lvert B\rvert}}{\sqrt[d]{\lvert A\cap B\rvert}}-1\right)\lvert A\cap B\rvert\geq\frac{d(\sqrt[d]{2}-1)}{N}\lvert A\cap B\rvert,\]
and (6.11) follows since \(d(\sqrt[d]{2}-1)>\ln 2>2/3\).
**Lemma 6.9** (Functional isoperimetric inequality on \(\mathbb{Z}^{d}\)).: _For every shift function \(\tau\),_
\[\operatorname{TV}(\tau)\geq 2d\left(\sum_{u\in\mathbb{Z}^{d}}\lvert\tau(u)\rvert \right)^{1-\frac{1}{d}}.\]
Proof.: WLOG we may assume that \(\tau\) is non-negative, since \(\operatorname{TV}(\lvert\tau\rvert)\leq\operatorname{TV}(\tau)\) by the triangle inequality. Define the family of sets \(A_{k}:=\left\{u\in\mathbb{Z}^{d}\colon\tau(u)>k\right\}\). By definition, we have that
\(\operatorname{TV}(\tau)=\sum_{k\geq 0}|\partial A_{k}|\) and \(\sum_{u\in\mathbb{Z}^{d}}|\tau(u)|=\sum_{k\geq 0}|A_{k}|\) (note that in both sums only finitely many of the terms are non-zero) and so showing
\[\sum_{k\geq 0}|\partial A_{k}|\geq 2d\left(\sum_{k\geq 0}|A_{k}|\right)^{1-\frac{ 1}{d}}\]
would be sufficient. Using (6.10) for each of the \(A_{k}\)'s one gets \(\sum_{k\geq 0}|\partial A_{k}|\geq 2d\sum_{k\geq 0}|A_{k}|^{1-\frac{1}{d}}\). The result follows by setting \(\lambda_{k}:=|A_{k}|/\sum_{k\geq 0}|A_{k}|\) for every \(k\geq 0\) and noting that \(0\leq\lambda_{k}\leq 1\) for every \(k\geq 0\) and hence
\[\sum_{k\geq 0}|A_{k}|^{1-\frac{1}{d}}\bigg{/}\left(\sum_{k\geq 0}|A_{k}| \right)^{1-\frac{1}{d}}=\sum_{k\geq 0}\lambda_{k}^{1-\frac{1}{d}}\geq\sum_{k \geq 0}\lambda_{k}=1.\qed\]
Recall that for every \(u\in\mathbb{Z}^{d}\),
\[\tau_{N}(u)=\left[\tau_{N}^{\operatorname{rough}}(u)\right],\]
where by \([a]\) we denote the nearest integer to \(a\) and
\[\tau_{N}^{\operatorname{rough}}(u):=\frac{1}{N^{d}}\sum_{v\in Q_{N}(N\,w)}\tau (v),\]
where \(w\) is the unique point in \(\mathbb{Z}^{d}\) such that \(u\in Q_{N}(N\,w)\).
Proof of Proposition 5.2.: Let \(\tau\) be a shift. Observe that if \(N\) is a positive integer such that \(\sum_{u\in\mathbb{Z}^{d}}|\tau(u)|<N^{d}/2\) then necessarily \(\tau_{N}\equiv 0\). Hence, by Lemma 6.9, \(\tau_{N}\equiv 0\) if
\[\left(\frac{\operatorname{TV}(\tau)}{2d}\right)^{\frac{d}{d-1}}<\frac{1}{2}N^ {d},\]
i.e.,
\[N>\sqrt[d]{2}\left(\frac{\operatorname{TV}\left(\tau\right)}{2d}\right)^{ \frac{1}{d-1}}.\qed\]
For a shift \(\tau\colon\mathbb{Z}^{d}\to\mathbb{Z}\) and a set \(A\subset\mathbb{Z}^{d}\), define
\[\operatorname{TV}(\tau;A):=\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap\binom{A}{2} }|\tau(u)-\tau(v)|.\]
**Lemma 6.10**.: _For every \(0<\alpha<\frac{1}{2}\) and \(u\in\mathbb{Z}^{d}\) such that \(|\tau_{N}^{\operatorname{rough}}(N\,u)-\tau_{N}(N\,u)|>\alpha\), it holds that_
\[\operatorname{TV}\left(\tau;Q_{N}(N\,u)\right)\geq\frac{2\alpha}{3}N^{d-1}.\]
Proof.: For simplicity, denote \(B:=Q_{N}(N\,u)\) and let \(m:=\min_{v\in B}\tau(v)\), \(M:=\max_{v\in B}\tau(v)\). For every integer \(k\), let \(A_{k}:=\{v\in B\colon\tau(v)>k\}\). Note that \(A_{m-1}=B\) and \(A_{M}=\emptyset\). Let \(\ell:=\min\{m\leq k\leq M\colon|A_{k}|<\frac{1}{2}N^{d}\}\).
Note that \(\operatorname{TV}\left(\tau;B\right)=\sum_{k=m}^{M-1}|\partial A_{k}\cap(B \times B)|\) and hence, by (6.11),
\[\operatorname{TV}\left(\tau;B\right)\geq\frac{2}{3N}\sum_{k=m}^{M-1}\min\left\{ |A_{k}|,|B\setminus A_{k}|\right\}=\frac{2}{3N}\sum_{k=m}^{\ell-1}|B\setminus A _{k}|+\frac{2}{3N}\sum_{k=\ell}^{M-1}|A_{k}|.\]
Hence, the result follows if \(\sum_{k=m}^{\ell-1}\lvert B\setminus A_{k}\rvert\geq\alpha N^{d}\) or \(\sum_{k=\ell}^{M-1}\lvert A_{k}\rvert\geq\alpha N^{d}\). Assume by way of contradiction that \(\sum_{k=m}^{\ell-1}\lvert B\setminus A_{k}\rvert<\alpha N^{d}\) and \(\sum_{k=\ell}^{M-1}\lvert A_{k}\rvert<\alpha N^{d}\). Now, since
\[\tau_{N}^{\mathrm{rough}}(N\,u)=\frac{1}{N^{d}}\sum_{v\in B}\tau(v)=m+\sum_{k=m }^{M-1}\frac{\lvert A_{k}\rvert}{N^{d}}=\ell-\sum_{k=m}^{\ell-1}\frac{\lvert B \setminus A_{k}\rvert}{N^{d}}+\sum_{k=\ell}^{M-1}\frac{\lvert A_{k}\rvert}{N^ {d}}\]
it follows that \(\lvert\tau_{N}^{\mathrm{rough}}(N\,u)-\ell\rvert<\alpha\). In particular, \(\tau_{N}(N\,u)=\ell\) and we get a contradiction.
**Lemma 6.11**.: _For every \(\{u,v\}\in E(\mathbb{Z}^{d})\),_
\[\left\lvert\tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N\,v) \right\rvert\leq\frac{1}{N^{d-1}}\operatorname{TV}\left(\tau;Q_{N}(N\,u)\cup Q _{N}(N\,v)\right).\]
Proof.: With no loss of generality, assume that \(v=u+e_{d}\). Then,
\[\tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N\,v)=\frac{1}{N^{d }}\sum_{w\in B}\left(\sum_{i=0}^{N-1}\tau(w+ie_{d})-\sum_{i=N}^{2N-1}\tau(w+ie _{d})\right),\]
where \(B:=N\,u+\{0,1,2,\ldots,N-1\}^{d-1}\times\{0\}\). For every \(w\in B\), it holds that
\[\sum_{i=0}^{N-1}\tau(w+ie_{d})-\sum_{i=N}^{2N-1}\tau(w+ie_{d})=\sum_{i=1}^{2N- 1}\min\{i,2N-i\}\left(\tau(w+(i-1)e_{d})-\tau(w+ie_{d})\right).\]
and hence
\[\left\lvert\sum_{i=0}^{N-1}\tau(w+ie_{d})-\sum_{i=N}^{2N-1}\tau(w+ie_{d}) \right\rvert\leq N\sum_{i=1}^{2N-1}\lvert\tau(w+(i-1)e_{d})-\tau(w+ie_{d})\rvert.\]
Therefore,
\[\lvert\tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N \,v)\rvert =\frac{1}{N^{d}}\sum_{w\in B}\left\lvert\sum_{i=0}^{N-1}\tau(w+ ie_{d})-\sum_{i=N}^{2N-1}\tau(w+ie_{d})\right\rvert\] \[\leq\frac{1}{N^{d-1}}\sum_{w\in B}\sum_{i=1}^{2N-1}\lvert\tau(w+( i-1)e_{d})-\tau(w+ie_{d})\rvert\] \[\leq\operatorname{TV}\left(\tau;Q_{N}(N\,u)\cup Q_{N}(N\,v)\right).\qed\]
Proof of Proposition 6.6.: It is clearly enough to prove that for every \(\{u,v\}\in E(\mathbb{Z}^{d})\),
\[\lvert\tau_{N}(N\,u)-\tau_{N}(N\,v)\rvert\leq\frac{5}{N^{d-1}}\operatorname{ TV}\left(\tau;Q_{N}(N\,u)\cup Q_{N}(N\,v)\right).\]
If \(\tau_{N}(N\,u)=\tau_{N}(N\,v)\) there is nothing to prove. If \(\left\lvert\tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N\,v) \right\rvert\geq\frac{1}{3}\), then
\[\lvert\tau_{N}(N\,u)-\tau_{N}(N\,v)\rvert\leq\left\lvert\tau_{N}^{\mathrm{ rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N\,v)\right\rvert+1\leq 4\left\lvert \tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N\,v)\right\rvert\]
and the result follows from Lemma 6.11.
Therefore, suppose that \(\left\lvert\tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}^{\mathrm{rough}}(N\,v) \right\rvert<\frac{1}{3}\) but \(\tau_{N}(N\,u)\neq\tau_{N}(N\,v)\). Then, necessarily,
\[\max\left\{\left\lvert\tau_{N}^{\mathrm{rough}}(N\,u)-\tau_{N}(N\,u)\right\rvert,\left\lvert\tau_{N}^{\mathrm{rough}}(N\,v)-\tau_{N}(N\,v)\right\rvert\right\} >\frac{1}{3}.\]
If \(|\tau_{N}^{\text{rough}}(N\,u)-\tau_{N}(N\,u)|>\frac{1}{3}\) then by Lemma 6.10,
\[|\tau_{N}(N\,u)-\tau_{N}(N\,v)|=1\leq\frac{9}{2N^{d-1}}\,\text{TV}\left(\tau;Q_{ N}(N\,u)\right)\]
and similarly, if \(|\tau_{N}^{\text{rough}}(N\,v)-\tau_{N}(N\,v)|>\frac{1}{3}\) then by Lemma 6.10,
\[|\tau_{N}(N\,u)-\tau_{N}(N\,v)|=1\leq\frac{9}{2N^{d-1}}\,\text{TV}\left(\tau;Q _{N}(N\,v)\right).\qed\]
**Lemma 6.12**.: _Let \(w\in\mathbb{Z}^{d}\), let \(g\) be a function from \(B:=Q_{2}(2\,w)\) to \(\mathbb{R}\), and let \(\mu:=\frac{1}{2^{d}}\sum_{u\in B}g(u)\). Then,_
\[\sum_{u\in B}|g(u)-\mu|\leq\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap{B\choose 2}} |g(u)-g(v)|.\]
Proof.: We first prove by induction on \(k\) that for every \(1\leq k\leq d\) that
\[\sum_{\begin{subarray}{c}\{u,v\}\in{B\choose 2}\\ \|u-v\|_{1}=k\end{subarray}}|g(u)-g(v)|\leq{d-1\choose k-1}\sum_{\{u,v\}\in E( \mathbb{Z}^{d})\cap{B\choose 2}}|g(u)-g(v)|. \tag{6.13}\]
The base case \(k=1\) obviously holds as equality, and if (6.13) holds for some \(k\), then it follows that it holds for \(k+1\) as well, since
\[\sum_{\begin{subarray}{c}\{u,v\}\in{B\choose 2}\\ \|u-v\|_{1}=k+1\end{subarray}}|g(u)-g(v)|=\frac{1}{2}\sum_{u\in B}\sum_{ \begin{subarray}{c}v\in B\\ \|u-v\|_{1}=k+1\end{subarray}}|g(u)-g(v)|\] \[\leq \frac{1}{2}\sum_{u\in B}\sum_{\begin{subarray}{c}v\in B\\ \|u-v\|_{1}=k+1\end{subarray}}\frac{1}{k+1}\sum_{\begin{subarray}{c}w\in B\\ \|u-w\|_{1}=k,\,w\sim v\end{subarray}}\left(|g(u)-g(w)|+|g(w)-g(v)|\right)\] \[= \frac{1}{2}\sum_{u\in B}\frac{d-k}{k+1}\sum_{\begin{subarray}{c}w \in B\\ \|u-w\|_{1}=k\end{subarray}}|g(u)-g(w)|+\frac{1}{2}\sum_{v\in B}\frac{1}{k+1}{ d-1\choose k}\sum_{\begin{subarray}{c}w\in B\\ w\sim v\end{subarray}}|g(w)-g(v)|\] \[= \frac{d-k}{k+1}\sum_{\begin{subarray}{c}\{u,w\}\in{B\choose 2}\\ \|u-w\|_{1}=k\end{subarray}}|g(u)-g(w)|+\frac{1}{k+1}{d-1\choose k}\sum_{ \begin{subarray}{c}\{v,w\}\in E(\mathbb{Z}^{d})\cap{B\choose 2}\end{subarray}}|g(v)-g(w)|.\]
Now we can conclude the proof of the Lemma.
\[\sum_{u\in B}\lvert g(u)-\mu\rvert \leq\sum_{u\in B}\frac{1}{2^{d}}\sum_{v\in B}\lvert g(u)-g(v)\rvert\] \[=\frac{1}{2^{d-1}}\sum_{\{u,v\}\in\binom{B}{2}}\lvert g(u)-g(v) \rvert=\frac{1}{2^{d-1}}\sum_{k=1}^{d}\sum_{\begin{subarray}{c}\{u,v\}\in \binom{B}{2}\\ \lVert u-v\rVert_{1}=k\end{subarray}}\lvert g(u)-g(v)\rvert\] \[\leq\frac{1}{2^{d-1}}\sum_{k=1}^{d}\binom{d-1}{k-1}\sum_{\{u,v\} \in E(\mathbb{Z}^{d})\cap\binom{B}{2}}\lvert g(u)-g(v)\rvert\] \[=\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap\binom{B}{2}}\lvert g(u)-g (v)\rvert.\qed\]
Proof of Proposition 6.7.: It is clearly enough to prove that for every \(w\in\mathbb{Z}^{d}\), it holds that
\[\sum_{u\in Q_{2N}(2N\,w)}\lvert\tau_{2N}(u)-\tau_{N}(u)\rvert\] \[\leq 9N\sum_{v\in B}\operatorname{TV}\left(\tau;Q_{N}(N\,v) \right)+4N\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap\binom{B}{2}}\operatorname{TV} \left(\tau;Q_{N}(N\,u)\cup Q_{N}(N\,v)\right),\]
where \(B:=Q_{2}(2\,w)\). Let
\[A:=\left\{v\in B\colon\lvert\tau_{N}^{\operatorname{rough}}(N\,v)-\tau_{N}(N \,v)\rvert>\frac{1}{6}\right\}.\]
Note that for every \(v\in B\), if \(\tau_{N}(N\,v)=\tau_{2N}(N\,v)\) or \(\left\lvert\tau_{N}^{\operatorname{rough}}(N\,v)-\tau_{2N}^{\operatorname{ rough}}(N\,v)\right\rvert\geq\frac{1}{3}\) then \(\lvert\tau_{N}(N\,v)-\tau_{2N}(N\,v)\rvert\leq 4\left\lvert\tau_{N}^{ \operatorname{rough}}(N\,v)-\tau_{2N}^{\operatorname{rough}}(N\,v)\right\rvert\) and if \(\tau_{N}(N\,v)\neq\tau_{2N}(N\,v)\) and \(\left\lvert\tau_{N}^{\operatorname{rough}}(N\,v)-\tau_{2N}^{\operatorname{ rough}}(N\,v)\right\rvert<\frac{1}{3}\), then \(\lvert\tau_{N}(N\,v)-\tau_{2N}(N\,v)\rvert=1\) and \(v\in A\). Hence,
\[\frac{1}{N^{d}}\sum_{u\in Q_{2N}(2N\,w)}\lvert\tau_{N}(u)-\tau_{ 2N}(u)\rvert =\sum_{v\in B}\lvert\tau_{N}(N\,v)-\tau_{2N}(N\,v)\rvert\] \[\leq\lvert A\rvert+4\sum_{v\in B}\lvert\tau_{N}^{\operatorname{ rough}}(N\,v)-\tau_{2N}^{\operatorname{rough}}(N\,v)\rvert,\]
and we are done since by Lemma 6.10,
\[\lvert A\rvert\leq\frac{9}{N^{d-1}}\sum_{v\in A}\operatorname{TV}\left(\tau;Q _{N}(N\,v)\right)\]
and by Lemma 6.12 and Lemma 6.11,
\[\sum_{v\in B}\lvert\tau_{N}^{\operatorname{rough}}(N\,v)-\tau_{2N }^{\operatorname{rough}}(N\,v)\rvert \leq\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap\binom{B}{2}}\lvert \tau_{N}^{\operatorname{rough}}(N\,u)-\tau_{N}^{\operatorname{rough}}(N\,v)\rvert\] \[\leq\frac{1}{N^{d-1}}\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap \binom{B}{2}}\operatorname{TV}\left(\tau;Q_{N}(N\,u)\cup Q_{N}(N\,v)\right).\qed\]
#### 6.2.2. Bounding the total variation and weighted difference for fine grainings
In this section we prove analogous results to Propositions 6.6 and 6.7 for fine grainings, and deduce Proposition 5.1. For \(I\in\binom{[d]}{r}\), let \(T_{I}\colon\mathbb{Z}^{d}\to\mathbb{Z}^{d}\) be defined as follows: for any \(x=(x_{1},x_{2},\ldots,x_{d})\in\mathbb{Z}^{d}\) and every \(1\leq i\leq d\), the \(i\)th coordinate of \(T_{I}(x)\) is \(2x_{i}\) if \(i\in I\) and \(x_{i}\) otherwise. Recall that for every \(u\in\mathbb{Z}^{d}\),
\[\tau_{I}(u)=\left[\tau_{I}^{\text{rough}}(u)\right],\]
where by \([a]\) we denote the nearest integer to \(a\) and
\[\tau_{I}^{\text{rough}}(u):=\frac{1}{2^{r}}\sum_{v\in Q_{I}(T_{I}(w))}\tau(v),\]
where \(w\) is the unique point in \(\mathbb{Z}^{d}\) such that \(u\in Q_{I}(T_{I}(w))\).
**Lemma 6.13**.: _For every \(I\in\binom{[d]}{r}\) and \(\{u,v\}\in E(\mathbb{Z}^{d})\),_
\[\left|\tau_{I}^{\text{rough}}(T_{I}(u))-\tau_{I}^{\text{rough}}(T_{I}(v)) \right|\leq\frac{1}{2^{r-1}}\operatorname{TV}\left(\tau;Q_{I}(T_{I}(u))\cup Q _{I}(T_{I}(v))\right). \tag{6.14}\]
Proof.: If \(u-v\in\{-e_{i},e_{i}\}\) for \(i\notin I\), then (6.14) easily follows by a straightforward use of the triangle inequality; otherwise, (6.14) is simply the claim of Lemma 6.11 for \(N=2\) in the appropriate \(r\)-dimensional affine subspace of \(\mathbb{Z}^{d}\).
**Proposition 6.14**.: _Let \(I\in\binom{[d]}{r}\) be chosen uniformly at random. Then, for every \(\tau\colon\Lambda\to\mathbb{Z}\), the following holds:_
\[\operatorname{\mathbb{E}}\operatorname{TV}(\tau_{I})\leq 10(2r+1)\operatorname{ TV}(\tau)\]
Proof.: For every \(\{x,y\}\in E(\mathbb{Z}^{d})\), let \(X_{\{x,y\}}\) be the random variable defined as follows: \(X_{\{x,y\}}=2d\) if \(x-y\in\{e_{i},-e_{i}\}\) for \(i\in I\) and \(X_{\{x,y\}}=1\) otherwise. Note that for every \(\{x,y\}\in E(\mathbb{Z}^{d})\), it holds that \(\operatorname{\mathbb{E}}X_{\{x,y\}}=\frac{r}{d}\cdot 2d+\left(1-\frac{r}{d} \right)\cdot 1<2r+1\).
Note that for every \(\{x,y\}\in E(\mathbb{Z}^{d})\),
\[X_{\{x,y\}}=\left|\left\{\{u,v\}\in E(\mathbb{Z}^{d})\colon\{x,y\}\subset Q_{I}(T_{I}(u))\cup Q_{I}(T_{I}(v))\right\}\right|.\]
The same argument as in the proof of Proposition 6.6, where Lemma 6.11 is replaced by Lemma 6.13, and Lemma 6.10 is applied in an appropriate \(r\)-dimensional affine subspace of \(\mathbb{Z}^{d}\), yields that for every \(\{u,v\}\in E(\mathbb{Z}^{d})\),
\[|\tau_{I}(T_{I}(u))-\tau_{I}(T_{I}(v))|\leq\frac{5}{2^{r-1}}\operatorname{TV} \left(\tau;Q_{I}(T_{I}(u))\cup Q_{I}(T_{I}(v))\right).\]
Therefore,
\[\operatorname{TV}(\tau_{I}) \leq 2^{r}\sum_{\{u,v\}\in E(\mathbb{Z}^{d})}|\tau_{I}(T_{I}(u))-\tau_{I}(T_{I}(v))|\] \[\leq 10\sum_{\{u,v\}\in E(\mathbb{Z}^{d})}\operatorname{TV}(\tau;Q_{I}(T_{I}(u))\cup Q_{I}(T_{I}(v)))=10\sum_{\{x,y\}\in E(\mathbb{Z}^{d})}|\tau(x)-\tau(y)|X_{\{x,y\}}.\]
Hence,
\[\operatorname{\mathbb{E}}\operatorname{TV}(\tau_{I})\leq 10\sum_{\{x,y\}\in E( \mathbb{Z}^{d})}|\tau(x)-\tau(y)|\operatorname{\mathbb{E}}X_{\{x,y\}}\leq 1 0(2r+1)\operatorname{TV}(\tau).\qed\]
**Proposition 6.15**.: _Let \(I\in\binom{[d]}{r}\) be chosen uniformly at random. Then, for every \(\tau\colon\Lambda\to\mathbb{Z}\), the following holds:_
\[\mathbb{E}\|\tau_{I}-\tau\|_{1}\leq\frac{2r}{d}\operatorname{TV}(\tau).\]
Proof.: For every \(1\leq i\leq d\), let \(X_{i}\) be the random variable defined as follows: \(X_{i}=1\) if \(i\in I\) and \(X_{i}=0\) otherwise. For every \(u\in\mathbb{Z}^{d}\) it holds that \(|\tau(u)-\tau_{I}(u)|\leq 2\left|\tau(u)-\tau_{I}^{\operatorname{rough}}(u)\right|\). Hence, for every \(w\in\mathbb{Z}^{d}\), by Lemma 6.12,
\[\sum_{u\in Q_{I}(T_{I}(w))}|\tau(u)-\tau_{I}(u)| \leq 2\sum_{u\in Q_{I}(T_{I}(w))}|\tau(u)-\tau_{I}^{\operatorname{rough}}(u)|\] \[\leq 2\sum_{\{u,v\}\in E(\mathbb{Z}^{d})\cap\binom{Q_{I}(T_{I}(w))}{2}}|\tau(u)-\tau(v)|.\]
Therefore,
\[\|\tau_{I}-\tau\|_{1}\leq 2\sum_{i\in I}\sum_{\begin{subarray}{c}\{u,v\}\in E (\mathbb{Z}^{d})\\ u-v\in\{-e_{i},e_{i}\}\end{subarray}}|\tau(u)-\tau(v)|=2\sum_{i=1}^{d}\sum_{ \begin{subarray}{c}\{u,v\}\in E(\mathbb{Z}^{d})\\ u-v\in\{-e_{i},e_{i}\}\end{subarray}}|\tau(u)-\tau(v)|X_{i}\]
and hence,
\[\mathbb{E}\|\tau_{I}-\tau\|_{1} \leq 2\sum_{i=1}^{d}\sum_{\begin{subarray}{c}\{u,v\}\in E( \mathbb{Z}^{d})\\ u-v\in\{-e_{i},e_{i}\}\end{subarray}}|\tau(u)-\tau(v)|\mathbb{E}X_{i}\] \[=\frac{2r}{d}\sum_{i=1}^{d}\sum_{\begin{subarray}{c}\{u,v\}\in E (\mathbb{Z}^{d})\\ u-v\in\{-e_{i},e_{i}\}\end{subarray}}|\tau(u)-\tau(v)|=\frac{2r}{d} \operatorname{TV}(\tau).\qed\]
Proof of Proposition 5.1.: Let \(I\in\binom{[d]}{r}\) be chosen uniformly at random. By Markov's inequality and Propositions 6.14 and 6.15,
\[\mathbb{P}\left(\operatorname{TV}(\tau_{I})\geq 20(2r+1)\operatorname{TV}( \tau)+1\right)\leq\frac{\mathbb{E}\operatorname{TV}(\tau_{I})}{20(2r+1) \operatorname{TV}(\tau)+1}<\frac{1}{2}\]
and
\[\mathbb{P}\left(\|\tau_{I}-\tau\|_{1}\geq\frac{4r}{d}\operatorname{TV}(\tau) \right)\leq\frac{\mathbb{E}\|\tau_{I}-\tau\|_{1}}{\frac{4r}{d}\operatorname{ TV}(\tau)}\leq\frac{1}{2}.\]
Hence,
\[\mathbb{P}\left(\operatorname{TV}(\tau_{I})\leq 20(2r+1)\operatorname{TV}(\tau) \text{ and }\|\tau_{I}-\tau\|_{1}<\frac{4r}{d}\operatorname{TV}(\tau)\right)>0\]
and the result follows.
#### 6.2.3. Entropy bounds
**Lemma 6.16**.: _For every \(A\subset\mathbb{Z}^{d}\) and finite \(B\subseteq\partial^{\operatorname{out}}A\), there is a set \(S\subseteq B\) such that \(|S|<\frac{1}{d}|\partial A|\) and \(B\subseteq\bigcup_{a\in S}\mathcal{B}_{4}(a)\)._
Proof.: We first show that for every \(a\in\partial^{\mathrm{out}}A\),
\[\left|\partial A\cap\left(\mathbb{Z}^{d}\times\mathcal{B}_{2}(a)\right)\right|>d. \tag{6.15}\]
With no loss of generality assume that \(a+e_{d}\in A\), and let \(E:=\{-e_{i}\}_{i=1}^{d-1}\cup\{e_{i}\}_{i=1}^{d-1}\). For every \(u\in E\), denote
\[\mathcal{T}_{u}:=\left\{(a+u,a),(a+e_{d},a+u+e_{d}),(a+u+e_{d},a+u)\right\}.\]
If \(a+u\in A\) then \((a+u,a)\in\partial A\); if \(a+u+e_{d}\notin A\) then \((a+e_{d},a+u+e_{d})\in\partial A\); finally, if \(a+u\notin A\) and \(a+u+e_{d}\in A\) then \((a+u+e_{d},a+u)\in\partial A\). Hence, \(\partial A\cap\mathcal{T}_{u}\neq\emptyset\) for every \(u\in E\), and (6.15) follows since the \(2d-2>d\) sets \(\{\mathcal{T}_{u}\}_{u\in E}\) are mutually disjoint.
Now, let \(S\) be a set of maximal cardinality in \(B\) such that the sets \(\{\mathcal{B}_{2}(a)\}_{a\in S}\) are mutually disjoint. The maximality of \(S\) implies that \(B\subseteq\bigcup_{a\in S}\mathcal{B}_{4}(a)\), and by (6.15),
\[|S|<\frac{1}{d}\sum_{a\in S}\left|\partial A\cap\left(\mathbb{Z}^{d}\times \mathcal{B}_{2}(a)\right)\right|\leq\frac{1}{d}|\partial A|.\qed\]
We will say that a set \(A\subseteq\mathbb{Z}^{d}\) is \(\ell_{1}^{+}\)_-connected_ if for any two points \(a,b\in A\) there is a sequence \(a=s_{0},s_{1},\ldots,s_{n}=b\) of points in \(A\) such that \(\|s_{i-1}-s_{i}\|_{1}\leq 2\) for every \(1\leq i\leq n\).
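For instance (a simple illustration), the two-point set \(\{0,\,e_{1}+e_{2}\}\subset\mathbb{Z}^{d}\) is \(\ell_{1}^{+}\)-connected even though its two points are not neighbours in \(\mathbb{Z}^{d}\); thus \(\ell_{1}^{+}\)-connectivity is strictly weaker than the usual graph connectivity, and it is exactly the form of connectivity enjoyed by the visible boundaries appearing in Lemma 6.19 below.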
**Lemma 6.17**.: _Let \(A\subset\mathbb{Z}^{d}\) be an \(\ell_{1}^{+}\)-connected finite set, and assume that there is a set \(S\subseteq A\) such that \(A\subseteq\bigcup_{a\in S}\mathcal{B}_{4}(a)\). Then,_
\[\mathrm{diam}(A)<10|S| \tag{6.16}\]
_Moreover, there is an ordering \(a_{1},a_{2},\ldots,a_{|S|}\) of \(S\) such that, denoting \(a_{|S|+1}:=a_{1}\),_
\[\sum_{i=1}^{|S|}\|a_{i}-a_{i+1}\|_{1}<20|S|. \tag{6.17}\]
_Consequently, for every finite \(\Omega\subset\mathbb{Z}^{d}\), there is an ordering \(\omega_{1},\omega_{2},\ldots,\omega_{|\Omega|}\) of \(\Omega\) such that, denoting \(\omega_{|\Omega|+1}:=\omega_{1}\),_
\[\sum_{i=1}^{|\Omega|}\|\omega_{i}-\omega_{i+1}\|_{1}<20|S|+8|\Omega|+2\sum_{ \omega\in\Omega}\mathrm{dist}(\omega,A). \tag{6.18}\]
Proof.: Consider the complete graph \(K\) on the vertex set \(S\), and its spanning subgraph \(G\) in which \(a,b\in S\) are adjacent if there are \(u\in\mathcal{B}_{4}(a)\), \(v\in\mathcal{B}_{4}(b)\) such that \(\|u-v\|_{1}\leq 2\). For any edge \(e=\{a,b\}\) of \(K\), denote \(\|e\|:=\|a-b\|_{1}\). Note that \(\|e\|\leq 10\) for every edge \(e\) of \(G\). Since \(A\) is \(\ell_{1}^{+}\)-connected, it follows that the graph \(G\) is connected. Let \(\mathcal{T}\) be a spanning tree of \(G\).
To prove (6.16), we need to show that \(\|a-\tilde{a}\|_{1}<10|S|\) for every \(a,\tilde{a}\in A\). There are \(s,\tilde{s}\in S\) such that \(a\in\mathcal{B}_{4}(s)\) and \(\tilde{a}\in\mathcal{B}_{4}(\tilde{s})\). Let \(s=s_{0},s_{1},\ldots,s_{k}=\tilde{s}\) be the unique path from \(s\) to \(\tilde{s}\) in \(\mathcal{T}\). Then,
\[\|a-\tilde{a}\|_{1}\leq\|a-s\|_{1}+\sum_{i=1}^{k}\|s_{i-1}-s_{i}\|_{1}+\| \tilde{s}-\tilde{a}\|_{1}\leq 4+10k+4<10(k+1)\leq 10|S|.\]
Using the structure of the tree, we may arrange the edges of \(\mathcal{T}\), each taken in both directions, to create a directed cycle \(\mathcal{C}_{0}\) that goes through all the vertices. Let \(\mathcal{C}_{1}\) be the simple cycle in
\(K\) obtained from \(\mathcal{C}_{0}\) by omitting multiple occurrences of vertices. Then, by using the triangle inequality,
\[\sum_{e\in E(\mathcal{C}_{1})}\|e\|\leq\sum_{e\in E(\mathcal{C}_{0})}\|e\|=2\sum_{ e\in E(\mathcal{T})}\|e\|\leq 20|E(\mathcal{T})|=20(|S|-1)\]
which proves (6.17).
Finally, let \(\Omega\subset\mathbb{Z}^{d}\) be a finite set. Let \(a_{1},a_{2},\ldots,a_{|S|}\) be an ordering of \(S\) such that (denoting \(a_{|S|+1}:=a_{1}\)) \(\sum_{i=1}^{|S|}\|a_{i}-a_{i+1}\|_{1}<20|S|\). For every \(\omega\in\Omega\), there is \(1\leq n(\omega)\leq|S|\) such that \(\|\omega-a_{n(\omega)}\|_{1}\leq 4+\operatorname{dist}(\omega,A)\). Let \(\omega_{1},\omega_{2},\ldots,\omega_{|\Omega|}\) be an ordering of \(\Omega\) such that \(n(\omega_{j})\leq n(\omega_{j+1})\) for every \(1\leq j<|\Omega|\) (it is easy to see that such orderings exist), and denote \(\omega_{|\Omega|+1}:=\omega_{1}\). Then, for every \(1\leq j\leq|\Omega|\),
\[\|\omega_{j}-\omega_{j+1}\|_{1} \leq\|\omega_{j}-a_{n(\omega_{j})}\|_{1}+\sum_{i=n(\omega_{j})}^{ n(\omega_{j+1})-1}\|a_{i}-a_{i+1}\|_{1}+\|a_{n(\omega_{j+1})}-\omega_{j+1}\|_{1}\] \[\leq\sum_{i=n(\omega_{j})}^{n(\omega_{j+1})-1}\|a_{i}-a_{i+1}\|_{1 }+8+\operatorname{dist}(\omega_{j},A)+\operatorname{dist}(\omega_{j+1},A),\]
where, for \(j=|\Omega|\), the sum \(\sum_{i=n(\omega_{|\Omega|})}^{n(\omega_{1})-1}\|a_{i}-a_{i+1}\|_{1}\) should be interpreted as \(\sum_{i=n(\omega_{|\Omega|})}^{|S|}\|a_{i}-a_{i+1}\|_{1}+\sum_{i=1}^{n(\omega_ {1})-1}\|a_{i}-a_{i+1}\|_{1}\). Hence,
\[\sum_{j=1}^{|\Omega|}\|\omega_{j}-\omega_{j+1}\|_{1} \leq\sum_{i=1}^{|S|}\|a_{i}-a_{i+1}\|_{1}+8|\Omega|+2\sum_{\omega \in\Omega}\operatorname{dist}(\omega,A)\] \[<20|S|+8|\Omega|+2\sum_{\omega\in\Omega}\operatorname{dist}(\omega,A).\qed\]
Following Timar [113], we define, for a set \(A\subseteq\mathbb{Z}^{d}\) and \(v\in\mathbb{Z}^{d}\cup\{\infty\}\), the outer vertex boundary of \(A\) visible from \(v\):
\[\partial_{\operatorname{vis}(v)}A:=\left\{u\in\partial^{\operatorname{out}}A \colon\text{there exists a path from $u$ to $v$ not intersecting $A$}\right\}. \tag{6.19}\]
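For a simple illustration of this definition: in \(\mathbb{Z}^{2}\), take \(A=\{u\colon\|u\|_{\infty}=1\}\), the vertex boundary of the box \(\{-1,0,1\}^{2}\). The origin belongs to \(\partial^{\operatorname{out}}A\) but not to \(\partial_{\operatorname{vis}(\infty)}A\), since every path from the origin to \(\infty\) must cross \(A\), whereas every other point of \(\partial^{\operatorname{out}}A\) can be joined to \(\infty\) by a path avoiding \(A\), so that \(\partial_{\operatorname{vis}(\infty)}A=\partial^{\operatorname{out}}A\setminus\{0\}\).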
**Observation 6.18**.: _For every bounded \(A\subseteq\mathbb{Z}^{d}\) and every \(u\in A\) it holds that_
\[\operatorname{dist}(u,\partial_{\operatorname{vis}(\infty)}A)<\operatorname{ diam}(\partial_{\operatorname{vis}(\infty)}A).\]
Proof.: There is \(w\in\partial_{\operatorname{vis}(\infty)}A\) such that \(\operatorname{dist}(u,\partial_{\operatorname{vis}(\infty)}A)=\|u-w\|_{1}\). With no loss of generality we may assume that the first coordinate of \(u-w\) is non-negative. Let \(n_{0}:=\max\{n\in\mathbb{Z}\colon u+ne_{1}\in A\}+1\). Obviously, \(u+n_{0}e_{1}\in\partial_{\operatorname{vis}(\infty)}A\). Therefore,
\[\operatorname{dist}(u,\partial_{\operatorname{vis}(\infty)}A)=\|u-w\|_{1}<\|( u+n_{0}e_{1})-w\|_{1}\leq\operatorname{diam}(\partial_{\operatorname{vis}( \infty)}A).\qed\]
The following lemma is a special case of [113, Theorem 3].
**Lemma 6.19**.: _For every connected \(A\subseteq\mathbb{Z}^{d}\) and every \(v\in\mathbb{Z}^{d}\cup\{\infty\}\), the set \(\partial_{\operatorname{vis}(v)}A\) is \(\ell_{1}^{+}\)-connected._
**Observation 6.20**.: _The number of level components of \(\tau_{N}\) satisfies the following bound_
\[|\mathcal{LC}(\tau_{N})|\leq\frac{\operatorname{TV}(\tau_{N})}{dN^{d-1}}\]
_and similarly, for the number of level components of \(\tau_{I}\),_
\[|\mathcal{LC}(\tau_{I})|\leq\frac{\operatorname{TV}(\tau_{I})}{d\,2^{|I|-1}}.\]
Proof.: For every level component \(A\) of \(\tau_{N}\) it holds, by (6.10), that \(|\partial A|\geq 2d|A|^{1-\frac{1}{d}}\geq 2d\,N^{d-1}\). Hence,
\[|\mathcal{LC}(\tau_{N})|\leq\frac{1}{2d\,N^{d-1}}\sum_{A\in\mathcal{LC}(\tau_ {N})}|\partial A|\leq\frac{1}{2d\,N^{d-1}}2\,\operatorname{TV}(\tau_{N}).\]
Similarly, for every level component \(A\) of \(\tau_{I}\) it holds, by (6.10), that \(|\partial A|\geq 2d|A|^{1-\frac{1}{d}}\geq 2d\,2^{|I|-\frac{|I|}{d}}\geq 2d\,2^{|I|-1}\). Hence,
\[|\mathcal{LC}(\tau_{I})|\leq\frac{1}{2d\,2^{|I|-1}}\sum_{A\in\mathcal{LC}(\tau_{I})}|\partial A|\leq\frac{1}{2d\,2^{|I|-1}}2\,\operatorname{TV}(\tau_{I}).\qed\]
**Proposition 6.21**.: _Let \(\tau,\tilde{\tau}\) be two shifts and let \(r\) be a positive integer such that for every level component \(\tilde{A}\in\mathcal{LC}(\tilde{\tau})\) of \(\tilde{\tau}\), there exists a level component \(A\in\mathcal{LC}(\tau)\) of \(\tau\) such that \(\operatorname{dist}(\tilde{A},\partial_{\operatorname{vis}(\infty)}A)\leq r\). Then,_
\[R(\tilde{\tau})\leq R(\tau)+\frac{88}{d}\operatorname{TV}(\tau)+(2r+8)| \mathcal{LC}(\tilde{\tau})|.\]
Proof.: For simplicity, denote \(N:=|\mathcal{LC}(\tau)|\). Let \((u_{i})_{i=0}^{N-1}\) be a sequence of points in \(\mathbb{Z}^{d}\) such that \(u_{0}=0\), \(\sum_{i=1}^{N-1}\lVert u_{i-1}-u_{i}\rVert_{1}=R(\tau)\) and each \(u_{i}\) is in a different level component of \(\tau\), which we denote \(A_{i}\). For every \(\tilde{A}\in\mathcal{LC}(\tilde{\tau})\) there are \(0\leq i(\tilde{A})\leq N-1\) and \(\omega(\tilde{A})\in\tilde{A}\) such that \(\operatorname{dist}(\omega(\tilde{A}),\partial_{\operatorname{vis}(\infty)}A_{ i(\tilde{A})})\leq r\). For every \(0\leq i\leq N-1\), let
\[\Omega_{i}:=\{u_{i}\}\cup\{\omega(\tilde{A})\colon\tilde{A}\in\mathcal{LC}( \tilde{\tau}),\,i(\tilde{A})=i\}.\]
By Lemma 6.16, there is a set \(S_{i}\subseteq\partial_{\operatorname{vis}(\infty)}A_{i}\) such that \(|S_{i}|<\frac{1}{d}|\partial A_{i}|\) and \(\partial_{\operatorname{vis}(\infty)}A_{i}\subseteq\bigcup_{a\in S_{i}} \mathcal{B}_{4}(a)\). By Observation 6.18 and (6.16),
\[\operatorname{dist}(u_{i},\partial_{\operatorname{vis}(\infty)}A_{i})<\operatorname{diam}(\partial_{\operatorname{vis}(\infty)}A_{i})<10|S_{i}|<\frac{10}{d}|\partial A_{i}|.\]
The set \(\partial_{\operatorname{vis}(\infty)}A_{i}\) is \(\ell_{1}^{+}\)-connected, by Lemma 6.19. Hence, by (6.18), there is an ordering \(\omega_{1}^{(i)},\omega_{2}^{(i)},\ldots,\omega_{|\Omega_{i}|}^{(i)}\) of \(\Omega_{i}\) such that, denoting \(\omega_{|\Omega_{i}|+1}^{(i)}:=\omega_{1}^{(i)}\),
\[\sum_{j=1}^{|\Omega_{i}|}\lVert\omega_{j}^{(i)}-\omega_{j+1}^{(i) }\rVert_{1}< 20|S_{i}|+8|\Omega_{i}|+2\sum_{\omega\in\Omega_{i}} \operatorname{dist}(\omega,\partial_{\operatorname{vis}(\infty)}A_{i})\] \[< \frac{20}{d}|\partial A_{i}|+8|\Omega_{i}|+\frac{20}{d}|\partial A _{i}|+2(|\Omega_{i}|-1)r\] \[= \frac{40}{d}|\partial A_{i}|+(2r+8)(|\Omega_{i}|-1)+8.\]
With no loss of generality we may assume that \(\omega_{1}^{(i)}=u_{i}\). Hence, for every \(1\leq i\leq N-1\),
\[\lVert\omega_{|\Omega_{i-1}|}^{(i-1)}-\omega_{2}^{(i)}\rVert_{1} \leq\lVert\omega_{|\Omega_{i-1}|}^{(i-1)}-\omega_{1}^{(i-1)} \rVert_{1}+\lVert\omega_{1}^{(i-1)}-\omega_{1}^{(i)}\rVert_{1}+\lVert\omega_{1 }^{(i)}-\omega_{2}^{(i)}\rVert_{1}\] \[=\lVert\omega_{|\Omega_{i-1}|}^{(i-1)}-\omega_{1}^{(i-1)}\rVert_{1 }+\lVert u_{i-1}-u_{i}\rVert_{1}+\lVert\omega_{1}^{(i)}-\omega_{2}^{(i)}\rVert _{1}.\]
Therefore, considering the sequence
\[0=\omega_{1}^{(0)},\omega_{2}^{(0)},\ldots,\omega_{|\Omega_{0}|}^{(0)},\omega_{2}^ {(1)},\omega_{3}^{(1)},\ldots,\omega_{|\Omega_{1}|}^{(1)},\omega_{2}^{(2)},\omega_ {3}^{(2)},\ldots,\omega_{|\Omega_{N-1}|}^{(N-1)},\]
we conclude that
\[R(\tilde{\tau})\leq\sum_{j=1}^{|\Omega_{0}|-1}\|\omega_{j}^{(0)}-\omega_{j+1}^{(0)}\|_{1}+\sum_{i=1}^{N-1}\left(\|\omega_{|\Omega_{i-1}|}^{(i-1)}-\omega_{2}^{(i)}\|_{1}+\sum_{j=2}^{|\Omega_{i}|-1}\|\omega_{j}^{(i)}-\omega_{j+1}^{(i)}\|_{1}\right)\] \[\leq\sum_{j=1}^{|\Omega_{0}|-1}\|\omega_{j}^{(0)}-\omega_{j+1}^{(0)}\|_{1}+\sum_{i=1}^{N-1}\left(\|\omega_{|\Omega_{i-1}|}^{(i-1)}-\omega_{1}^{(i-1)}\|_{1}+\|u_{i-1}-u_{i}\|_{1}+\sum_{j=1}^{|\Omega_{i}|-1}\|\omega_{j}^{(i)}-\omega_{j+1}^{(i)}\|_{1}\right)\] \[=\sum_{i=0}^{N-1}\sum_{j=1}^{|\Omega_{i}|}\|\omega_{j}^{(i)}-\omega_{j+1}^{(i)}\|_{1}-\|\omega_{|\Omega_{N-1}|}^{(N-1)}-\omega_{1}^{(N-1)}\|_{1}+R(\tau)\] \[<\sum_{i=0}^{N-1}\left(\frac{40}{d}|\partial A_{i}|+(2r+8)(|\Omega_{i}|-1)+8\right)+R(\tau)\] \[=R(\tau)+\frac{40}{d}\sum_{i=0}^{N-1}|\partial A_{i}|+(2r+8)\sum_{i=0}^{N-1}(|\Omega_{i}|-1)+8N,\]

and the result follows since \(\sum_{i=0}^{N-1}|\partial A_{i}|\leq 2\operatorname{TV}(\tau)\), \(\sum_{i=0}^{N-1}(|\Omega_{i}|-1)=|\mathcal{LC}(\tilde{\tau})|\) and, by the argument of Observation 6.20 (every level component \(A\) of \(\tau\) satisfies \(|\partial A|\geq 2d\) by (6.10)), \(N\leq\operatorname{TV}(\tau)/d\).
**Lemma 6.22**.: _There is a universal constant \(C>0\) such that for every shift \(\tau\) and for every integer \(N\geq 2\),_
\[R(\tau_{N})\leq R(\tau)+\frac{C}{d}\operatorname{TV}(\tau), \tag{6.20}\]
_and for every \(I\in\operatorname{comp}(\tau)\),_
\[R(\tau_{I})\leq R(\tau)+\frac{C}{d}\operatorname{TV}(\tau). \tag{6.21}\]
Proof.: Let \(\tilde{A}\) be a level component of \(\tau_{N}\). By the definition of \(\tau_{N}\), there are necessarily \(u_{1}\sim u_{2}\), both at distance at most \(N\) from \(\tilde{A}\), such that \(u_{1}\in A_{1}\) and \(u_{2}\in A_{2}\), where \(A_{1},A_{2}\) are distinct level components of \(\tau\). It is easy to see that for every two disjoint connected sets \(A_{1},A_{2}\subseteq\mathbb{Z}^{d}\), it holds that
\[E(\mathbb{Z}^{d})\cap(A_{1}\times A_{2})\subseteq\left(A_{1}\times\partial_{ \operatorname{vis}(\infty)}(A_{1})\right)\cup\left(\partial_{\operatorname{ vis}(\infty)}(A_{2})\times A_{2}\right). \tag{6.22}\]
It follows that for every level component \(\tilde{A}\) of \(\tau_{N}\), there is a level component \(A\) of \(\tau\) such that \(\operatorname{dist}(\tilde{A},\partial_{\operatorname{vis}(\infty)}A)\leq N\). Hence, by Proposition 6.21,
\[R(\tau_{N})\leq R(\tau)+\frac{88}{d}\operatorname{TV}(\tau)+(2N+8)|\mathcal{ LC}(\tau_{N})|,\]
and (6.20) follows, since by Observation 6.20 and Proposition 6.6,
\[|\mathcal{LC}(\tau_{N})|\leq\frac{1}{dN^{d-1}}\operatorname{TV}(\tau_{N})\leq \frac{10}{N^{d-1}}\operatorname{TV}(\tau).\]
Similarly, if \(I\subseteq[d]\), then for every level component \(\tilde{A}\) of \(\tau_{I}\), there is a level component \(A\) of \(\tau\) such that \(\operatorname{dist}(\tilde{A},\partial_{\operatorname{vis}(\infty)}A)\leq 2\). Hence, by Proposition 6.21,
\[R(\tau_{I})\leq R(\tau)+\frac{88}{d}\operatorname{TV}(\tau)+12|\mathcal{LC}( \tau_{I})|,\]
and (6.21) follows, for \(I\in\operatorname{comp}(\tau)\), since then, by Observation 6.20,
\[|\mathcal{LC}(\tau_{I})|\leq\frac{1}{d2^{|I|-1}}\mathrm{TV}(\tau_{I})\leq \frac{20(2|I|+1)}{d\,2^{|I|-1}}\mathrm{TV}(\tau).\qed\]
The following simple observation follows directly from the definition of the total variation and the triangle inequality:
**Observation 6.23**.: _For any two shifts \(\tau,\tau^{\prime}\) the following holds:_
\[\mathrm{TV}(\tau+\tau^{\prime})\leq\mathrm{TV}(\tau)+\mathrm{TV}(\tau^{ \prime}).\]
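Indeed (a one-line justification, using that \(\operatorname{TV}(\tau)=\sum_{\{x,y\}\in E(\mathbb{Z}^{d})}|\tau(x)-\tau(y)|\), as in the proofs of Propositions 6.14 and 6.15), the bound is the triangle inequality applied edge by edge:

\[\operatorname{TV}(\tau+\tau^{\prime})=\sum_{\{x,y\}\in E(\mathbb{Z}^{d})}|(\tau+\tau^{\prime})(x)-(\tau+\tau^{\prime})(y)|\leq\sum_{\{x,y\}\in E(\mathbb{Z}^{d})}\big(|\tau(x)-\tau(y)|+|\tau^{\prime}(x)-\tau^{\prime}(y)|\big)=\operatorname{TV}(\tau)+\operatorname{TV}(\tau^{\prime}).\]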
**Lemma 6.24**.: _There is a universal constant \(c\) such that for any two shifts \(\tau,\tau^{\prime}\),_
\[R\left(\tau+\tau^{\prime}\right) \leq 2R\left(\tau\right)+R\left(\tau^{\prime}\right)+\frac{88}{d} \left(\mathrm{TV}\left(\tau\right)+\mathrm{TV}\left(\tau^{\prime}\right) \right)+\frac{10}{d}\operatorname{TV}(\tau+\tau^{\prime})\] \[\leq 2R\left(\tau\right)+R\left(\tau^{\prime}\right)+\frac{98}{d} \left(\mathrm{TV}\left(\tau\right)+\mathrm{TV}\left(\tau^{\prime}\right) \right),\]
_where the second inequality follows by Observation 6.23._
Proof.: Suppose that \(u_{1}\sim u_{2}\) belong to different level components of \(\tau+\tau^{\prime}\). Then, \(u_{1}\in A_{1}\) and \(u_{2}\in A_{2}\), where \(A_{1}\) and \(A_{2}\) are distinct level components of the same function in \(\{\tau,\tau^{\prime}\}\). By (6.22), \(u_{1}\in\partial_{\operatorname{vis}(\infty)}(A_{2})\) or \(u_{2}\in\partial_{\operatorname{vis}(\infty)}(A_{1})\). It follows that for every \(\tilde{A}\in\mathcal{LC}(\tau+\tau^{\prime})\), there is \(A\in\mathcal{LC}(\tau)\cup\mathcal{LC}(\tau^{\prime})\) such that \(\operatorname{dist}(\tilde{A},\partial_{\operatorname{vis}(\infty)}A)\leq 1\). Then, a similar argument to that of the proof of Proposition 6.21 yields that
\[R(\tau+\tau^{\prime})\leq 2R(\tau)+R(\tau^{\prime})+\frac{88}{d}\left( \mathrm{TV}(\tau)+\mathrm{TV}(\tau^{\prime})\right)+10|\mathcal{LC}(\tau+\tau ^{\prime})|\]
and the result follows since \(|\mathcal{LC}(\tau+\tau^{\prime})|\leq\frac{1}{d}\mathrm{TV}(\tau+\tau^{ \prime})\), by Observation 6.20.
### Enumeration of shifts
The goal of this section is to prove Proposition 6.3 and Corollary 6.4. Before proving Proposition 6.3, we first show how it easily implies Corollary 6.4. Intuitively, it is clear that the number of possible grainings of a shift of bounded complexity decreases significantly as the scale of the graining grows. Corollary 6.4 quantifies this simple statement and is a direct consequence of the previously obtained total variation and trip entropy bounds for coarse and fine grainings, a simple scaling argument, and Proposition 6.3, which bounds the number of general shifts with limited total variation and trip entropy.
Proof of Corollary 6.4.: To show the first bound, let \(\mathcal{S}_{N}\) be the set of shifts which are constant in each set of the partition \(\mathcal{P}_{N}\). Then, by Proposition 6.6 and (6.20) there is a universal constant \(C>0\) such that
\[\left\{\tau_{N}\colon\tau\in\mathcal{S},\,\mathrm{TV}(\tau)\leq\lambda,\,R( \tau)\leq\rho\right\}\subseteq\left\{\tau\in\mathcal{S}_{N}\colon\,\mathrm{ TV}(\tau)\leq 10d\lambda,\,R(\tau)\leq\rho+\frac{C}{d}\lambda\right\}.\]
Denote by \(\mu_{N}:\mathbb{Z}^{d}\to\mathbb{Z}^{d}\) the multiplication by \(N\). The mapping \(\tau\mapsto\tau\circ\mu_{N}\) is obviously a bijection of \(\mathcal{S}_{N}\) onto \(\mathcal{S}\), and moreover, for every \(\tau\in\mathcal{S}_{N}\), clearly \(\mathrm{TV}(\tau\circ\mu_{N})\leq\frac{1}{N^{d-1}}\,\mathrm{TV}(\tau)\)
and \(R(\tau\circ\mu_{N})\leq R(\tau)\). Hence,
\[|\{\tau_{N}\colon\tau\in\mathcal{S},\,\mathrm{TV}(\tau)\leq\lambda,\,R( \tau)\leq\rho\}| \leq\left|\left\{\tau\in\mathcal{S}_{N}\colon\,\mathrm{TV}(\tau) \leq 10d\lambda,\,R(\tau)\leq\rho+\frac{C}{d}\lambda\right\}\right|\] \[\leq\left|\left\{\tau\in\mathcal{S}\colon\,\mathrm{TV}(\tau) \leq\frac{10d\lambda}{N^{d-1}},\,R(\tau)\leq\rho+\frac{C}{d}\lambda\right\} \right|.\]
The bound (6.5) now follows directly from Proposition 6.3. The proof of (6.6) is similar.
Proof of Proposition 6.3.: Fix a shift \(\tau\). Let \(J(\tau)\) be the number of level components of \(\tau\), let \((v_{j}(\tau))_{j=1}^{J(\tau)}\) be a sequence of points in \(\mathbb{Z}^{d}\) such that \(v_{1}(\tau)=0\), \(\sum_{j=1}^{J(\tau)-1}\lVert v_{j}(\tau)-v_{j+1}(\tau)\rVert_{1}=R(\tau)\) and there is a unique element of \(\{v_{j}(\tau)\}_{j=1}^{J(\tau)}\) in each level component of \(\tau\). For every \(1\leq j\leq J(\tau)\), let \(L_{j}(\tau)\) be the level component of \(\tau\) containing \(v_{j}(\tau)\). Let \(j_{0}(\tau)\) be the index of the unique unbounded level component of \(\tau\).
Define a partial order \(\leq_{\tau}\) on the set \([J(\tau)]\) as follows: \(i\leq_{\tau}j\) if every path from \(L_{i}\) to \(\infty\) necessarily intersects \(L_{j}\). For every \(j\in[J(\tau)]\), let \(U_{j}(\tau):=\bigcup_{i\leq_{\tau}j}L_{i}(\tau)\). Clearly, \(U_{j_{0}(\tau)}(\tau)=\mathbb{Z}^{d}\).
Let \(\mathcal{G}(\tau)\) be the graph on the vertex set \([J(\tau)]\) in which \(i\neq j\) are adjacent if there are neighbouring \(u\in L_{i}\) and \(v\in L_{j}\). Define a rooted spanning tree \(\mathcal{T}(\tau)\) of \(\mathcal{G}(\tau)\) in the following inductive manner. Set \(V_{0}:=\{j_{0}(\tau)\}\), \(\tilde{V}_{0}:=\{j_{0}(\tau)\}\) and \(E_{0}:=\emptyset\), and for every \(1\leq r\leq J(\tau)\), let \(i_{r}:=\min\tilde{V}_{r-1}\) and set \(V_{r}:=V_{r-1}\cup\{j\in[J(\tau)]\setminus V_{r-1}:j\text{ is adjacent to }i_{r}\text{ in } \mathcal{G}(\tau)\}\), \(\tilde{V}_{r}:=(\tilde{V}_{r-1}\setminus\{i_{r}\})\cup(V_{r}\setminus V_{r-1})\) and \(E_{r}:=E_{r-1}\cup\{(i_{r},j):j\in V_{r}\setminus V_{r-1}\}\). Finally, let \(\mathcal{T}(\tau)\) be the tree on the vertex set \(V_{J(\tau)}=[J(\tau)]\) whose set of (directed) edges is \(E_{J(\tau)}\). For every directed edge \(e=(i,j)\) of \(\mathcal{T}(\tau)\), let \(s_{e}(\tau):=\tau(L_{i})-\tau(L_{j})\).
Clearly, \(\sum_{j=1}^{J(\tau)}\lvert\partial L_{j}(\tau)\rvert\leq 2\mathrm{TV}(\tau)\) and by (6.10), \(\lvert\partial L_{j}(\tau)\rvert\geq 2d\) for every \(1\leq j\leq J(\tau)\). Consequently, \(J(\tau)\leq\frac{1}{2d}\mathrm{TV}(\tau)\). Moreover, clearly \(J(\tau)\leq 1+R(\tau)\). For every positive integer \(J\leq\min\{\frac{\lambda}{2d},1+\rho\}\), let
\[\tilde{\mathcal{S}}_{J}:=\{\tau\in\mathcal{S}\colon\mathrm{TV}(\tau)\leq \lambda,\,R(\tau)\leq\rho,\,J(\tau)=J\}.\]
The map
\[\chi\colon\tau\mapsto\big{(}J(\tau),j_{0}(\tau),(U_{j}(\tau))_{j_{0}(\tau)\neq j \in[J(\tau)]},(s_{e}(\tau))_{e\in E(\mathcal{T}(\tau))}\big{)}\]
is clearly injective, hence
\[|\{\tau\in\mathcal{S}\colon\mathrm{TV}(\tau)\leq\lambda,\,R(\tau)\leq\rho\}| =\sum_{J\leq\frac{\lambda}{2d}}|\tilde{\mathcal{S}}_{J}|\leq\sum_{J\leq\frac{ \lambda}{2d}}|\chi(\tilde{\mathcal{S}}_{J})|. \tag{6.23}\]
In the estimates below we will use the following bounds several times. First, for all positive integers \(k\) and \(n\),
\[\binom{n}{k}\leq\frac{n^{k}}{k!}<\left(\frac{en}{k}\right)^{k}<\left(\frac{3n} {k}\right)^{k}. \tag{6.24}\]
For all positive integers \(k\) and \(m\), every sequence \((a_{i})_{i=1}^{k}\) with \(\sum_{i=1}^{k}\lvert a_{i}\rvert\leq m\) has at most \(\min\{k,m\}\) non-zero terms, and hence
\[\lvert\{(a_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\colon\,\sum_{i=1}^{k}\lvert a_{i}\rvert\leq m\}\rvert \leq 2^{\min\{k,m\}}\lvert\{(p_{i})_{i=1}^{k}\in(\mathbb{Z}\cap[0,\infty))^{k}\colon\,\sum_{i=1}^{k}p_{i}\leq m\}\rvert\] \[=2^{\min\{k,m\}}\binom{m+k}{k}=2^{\min\{k,m\}}\binom{m+k}{m}.\]
Therefore, for all positive integers \(k\) and \(m\) and real \(\alpha\geq k\), by (6.24),
\[\lvert\{(a_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\colon\,\sum_{i=1}^{k} \lvert a_{i}\rvert\leq m\}\rvert\leq 2^{k}\binom{m+k}{k}\leq\left(6\left( \frac{m}{k}+1\right)\right)^{k}\leq\left(6\left(\frac{m}{\alpha}+1\right) \right)^{\alpha}, \tag{6.25}\]
where the last inequality holds since the function \(t\mapsto\left(\frac{m}{t}+1\right)^{t}\) is increasing in the interval \((0,\infty)\), and also
\[\lvert\{(a_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\colon\,\sum_{i=1}^{k} \lvert a_{i}\rvert\leq m\}\rvert\leq 2^{m}\binom{m+k}{m}\leq\left(6\left( \frac{k}{m}+1\right)\right)^{m}\leq\left(6\left(\frac{\alpha}{m}+1\right) \right)^{m}. \tag{6.26}\]
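As a quick sanity check of these bounds (a simple illustration, not used in the sequel), take \(k=m=\alpha=2\): the left-hand side counts the \(13\) integer pairs \((a_{1},a_{2})\) with \(|a_{1}|+|a_{2}|\leq 2\), the intermediate quantity in (6.25) equals \(2^{2}\binom{4}{2}=24\), and the right-hand sides of both (6.25) and (6.26) equal \((6\cdot 2)^{2}=144\).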
Let \(\mathbb{B}_{d}\) denote the family of finite \(A\subset\mathbb{Z}^{d}\) such that both \(A\) and \(\mathbb{Z}^{d}\setminus A\) are connected. For every shift \(\tau\) and \(j_{0}(\tau)\neq j\in[J(\tau)]\), the set \(U_{j}(\tau)\) is obviously in \(\mathbb{B}_{d}\). By [1, Theorem 6] (improving on [13, Corollary 1.2]; see more details in Appendix A), it holds that for every \(v\in\mathbb{Z}^{d}\) and integer \(b\geq 2d\),
\[\lvert\{A\in\mathbb{B}_{d}\colon\,v\in A,\,\lvert\partial A\rvert=b\}\rvert \leq(8d)^{2b/d}. \tag{6.27}\]
Hence, for every \(v_{1},\dots,v_{J}\in\mathbb{Z}^{d}\), \(1\leq j_{0}\leq J\) and integers \((b_{j})_{j_{0}\neq j\in[J]}\in([2d,\infty))^{J-1}\) such that \(\sum_{j_{0}\neq j\in[J]}b_{j}\leq\lambda\),
\[\lvert\{(A_{j})_{j_{0}\neq j\in[J]}\colon\,\forall j_{0}\neq j\in[J]\text{ it holds that }A_{j}\in\mathbb{B}_{d},\,v_{j}\in A_{j},\,\lvert\partial A_{j}\rvert=b_{j}\}\rvert\leq\prod_{j_{0}\neq j\in[J]}(8d)^{2b_{j}/d}\\ =(8d)^{2\sum_{j_{0}\neq j\in[J]}b_{j}/d}\leq(8d)^{2\lambda/d}.\]
For every shift \(\tau\) it holds that \(\sum_{j_{0}(\tau)\neq j\in[J(\tau)]}\lvert\partial U_{j}(\tau)\rvert\leq\text {TV}(\tau)\) and by (6.10), \(\lvert\partial U_{j}(\tau)\rvert\geq 2d\) for every \(j_{0}(\tau)\neq j\in[J(\tau)]\). Therefore, since by (6.25) (noting that \(d(J-1)<\lambda/2\)) and (6.26) (noting that \(d(J-1)\leq d\rho\)),
\[\lvert\{(v_{j})_{j=1}^{J}\in(\mathbb{Z}^{d})^{J}\colon v_{1}=0,\, \sum_{j=1}^{J-1}\lVert v_{j}-v_{j+1}\rVert_{1}\leq\rho\}\rvert=\lvert\{(y_{j})_ {j=1}^{J-1}\in(\mathbb{Z}^{d})^{J-1}\colon\,\sum_{j=1}^{J-1}\lVert y_{j}\rVert _{1}\leq\rho\}\rvert\\ =\lvert\{(a_{i})_{i=1}^{d(J-1)}\in\mathbb{Z}^{d(J-1)}\colon\,\sum_ {i=1}^{d(J-1)}\lvert a_{i}\rvert\leq\rho\}\rvert\leq\min\left\{\left(6\left( \frac{2\rho}{\lambda}+1\right)\right)^{\lambda/2},(6(d+1))^{\rho}\right\}\\ \leq\min\left\{\left(6\left(\frac{2\rho}{\lambda}+1\right)\right) ^{\lambda/2},(8d)^{\rho}\right\}\]
and for every \(1\leq j_{0}\leq J\), by (6.24),
\[|\{(b_{j})_{j_{0}\neq j\in[J]}\in(\mathbb{Z}\cap[2d,\infty))^{J-1} \colon\sum_{j_{0}\neq j\in[J]}b_{j}\leq\lambda\}|=\binom{\lambda-2d(J-1)+J-1}{J-1} \\ <\left(\frac{3(\lambda-2d(J-1)+J-1)}{J-1}\right)^{J-1}<\left(\frac {3\lambda}{J-1}\right)^{J-1}\leq(6d)^{\frac{\lambda}{2d}}\]
(where the last inequality holds since \(J-1<\frac{\lambda}{2d}\) and the function \(t\mapsto(3\lambda/t)^{t}\) is increasing in the interval \((0,\lambda]\)), we conclude that
\[|\{(j_{0}(\tau),(U_{j}(\tau))_{j_{0}(\tau)\neq j\in[J(\tau)]})\colon \tau\in\tilde{\mathcal{S}}_{J}\}|\\ \leq\frac{\lambda}{2d}\min\left\{\left(6\left(\frac{2\rho}{ \lambda}+1\right)\right)^{\lambda/2},(8d)^{\rho}\right\}(6d)^{\frac{\lambda}{ 2d}}(8d)^{\frac{2\lambda}{d}} \tag{6.28}\]
Additionally, for every \(\tau\), clearly \(\sum_{e\in E(\mathcal{T}(\tau))}|s_{e}(\tau)|\leq\operatorname{TV}(\tau)\). Hence, by using (6.25), since \(J-1<J\leq\frac{\lambda}{2d}\),
\[|\{(s_{e}(\tau))_{e\in E(\mathcal{T}(\tau))}\colon\tau\in\tilde{\mathcal{S}}_{ J}\}|\leq(6(2d+1))^{\frac{\lambda}{2d}}\leq(14d)^{\frac{\lambda}{2d}}\,.\]
Combining this with (6.28) we get that for every \(J\leq\min\{\frac{\lambda}{2d},\rho+1\}\),
\[|\chi(\tilde{\mathcal{S}}_{J})|\leq\min\left\{C_{1}^{\lambda}\left(\frac{2 \rho}{\lambda}+1\right)^{\lambda/2},\left(C_{2}d^{3}\right)^{\lambda/d}(8d)^{ \rho}\right\}\]
for some universal positive constants \(C_{1},C_{2}\), and the result follows by (6.23).
### Concentration of ground energy differences
In this section we prove Lemma 6.1 in the following equivalent formulation.
_There exist universal \(C,c>0\) such that the following holds. Suppose the disorder distributions \(\nu^{\parallel},\nu^{\perp}\) satisfy (1.11). Then for any shift \(\tau\) and any non-negative \(b^{\parallel},b^{\perp}\) for which \(\rho^{\operatorname{Dob}}\in\Omega^{\Lambda,\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}\),_
\[\mathbb{P}\left(|G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau)|\geq t \right)\leq C\exp\left(-c\frac{t^{2}}{\operatorname{wid}(\nu^{\parallel})^{2 }b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\right). \tag{6.29}\]
(This is a special case of Lemma 6.1, for \(\tau^{\prime}\equiv 0\), and it implies the lemma in full generality, since \(G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau,\tau^{\prime})=G^{\eta^{ \tau^{\prime}},\Lambda,(b^{\parallel},b^{\perp})}(\tau-\tau^{\prime})\) for any two shifts \(\tau,\tau^{\prime}\).)
We aim to use Corollary 2.3 to show concentration of the ground energy difference under changes in the disorder \(\eta\) induced by shifts. To do so, we first need to approximate the ground energy difference by a function of the disorder on finitely many edges.
To be exact, for \(\Lambda\subset\mathbb{Z}^{d}\) finite we write \(\Delta_{M}:=\Lambda\times\{-M,\ldots,M\}\), and define
\[\Omega^{\Delta_{M},A,(b^{\parallel},b^{\perp})}:=\Omega^{\Delta_{M},\rho^{ \operatorname{Dob}}}\cap\Omega^{\Lambda,A,(b^{\parallel},b^{\perp})}\]
to be the space of configurations on \(\Delta_{M}\) satisfying the Dobrushin boundary conditions and layering bounds \((b^{\parallel},b^{\perp})\) in \(A\subseteq\Lambda\). Let \(\eta:E(\mathbb{Z}^{d+1})\to[0,\infty)\). Denote by
\[\operatorname{GE}^{\Delta_{M},A,(b^{\parallel},b^{\perp})}(\eta):=\min\left\{ \mathcal{H}^{\eta,\Lambda}(\sigma):\sigma\in\Omega^{\Delta_{M},A,(b^{\parallel},b^{\perp})}\right\}\]
and by \(G^{\eta,\Delta_{M},(b^{\parallel},b^{\perp})}(\tau):=\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)-\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta^{\tau})\) the corresponding ground energy difference. Then, under the above definitions, the following holds:
**Lemma 6.25**.: _Let \(\Lambda\subset\mathbb{Z}^{d}\) be finite, \(\tau\) a shift and non-negative \(b^{\parallel},b^{\perp}\) for which \(\rho^{\operatorname{Dob}}\in\Omega^{\Lambda,\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}\). Then_
\[\lim_{M\to\infty}\mathbb{P}\left(G^{\eta,\Delta_{M},(b^{\parallel},b^{\perp} )}(\tau)=G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau)\right)=1.\]
The proof of Lemma 6.25 is routine, and appears in appendix B for completeness.
By the lemma above, proving (6.29) reduces to proving the following.
**Proposition 6.26**.: _There exist universal \(C,c>0\) such that the following holds. Suppose the disorder distributions \(\nu^{\parallel},\nu^{\perp}\) satisfy (1.11). Then for any shift \(\tau\) and non-negative \(b^{\parallel},b^{\perp}\) for which \(\rho^{\operatorname{Dob}}\in\Omega^{\Lambda,\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}\) and every positive integer \(M\),_
\[\mathbb{P}\left(\left|G^{\eta,\Delta_{M},(b^{\parallel},b^{\perp})}(\tau) \right|\geq t\right)\leq C\exp\left(-c\frac{t^{2}}{\operatorname{wid}(\nu^{ \parallel})^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}} \right). \tag{6.30}\]
The rest of this section will be devoted to proving Proposition 6.26. Note that condition (1.11) implies that \(\operatorname{wid}(\nu^{\parallel})\neq 0\). Also notice that if \(\operatorname{wid}(\nu^{\perp})=0\) then \(\nu^{\perp}\) is supported on one point, so the value of the coupling field on perpendicular plaquettes is fixed rather than random. This simplifies the argument and hence we will assume that \(\operatorname{wid}(\nu^{\perp})\neq 0\). Let
\[\mathfrak{H} :=\{\eta_{e}\colon e\in E(\mathbb{Z}^{d+1}),\,e\nsubseteq\operatorname {supp}(\tau)\times\mathbb{Z}\},\] \[A_{M} :=\{e\in E(\mathbb{Z}^{d+1})\colon e\subseteq\operatorname{supp}( \tau)\times\{-M,\dots,M\}\},\]
and for every \(e\in A_{M}\), let
\[X_{e}:=\begin{cases}\frac{\eta_{e}}{\operatorname{wid}(\nu^{\parallel})}&e \in E^{\parallel}(\mathbb{Z}^{d+1}),\\ \frac{\eta_{e}}{\operatorname{wid}(\nu^{\perp})}&e\in E^{\perp}(\mathbb{Z}^{d +1}).\end{cases}\]
Conditioned on \(\mathfrak{H}\), the ground energy \(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\) may be viewed as a function of \(\{X_{e}\}_{e\in A_{M}}\). Moreover, it is easy to verify that it is quasi-concave: it is a minimum of finitely many functions, each of which is affine in \(\{X_{e}\}_{e\in A_{M}}\), and it is therefore concave, hence quasi-concave. Therefore, the following lemma will allow us to apply Corollary 2.3 to it.
**Lemma 6.27**.: _The ground energy \(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{ \perp})}(\eta)\), conditioned on \(\mathfrak{H}\), is Lipschitz, as a function of \(\{X_{e}\}_{e\in A_{M}}\), with Lipschitz constant \(2\sqrt{\operatorname{wid}(\nu^{\parallel})^{2}b^{\parallel}+\operatorname{wid }(\nu^{\perp})^{2}b^{\perp}}\)._
Before proving Lemma 6.27, we show how it implies Proposition 6.26. By Lemma 6.27 and Corollary 2.3, \(\mathbb{E}|(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}(\eta)\mid\mathfrak{H})|<\infty\) and for each \(t>0\),
\[\mathbb{P}\left(\left|\left(\operatorname{GE}^{\Delta_{M}, \operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\mid\mathfrak{H} \right)-\mathbb{E}\left(\operatorname{GE}^{\Delta_{M},\operatorname{supp}( \tau),(b^{\parallel},b^{\perp})}(\eta)\mid\mathfrak{H}\right)\right|\geq t\right) \\ \leq C\exp\left(-c\frac{t^{2}}{\operatorname{wid}(\nu^{\parallel} )^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\right). \tag{6.31}\]
Similarly, \(\mathbb{E}|(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}(\eta^{\tau})\mid\mathfrak{H})|<\infty\) and for each \(t>0\),
\[\mathbb{P}\left(\left|\left(\operatorname{GE}^{\Delta_{M}, \operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta^{\tau})\mid\mathfrak{H }\right)-\mathbb{E}\left(\operatorname{GE}^{\Delta_{M},\operatorname{supp}( \tau),(b^{\parallel},b^{\perp})}(\eta^{\tau})\mid\mathfrak{H}\right)\right|\geq t\right) \\ \leq C\exp\left(-c\frac{t^{2}}{\operatorname{wid}(\nu^{ \parallel})^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}} \right). \tag{6.32}\]
Observe that the following holds, by linearity of expectation and the facts that the disorder is independent and \(\eta^{\tau}\) has the same distribution as \(\eta\).
**Observation 6.28**.: _It holds that_
\[\mathbb{E}\left(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}(\eta)\mid\mathfrak{H}\right)=\mathbb{E}\left( \operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{ \perp})}(\eta^{\tau})\mid\mathfrak{H}\right).\]
Observation 6.28 implies that for every \(t>0\),
\[\mathbb{P}\left(\left|\left(G^{\eta,\Delta_{M},(b^{\parallel},b^{\perp})}(\tau)\mid\mathfrak{H}\right)\right|\geq t\right)\] \[\qquad\leq\mathbb{P}\left(\left|\left(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\mid\mathfrak{H}\right)-\mathbb{E}\left(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\mid\mathfrak{H}\right)\right|\geq t/2\right)\] \[\qquad\qquad+\mathbb{P}\left(\left|\left(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta^{\tau})\mid\mathfrak{H}\right)-\mathbb{E}\left(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta^{\tau})\mid\mathfrak{H}\right)\right|\geq t/2\right)\]

and (6.30) follows from (6.31) and (6.32) after averaging over \(\mathfrak{H}\).
Proof of Lemma 6.27.: For \(y\in[0,\infty)^{A_{M}}\) and \(\tilde{y}\in[0,\infty)^{E(\mathbb{Z}^{d+1})\setminus A_{M}}\) let \(h(y,\tilde{y}):E(\mathbb{Z}^{d+1})\to[0,\infty)\) be defined by
\[\left(h(y,\tilde{y})\right)_{e}=\begin{cases}\operatorname{wid}(\nu^{\parallel})y_{e}&e\in A_{M}\cap E^{\parallel}(\mathbb{Z}^{d+1}),\\ \operatorname{wid}(\nu^{\perp})y_{e}&e\in A_{M}\cap E^{\perp}(\mathbb{Z}^{d+1}),\\ \tilde{y}_{e}&e\in E(\mathbb{Z}^{d+1})\setminus A_{M}.\end{cases}\]
We need to verify that for any \(\tilde{y}\in[0,\infty)^{E(\mathbb{Z}^{d+1})\setminus A_{M}}\) and \(y,y^{\prime}\in[0,\infty)^{A_{M}}\) it holds that
\[|\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(h)-\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(h^{\prime})|\leq 2\sqrt{\operatorname{wid}(\nu^{\parallel})^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\,\|y-y^{\prime}\|_{2},\]
where \(h:=h(y,\tilde{y})\) and \(h^{\prime}:=h(y^{\prime},\tilde{y})\). Let \(\sigma^{\prime}\) be (some) ground configuration in \(\Omega^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}\) with respect to the coupling field \(h^{\prime}\). Then,
\[\mathcal{H}^{h,\Lambda}(\sigma^{\prime})-\mathcal{H}^{h^{\prime },\Lambda}(\sigma^{\prime})\leq \sum_{\{x,y\}\in A_{M}}|h_{\{x,y\}}-h^{\prime}_{\{x,y\}}|\left(1- \sigma^{\prime}_{x}\sigma^{\prime}_{y}\right)\] \[= \operatorname{wid}(\nu^{\parallel})\sum_{\{x,y\}\in A_{M}\cap E^{ \parallel}(\mathbb{Z}^{d+1})}|y_{\{x,y\}}-y^{\prime}_{\{x,y\}}|\left(1-\sigma ^{\prime}_{x}\sigma^{\prime}_{y}\right)+\] \[\operatorname{wid}(\nu^{\perp})\sum_{\{x,y\}\in A_{M}\cap E^{ \perp}(\mathbb{Z}^{d+1})}|y_{\{x,y\}}-y^{\prime}_{\{x,y\}}|\left(1-\sigma^{ \prime}_{x}\sigma^{\prime}_{y}\right)\] \[\leq 2\sqrt{\operatorname{wid}(\nu^{\parallel})^{2}b^{\parallel}+ \operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\,\|y-y^{\prime}\|_{2},\]
where the last inequality is by the Cauchy-Schwarz inequality (and the layering bound on \(\sigma^{\prime}\) deriving from the fact \(\sigma^{\prime}\in\Omega^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b ^{\perp})}\)). Since \(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{ \perp})}(h)\leq\mathcal{H}^{h,\Lambda}(\sigma^{\prime})\) and \(\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{ \perp})}(h^{\prime})=\mathcal{H}^{h^{\prime},\Lambda}(\sigma^{\prime})\), it follows that
\[\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{ \perp})}(h)-\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{ \parallel},b^{\perp})}(h^{\prime})\leq 2\sqrt{\operatorname{wid}(\nu^{\parallel})^{2}b^{ \parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\,\|y-y^{\prime}\|_{2}.\]
A symmetric inequality of the form
\[\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(h^{\prime})-\operatorname{GE}^{\Delta_{M},\operatorname{supp}(\tau),(b^{\parallel},b^{\perp})}(h)\leq 2\sqrt{\operatorname{wid}(\nu^{\parallel})^{2}b^{\parallel}+\operatorname{wid}(\nu^{\perp})^{2}b^{\perp}}\,\|y-y^{\prime}\|_{2}\]
holds with identical reasoning, and so we are done.
### Layering bounds
In this section we prove Lemma 6.2. Fix positive \(\alpha^{\parallel},\alpha^{\perp}\) and a finite set \(\Lambda\subset\mathbb{Z}^{d}\). Recall the definitions of parallel and perpendicular layering in (6.1) and (6.2). Introduce the following convenient notation: For \(\eta\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\), \(\theta\in\{\parallel,\perp\}\) write \(\mathcal{L}_{E}^{\theta}(\eta):=\mathcal{L}_{E}^{\theta}(\sigma^{\eta,\Lambda,\mathrm{Dob}})\) where \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) is the unique ground configuration, existing by Lemma 1.5. We will use the following proposition, which will be proved in Section 7.
**Proposition 6.29**.: _Let \(E\subset\Lambda\) and let \(\eta\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\) be a coupling field. Then, there exists a shift function \(\tau\) satisfying the following:_
\[G^{\eta,\Lambda}(\tau) \geq 2\alpha^{\parallel}\left(\mathcal{L}_{E}^{\parallel}(\eta)-|E |\right)+2\alpha^{\perp}\max\{\mathcal{L}_{E}^{\perp}(\eta),\mathrm{TV}(\tau)\},\] \[G^{\eta,\Lambda}(\tau) \geq\frac{1}{16}\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left( R(\tau)-R(E)-2(|E|-1)\right).\]
Fix any coupling field \(\eta\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\). For brevity, we will once more write \(G\) for \(G^{\eta,\Lambda}\), \(\mathcal{AS}\) for \(\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})\), _admissible_ for \((\alpha^{\parallel},\alpha^{\perp})\)-admissible and MG for \(\mathrm{MG}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})\).
The following proposition establishes a useful general layering bound.
**Proposition 6.30**.: _Let \(E\subset\Lambda\), and \(\tau\) a shift. The following layering bound holds:_
\[\alpha^{\perp}\mathcal{L}_{E}^{\perp}(\eta^{\tau})+\alpha^{ \parallel}\left(\mathcal{L}_{E}^{\parallel}(\eta^{\tau})-|E|\right)\\ \leq\max\left\{\mathrm{MG},\,2\alpha^{\perp}\,\mathrm{TV}\left( \tau\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau)+R\left(E \right)+|E|\right)\right\}. \tag{6.33}\]
Proof.: By Proposition 6.29, applied to the set \(E\) and the shifted disorder \(\eta^{\tau}\), there is a shift \(\tau^{\prime}\) such that
\[G^{\eta^{\tau},\Lambda}(\tau^{\prime}) \geq 2\alpha^{\parallel}\left(\mathcal{L}_{E}^{\parallel}(\eta^{ \tau})-|E|\right)+2\alpha^{\perp}\max\{\mathcal{L}_{E}^{\perp}(\eta^{\tau}), \mathrm{TV}(\tau^{\prime})\},\] \[G^{\eta^{\tau},\Lambda}(\tau^{\prime}) \geq\frac{1}{16}\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left( R(\tau^{\prime})-R(E)-2(|E|-1)\right).\]
Note that
\[G^{\eta^{\tau},\Lambda}(\tau^{\prime}) =\mathrm{GE}^{\Lambda}(\eta^{\tau})-\mathrm{GE}^{\Lambda}\left( (\eta^{\tau})^{\tau^{\prime}}\right)=\mathrm{GE}^{\Lambda}(\eta^{\tau})- \mathrm{GE}^{\Lambda}(\eta^{\tau+\tau^{\prime}})\] \[=\left(\mathrm{GE}^{\Lambda}(\eta)-\mathrm{GE}^{\Lambda}(\eta^{ \tau+\tau^{\prime}})\right)-\left(\mathrm{GE}^{\Lambda}(\eta)-\mathrm{GE}^{ \Lambda}(\eta^{\tau})\right)=G(\tau+\tau^{\prime})-G(\tau)\] \[\leq 2\max\{G(\tau+\tau^{\prime}),\,|G(\tau)|\}.\]
Hence,
\[\max\{G(\tau+\tau^{\prime}),\,|G(\tau)|\} \geq\alpha^{\parallel}\left(\mathcal{L}_{E}^{\parallel}(\eta^{ \tau})-|E|\right)+\alpha^{\perp}\max\{\mathcal{L}_{E}^{\perp}(\eta^{\tau}), \mathrm{TV}(\tau^{\prime})\}, \tag{6.34}\] \[\max\{G(\tau+\tau^{\prime}),\,|G(\tau)|\} \geq\frac{1}{32}\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left( R(\tau^{\prime})-R(E)-2(|E|-1)\right). \tag{6.35}\]
By way of contradiction, assume that (6.33) does not hold. Then, by (6.34),
\[\max\{G(\tau+\tau^{\prime}),\,|G(\tau)|\}\\ >\max\left\{\mathrm{MG},\,2\alpha^{\perp}\,\mathrm{TV}\left(\tau \right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau)+R\left(E \right)+|E|\right)\right\}. \tag{6.36}\]
If \(|G(\tau)|\geq G(\tau+\tau^{\prime})\) then by (6.36), \(|G(\tau)|>2\alpha^{\perp}\,\mathrm{TV}\left(\tau\right)\), \(|G(\tau)|>2\min\{\alpha^{\parallel},\alpha^{\perp}\}dR\left(\tau\right)\) and \(|G(\tau)|>\mathrm{MG}\), hence \(\tau\) is admissible and of larger energetic gain than MG, a contradiction.
Assume that \(G(\tau+\tau^{\prime})>|G(\tau)|\). By Lemma 6.24, (6.35) and (6.34),
\[R(\tau+\tau^{\prime}) \leq 2R(\tau)+R(\tau^{\prime})+\frac{98}{d}\left(\operatorname{TV}( \tau)+\operatorname{TV}(\tau^{\prime})\right)\] \[\leq 2R(\tau)+R(E)+2|E|+\frac{32\,G(\tau+\tau^{\prime})}{\min\left\{ \alpha^{\parallel},\alpha^{\perp}\right\}d}+\frac{98}{d}\operatorname{TV}( \tau)+\frac{98\,G(\tau+\tau^{\prime})}{\alpha^{\perp}d}\] \[\leq 100\left(\frac{\operatorname{TV}(\tau)}{d}+R(\tau)+R(E)+|E| \right)+\frac{130\,G(\tau+\tau^{\prime})}{\min\left\{\alpha^{\parallel},\alpha ^{\perp}\right\}d}\]
and hence, by (6.36), \(R(\tau+\tau^{\prime})<\frac{180}{\min\left\{\alpha^{\parallel},\alpha^{\perp} \right\}d}G(\tau+\tau^{\prime})\), and by Observation 6.23, (6.36) and (6.34),
\[\operatorname{TV}(\tau+\tau^{\prime})\leq\operatorname{TV}(\tau)+ \operatorname{TV}(\tau^{\prime})<\frac{G(\tau+\tau^{\prime})}{2\alpha^{\perp} }+\frac{G(\tau+\tau^{\prime})}{\alpha^{\perp}}<\frac{2}{\alpha^{\perp}}G(\tau +\tau^{\prime}).\]
Therefore, \(\tau+\tau^{\prime}\) is admissible and of larger energetic gain than \(\operatorname{MG}\), by (6.36), a contradiction.
Now we use Proposition 6.30 to prove Lemma 6.2.
Proof of Lemma 6.2.: First note that for every set \(E\subseteq\Lambda\) and every coupling field \(\tilde{\eta}\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\), it holds that \(\operatorname{GE}^{\Lambda}(\tilde{\eta})=\operatorname{GE}^{\Lambda,E,(b^{ \parallel},b^{\perp})}(\tilde{\eta})\) if and only if \(\mathcal{L}_{E}^{\perp}(\tilde{\eta})\leq b^{\perp}\) and \(\mathcal{L}_{E}^{\parallel}(\tilde{\eta})\leq b^{\parallel}\). Also note that \(\eta^{\tau}\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\) for every shift \(\tau\). Throughout the proof we will use \(C\) to denote a positive absolute constant; the values of this constant will be allowed to change from line to line, even within the same calculation, with its value increasing.
To prove the third part of the lemma, we use Proposition 6.30 twice, for the same subset \(E:=\operatorname{supp}(\tau)\). Recall that, by the admissibility of \(\tau\),
\[\operatorname{TV}(\tau) \leq\frac{2}{\alpha^{\perp}}|G(\tau)|\leq\frac{2}{\alpha^{\perp }}\operatorname{MG}\leq\frac{4s}{\alpha^{\perp}}, \tag{6.37}\] \[R(\tau) \leq\frac{200}{\min\{\alpha^{\parallel},\alpha^{\perp}\}d}|G( \tau)|\leq\frac{200}{\min\{\alpha^{\parallel},\alpha^{\perp}\}d} \operatorname{MG}\leq\frac{400s}{\min\{\alpha^{\parallel},\alpha^{\perp}\}d}. \tag{6.38}\]
Hence, \(R(E)\leq R(\tau)\leq\frac{Cs}{\min\{\alpha^{\parallel},\alpha^{\perp}\}d}\) and by Lemma 6.9 and the assumption that \(s<\alpha^{\perp}4^{d}\),
\[|E|\leq\sum_{u\in\mathbb{Z}^{d}}|\tau(u)|\leq\left(\frac{\operatorname{TV}( \tau)}{2d}\right)^{\frac{d}{d-1}}\leq\left(\frac{2s}{\alpha^{\perp}d}\right)^ {\frac{d}{d-1}}\leq\frac{Cs}{\alpha^{\perp}d}. \tag{6.39}\]
The first use of Proposition 6.30 will be for the shift \(\tau\), and it gives
\[\alpha^{\perp}\mathcal{L}_{E}^{\perp}(\eta^{\tau})+\alpha^{ \parallel}\left(\mathcal{L}_{E}^{\parallel}(\eta^{\tau})-|E|\right)\\ \leq\max\left\{\operatorname{MG},2\alpha^{\perp}\operatorname{ TV}\left(\tau\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau)+R(E)+|E| \right)\right\}\leq Cs\]
and the second use will be for the shift \(\tau_{0}\equiv 0\), and it gives
\[\alpha^{\perp}\mathcal{L}_{E}^{\perp}(\eta)+\alpha^{\parallel}\left(\mathcal{ L}_{E}^{\parallel}(\eta)-|E|\right)\leq\max\left\{\operatorname{MG},2\min\{ \alpha^{\parallel},\alpha^{\perp}\}d\left(R(E)+|E|\right)\right\}\leq Cs.\]
Hence, for \(\#\in\{\eta^{\tau},\eta\}\), by using (6.39),
\[\mathcal{L}_{E}^{\perp}(\#)\leq\frac{Cs}{\alpha^{\perp}},\quad\mathcal{L}_{E} ^{\parallel}(\#)\leq C\left(\frac{1}{\alpha^{\parallel}}+\frac{1}{\alpha^{ \perp}d}\right)s.\]
We proceed to prove the second part of the lemma. For every positive integer \(k\), let \(E_{k}:=\operatorname{supp}\left(\tau_{2^{k}}-\tau_{2^{k+1}}\right)\). By Proposition 6.6,
\[\operatorname{TV}(\tau_{2^{k}})\leq 10d\,\operatorname{TV}(\tau)\leq\frac{Cds}{ \alpha^{\perp}}, \tag{6.40}\]
and by (6.20),
\[R(\tau_{2^{k}})\leq R(\tau)+\frac{C}{d}\operatorname{TV}(\tau)\leq\frac{Cs}{ \min\{\alpha^{\parallel},\alpha^{\perp}\}d}. \tag{6.41}\]
Hence, by Lemma 6.24,
\[R\left(E_{k}\right)\leq R\left(\tau_{2^{k}}-\tau_{2^{k+1}}\right)\leq R\left( \tau_{2^{k}}\right)+2R\left(\tau_{2^{k+1}}\right)+\frac{98}{d}\left( \operatorname{TV}\left(\tau_{2^{k}}\right)+\operatorname{TV}\left(\tau_{2^{k +1}}\right)\right)\leq\frac{Cs}{\min\{\alpha^{\parallel},\alpha^{\perp}\}},\]
and by Proposition 6.7,
\[|E_{k}|\leq\|\tau_{2^{k}}-\tau_{2^{k+1}}\|_{1}\leq(4d+9)2^{k} \,\operatorname{TV}(\tau)\leq\frac{C2^{k}ds}{\alpha^{\perp}}. \tag{6.42}\]
Therefore, by Proposition 6.30,
\[\alpha^{\perp}\mathcal{L}_{E_{k}}^{\perp}(\eta^{\tau_{2^{k}}})+ \alpha^{\parallel}\left(\mathcal{L}_{E_{k}}^{\parallel}(\eta^{\tau_{2^{k}}})-| E_{k}|\right)\\ \leq\max\left\{\operatorname{MG},\,2\alpha^{\perp}\operatorname{ TV}\left(\tau_{2^{k}}\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R( \tau_{2^{k}})+R\left(E_{k}\right)+|E_{k}|\right)\right\}\leq C2^{k}d^{2}s\]
and
\[\alpha^{\perp}\mathcal{L}_{E_{k-1}}^{\perp}(\eta^{\tau_{2^{k}}}) +\alpha^{\parallel}\left(\mathcal{L}_{E_{k-1}}^{\parallel}(\eta^{\tau_{2^{k}}} )-|E_{k-1}|\right)\\ \leq\max\left\{\operatorname{MG},\,2\alpha^{\perp}\operatorname{ TV}\left(\tau_{2^{k}}\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R( \tau_{2^{k}})+R\left(E_{k-1}\right)+|E_{k-1}|\right)\right\}\leq C2^{k}d^{2}s.\]
Hence, for \(\#\in\{k-1,k\}\), by using (6.42),
\[\mathcal{L}_{E_{\#}}^{\perp}(\eta^{\tau_{2^{k}}})\leq\frac{C2^{k}d^{2}s}{\alpha^{\perp}},\quad\mathcal{L}_{E_{\#}}^{\parallel}(\eta^{\tau_{2^{k}}})\leq\frac{C2^{k}d^{2}s}{\alpha^{\parallel}}+|E_{\#}|\leq C\left(\frac{1}{\alpha^{\parallel}}+\frac{1}{\alpha^{\perp}d}\right)d^{2}2^{k}s.\]
To prove the first part of the Lemma, fix \(\emptyset\neq I\in\operatorname{comp}(\tau)\), and let \(E_{(0,I)}:=\operatorname{supp}\left(\tau-\tau_{I}\right)\). By the compatibility of \(I\) and (6.37),
\[\operatorname{TV}(\tau_{I})\leq 20(2|I|+1)\operatorname{TV}(\tau)\leq\frac{C|I |s}{\alpha^{\perp}}, \tag{6.43}\]
and by (6.21), (6.37) and (6.38),
\[R(\tau_{I})\leq R(\tau)+\frac{C}{d}\operatorname{TV}(\tau)\leq\frac{Cs}{\min \{\alpha^{\parallel},\alpha^{\perp}\}d}, \tag{6.44}\]
and hence, by Lemma 6.24,
\[R\left(E_{(0,I)}\right)\leq R\left(\tau-\tau_{I}\right)\leq R\left(\tau\right) +2R\left(\tau_{I}\right)+\frac{100}{d}\left(\operatorname{TV}\left(\tau\right) +\operatorname{TV}\left(\tau_{I}\right)\right)\leq\frac{C|I|s}{\min\{\alpha^{ \parallel},\alpha^{\perp}\}d}, \tag{6.45}\]
and by the compatibility of \(I\),
\[|E_{(0,I)}|\leq\|\tau-\tau_{I}\|_{1}\leq\frac{4|I|}{d}\operatorname{TV}(\tau) \leq\frac{C|I|s}{\alpha^{\perp}d}. \tag{6.46}\]
Therefore, by Proposition 6.30,
\[\alpha^{\perp}\mathcal{L}_{E_{(0,I)}}^{\perp}(\eta^{\tau})+\alpha^{ \parallel}\left(\mathcal{L}_{E_{(0,I)}}^{\parallel}(\eta^{\tau})-|E_{(0,I)}|\right) \\ \leq\max\left\{\text{MG},\,2\alpha^{\perp}\operatorname{TV}\left( \tau\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau)+R\left(E_ {(0,I)}\right)+|E_{(0,I)}|\right)\right\}\leq C|I|s\]
and
\[\alpha^{\perp}\mathcal{L}_{E_{(0,I)}}^{\perp}(\eta^{\tau_{I}})+ \alpha^{\parallel}\left(\mathcal{L}_{E_{(0,I)}}^{\parallel}(\eta^{\tau_{I}})- |E_{(0,I)}|\right)\\ \leq\max\left\{\text{MG},\,2\alpha^{\perp}\operatorname{TV}\left( \tau_{I}\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau_{I})+R \left(E_{(0,I)}\right)+|E_{(0,I)}|\right)\right\}\leq C|I|s.\]
Hence, for \(\#\in\{\eta^{\tau},\eta^{\tau_{I}}\}\), by using (6.46),
\[\mathcal{L}_{E_{(0,I)}}^{\perp}(\#)\leq\frac{C|I|s}{\alpha^{\perp}},\quad \mathcal{L}_{E_{(0,I)}}^{\parallel}(\#)\leq\frac{C|I|s}{\alpha^{\parallel}}+|E _{(0,I)}|\leq C\left(\frac{1}{\alpha^{\parallel}}+\frac{1}{\alpha^{\perp}d} \right)|I|s.\]
Finally, let \(E_{(I,1)}:=\operatorname{supp}\left(\tau_{I}-\tau_{2}\right)\) and note that by Lemma 6.24, (6.44), (6.41), (6.43) and (6.40),
\[R\left(E_{(I,1)}\right)\leq R\left(\tau_{I}-\tau_{2}\right)\leq R\left(\tau_{ I}\right)+2R\left(\tau_{2}\right)+\frac{98}{d}\left(\operatorname{TV}\left( \tau_{I}\right)+\operatorname{TV}\left(\tau_{2}\right)\right)\leq\frac{Cs}{ \min\{\alpha^{\parallel},\alpha^{\perp}\}},\]
and by the compatibility of \(I\), Proposition 6.7 and (6.37),
\[|E_{(I,1)}|\leq\|\tau_{I}-\tau_{2}\|_{1}\leq\|\tau_{I}-\tau\|_{1}+\|\tau_{2}- \tau\|_{1}\leq\frac{4|I|}{d}\operatorname{TV}(\tau)+4\operatorname{TV}(\tau) \leq\frac{Cs}{\alpha^{\perp}}. \tag{6.47}\]
Therefore, by Proposition 6.30,
\[\alpha^{\perp}\mathcal{L}_{E_{(I,1)}}^{\perp}(\eta^{\tau_{I}})+ \alpha^{\parallel}\left(\mathcal{L}_{E_{(I,1)}}^{\parallel}(\eta^{\tau_{I}})-| E_{(I,1)}|\right)\\ \leq\max\left\{\text{MG},\,2\alpha^{\perp}\operatorname{TV} \left(\tau_{I}\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau_ {I})+R\left(E_{(I,1)}\right)+|E_{(I,1)}|\right)\right\}\leq Cds\]
and
\[\alpha^{\perp}\mathcal{L}_{E_{(I,1)}}^{\perp}(\eta^{\tau_{2}})+ \alpha^{\parallel}\left(\mathcal{L}_{E_{(I,1)}}^{\parallel}(\eta^{\tau_{2}})- |E_{(I,1)}|\right)\\ \leq\max\left\{\text{MG},\,2\alpha^{\perp}\operatorname{TV} \left(\tau_{2}\right)+2\min\{\alpha^{\parallel},\alpha^{\perp}\}d\left(R(\tau_ {2})+R\left(E_{(I,1)}\right)+|E_{(I,1)}|\right)\right\}\leq Cds.\]
Hence, for \(\#\in\{\eta^{\tau_{I}},\eta^{\tau_{2}}\}\), by using (6.47),
\[\mathcal{L}_{E_{(I,1)}}^{\perp}(\#)\leq\frac{Cds}{\alpha^{\perp}},\quad \mathcal{L}_{E_{(I,1)}}^{\parallel}(\#)\leq\frac{Cds}{\alpha^{\parallel}}+|E_{ (I,1)}|\leq C\left(\frac{1}{\alpha^{\parallel}}+\frac{1}{\alpha^{\perp}d} \right)ds.\qed\]
## 7. Obtaining admissible shifts from interfaces
The goal of this section will be to prove Proposition 6.29 and Lemma 4.4.
Fix \(\eta\in\mathcal{D}(\alpha^{\parallel},\alpha^{\perp})\) for some \(\alpha^{\parallel},\alpha^{\perp}>0\) and let \(\Lambda\subset\mathbb{Z}^{d}\) be finite. Recall the definition of the configuration space for semi-infinite volume under Dobrushin boundary conditions \(\Omega^{\Lambda,\text{Dob}}\) in (4.4), of the Hamiltonian \(\mathcal{H}^{\eta,\Lambda}\) in (4.5), and of the ground energy \(\operatorname{GE}^{\Lambda,\eta}\) with respect to it in (4.6). Recall as well the definition of parallel and perpendicular layering in (6.1) and (6.2).
### Defining \(\tau_{0}\)
Let \(E\subseteq\Lambda\) and let \(\sigma_{0}\colon\mathbb{Z}^{d+1}\to\{-1,1\}\) be a configuration in \(\Omega^{\Lambda,\mathrm{Dob}}\).
Define a function \(I_{\sigma_{0}}\colon\mathbb{Z}^{d}\to\mathbb{Z}\cup\{\,\text{``layered''}\}\) as follows. For each \(v\in\mathbb{Z}^{d}\), if \(\sigma_{0}\) has a unique sign change above \(v\), then set \(I_{\sigma_{0}}(v)\) to be the location of this sign change (precisely, if \(\sigma_{0}(v,k)=-1\) and \(\sigma_{0}(v,k+1)=1\) then set \(I_{\sigma_{0}}(v)=k\)). If \(\sigma_{0}\) has more than one sign change above \(v\) then set \(I_{\sigma_{0}}(v)=\text{``layered''}\).
Define a graph \(G_{\sigma_{0}}\) to be the induced subgraph of \(\mathbb{Z}^{d}\) on the vertex set \(V_{\sigma_{0}}\), where \(V_{\sigma_{0}}\subset\mathbb{Z}^{d}\) is defined to be the set of vertices \(v\) satisfying that there exists a neighbor \(u\sim v\) (in the usual connectivity of \(\mathbb{Z}^{d}\)) such that either \(I_{\sigma_{0}}(u)\neq I_{\sigma_{0}}(v)\) or \(I_{\sigma_{0}}(u)=I_{\sigma_{0}}(v)=\text{``layered''}\).
Recall from (6.19) the definition of \(\partial_{\text{vis}(w)}(A)\), the outer vertex boundary of a set \(A\subseteq\mathbb{Z}^{d}\) visible from a point \(w\in\mathbb{Z}^{d}\).
**Observation 7.1**.: _For every connected component \(A\) of \(G_{\sigma_{0}}\) and every \(v\in(\mathbb{Z}^{d}\setminus A)\cup\{\infty\}\), there is an integer, which we denote \(\tilde{I}_{\sigma_{0}}(v;A)\), such that \(I_{\sigma_{0}}(u)=\tilde{I}_{\sigma_{0}}(v;A)\) for every \(u\in\partial_{\text{vis}(v)}(A)\)._
Proof.: For any \(u\in\mathbb{Z}^{d}\setminus V_{\sigma_{0}}\), it holds that \(I_{\sigma_{0}}(w)=I_{\sigma_{0}}(u)\), for every \(w\in\mathcal{B}_{1}(u)\). Hence, for every \(u_{1},u_{2}\in\mathbb{Z}^{d}\setminus V_{\sigma_{0}}\) such that \(\|u_{1}-u_{2}\|_{1}\leq 2\), i.e., such that \(\mathcal{B}_{1}(u_{1})\cap\mathcal{B}_{1}(u_{2})\neq\emptyset\), it holds that \(I_{\sigma_{0}}(u_{1})=I_{\sigma_{0}}(u_{2})\).
The claim follows since \(\partial_{\text{vis}(v)}(A)\subseteq\partial^{\text{out}}A\subseteq\mathbb{Z} ^{d}\setminus V_{\sigma_{0}}\) and the set \(\partial_{\text{vis}(v)}(A)\) is \(\ell_{1}^{+}\)-connected, by Lemma 6.19.
For every \(A\subseteq\mathbb{Z}^{d}\), let
\[\text{in}(A):=\left\{u\in\mathbb{Z}^{d}\setminus A\colon\text{every path from $u$ to $\infty$ intersects $A$}\right\}.\]
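For a simple illustration: if \(A=\{u\in\mathbb{Z}^{2}\colon\|u\|_{\infty}=1\}\) then \(\operatorname{in}(A)=\{0\}\). Indeed, each step of a path changes \(\|\cdot\|_{\infty}\) by at most one, so every path from the origin to \(\infty\) must pass through \(A\), while every \(u\notin A\) with \(\|u\|_{\infty}\geq 2\) may escape to \(\infty\) by repeatedly increasing the absolute value of a coordinate of maximal absolute value, never entering \(A\).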
**Lemma 7.2**.: _Let \(A_{1},A_{2}\subseteq\mathbb{Z}^{d}\) be nonempty connected sets such that \(\operatorname{dist}(A_{1},A_{2})>1\)._
1. _If_ \(\operatorname{dist}(A_{1}\cup\text{in}(A_{1}),A_{2}\cup\text{in}(A_{2}))\leq 1\)_, then_ \((A_{1}\cup\text{in}(A_{1}))\cap(A_{2}\cup\text{in}(A_{2}))\neq\emptyset\)_._
2. _If_ \((A_{1}\cup\text{in}(A_{1}))\cap(A_{2}\cup\text{in}(A_{2}))\neq\emptyset\) _then_ \(A_{1}\subseteq\text{in}(A_{2})\) _or_ \(A_{2}\subseteq\text{in}(A_{1})\)_._
3. _If_ \(A_{1}\subseteq\text{in}(A_{2})\) _then_ \(\text{in}(A_{1})\subsetneq\text{in}(A_{2})\)_. (Similarly, if_ \(A_{2}\subseteq\text{in}(A_{1})\) _then_ \(\text{in}(A_{2})\subsetneq\text{in}(A_{1})\)_.)_
Proof.: We first show that the first statement holds. There are \(u_{1}\in A_{1}\cup\text{in}(A_{1})\) and \(u_{2}\in A_{2}\cup\text{in}(A_{2})\) such that \(\|u_{1}-u_{2}\|_{1}\leq 1\). If \(u_{1}=u_{2}\) there is nothing to prove, hence we assume that \(\|u_{1}-u_{2}\|_{1}=1\), i.e., \(u_{1}\sim u_{2}\). Since \(\operatorname{dist}(A_{1},A_{2})>1\), necessarily \(u_{1}\notin A_{1}\) or \(u_{2}\notin A_{2}\). With no loss of generality assume that \(u_{1}\notin A_{1}\). Hence, \(u_{1}\in\text{in}(A_{1})\). If \(P\) is a path from \(u_{2}\) to \(\infty\), then starting at \(u_{1}\) and continuing along \(P\) is a path from \(u_{1}\in\text{in}(A_{1})\) to \(\infty\), therefore it must intersect \(A_{1}\); hence, since \(u_{1}\notin A_{1}\), the path \(P\) necessarily intersects \(A_{1}\). Therefore, every path from \(u_{2}\) to \(\infty\) intersects \(A_{1}\), i.e., \(u_{2}\in A_{1}\cup\text{in}(A_{1})\).
To prove the second statement, assume by contradiction that there are \(a_{1}\in A_{1}\setminus\text{in}(A_{2})\) and \(a_{2}\in A_{2}\setminus\text{in}(A_{1})\). Consider an arbitrary path \(P_{0}\) from an arbitrary vertex \(u_{0}\in(A_{1}\cup\text{in}(A_{1}))\cap(A_{2}\cup\text{in}(A_{2}))\) to \(\infty\). The path \(P_{0}\) necessarily intersects both \(A_{1}\) and \(A_{2}\). Let \(a\) be the first intersection point of \(P_{0}\) with \(A_{1}\cup A_{2}\). With no loss of generality we may assume that \(a\in A_{1}\). Since \(A_{1}\) is connected, there is a path \(P_{1}\) in \(A_{1}\) from \(a\) to \(a_{1}\). Since \(a_{1}\notin\text{in}(A_{2})\), there is a path \(P_{2}\) from \(a_{1}\) to \(\infty\) that does not intersect \(A_{2}\). Then, the path that is obtained by taking \(P_{0}\) up to the point \(a\), then \(P_{1}\) and then \(P_{2}\) is a path from \(u_{0}\in A_{2}\cup\text{in}(A_{2})\) to \(\infty\) which does not intersect \(A_{2}\), and we get a contradiction.
Finally, we show that the last statement holds. If \(P\) is a path from a point of \(\text{in}(A_{1})\) to \(\infty\), then it must intersect \(A_{1}\); let \(a_{1}\) be such an intersecting point; the part of the path \(P\) that starts at \(a_{1}\) is a path from \(a_{1}\in A_{1}\subseteq\text{in}(A_{2})\) to \(\infty\), hence it intersects \(A_{2}\). Therefore,
every path from any point of \(\operatorname{in}(A_{1})\) to \(\infty\) intersects \(A_{2}\), i.e., \(\operatorname{in}(A_{1})\subseteq A_{2}\cup\operatorname{in}(A_{2})\). By way of contradiction assume that \(\operatorname{in}(A_{1})\nsubseteq\operatorname{in}(A_{2})\); then, there is \(a_{2}\in A_{2}\cap\operatorname{in}(A_{1})\); let \(P\) be a path from \(a_{2}\) to \(\infty\) and let \(a\) be the last intersection point of \(P\) with \(A_{1}\cup A_{2}\); if \(a\in A_{1}\), then the part of \(P\) that starts at \(a\) is a path from \(a\) to \(\infty\) that does not intersect \(A_{2}\), contradicting the assumption that \(A_{1}\subseteq\operatorname{in}(A_{2})\); if \(a\in A_{2}\), then since \(A_{2}\) is connected there is a path \(P_{2}\) in \(A_{2}\) from \(a_{2}\) to \(a\) and then, the path that is obtained by taking \(P_{2}\) and then the part of \(P\) that starts at \(a\) is a path from \(a_{2}\in\operatorname{in}(A_{2})\) to \(\infty\) which does not intersect \(A_{2}\), and we get a contradiction. Hence, \(\operatorname{in}(A_{1})\subseteq\operatorname{in}(A_{2})\), and the inclusion is obviously proper, since \(\emptyset\neq A_{1}\subseteq\operatorname{in}(A_{2})\setminus\operatorname{in} (A_{1})\).
Let \(\mathcal{C}\) be the collection of all connected components of \(G_{\sigma_{0}}\), let
\[\mathcal{A}=\{A\in\mathcal{C}\colon E\cap(A\cup\operatorname{in}(A))\neq \emptyset\}, \tilde{A}:=\bigcup_{A\in\mathcal{A}}A,\]
and let
\[B_{\infty}:=\Big{\{}u\in\mathbb{Z}^{d}\colon\text{there is a path from $u$ to $\infty$ that does not intersect $\tilde{A}$}\Big{\}}\]
be the unique infinite connected component of \(\mathbb{Z}^{d}\setminus\tilde{A}\). For \(v\in\mathbb{Z}^{d}\setminus\Big{(}B_{\infty}\cup\tilde{A}\Big{)}\), the second and third parts of Lemma 7.2 imply that there exists \(A_{v}\in\mathcal{A}\) such that \(v\in\operatorname{in}(A_{v})\) and \(\operatorname{in}(A_{v})\subsetneq\operatorname{in}(A)\) for any other \(A\in\mathcal{A}\) for which \(v\in\operatorname{in}(A)\).
**Lemma 7.3**.: _For every \(v\in\partial^{\operatorname{out}}\tilde{A}\),_
\[I_{\sigma_{0}}(v)=\begin{cases}0&v\in B_{\infty},\\ \tilde{I}_{\sigma_{0}}(v;A_{v})&v\notin B_{\infty}.\end{cases}\]
Proof.: Let \(\mathcal{A}_{v}:=\{A\in\mathcal{C}\colon v\in\operatorname{in}(A)\}\) and let \(S_{v}:=\bigcup_{A\in\mathcal{C}\setminus\mathcal{A}_{v}}(A\cup\operatorname{ in}(A))\). Note that \(v\notin S_{v}\).
If \(v\in B_{\infty}\) then there is a path from \(v\) to \(\infty\) that does not intersect \(\tilde{A}\); since \(I_{\sigma_{0}}=0\) at all but finitely many points of \(\mathbb{Z}^{d}\) and the set \(S_{v}\) is finite, this path eventually reaches a point in \(\mathbb{Z}^{d}\setminus S_{v}\) at which \(I_{\sigma_{0}}=0\).
If \(v\notin B_{\infty}\) then any path from \(v\) to \(\infty\) intersects \(A_{v}\), and the last point in any such path before it first meets \(A_{v}\) is necessarily in \(\partial_{\operatorname{vis}(v)}A_{v}\subseteq\mathbb{Z}^{d}\setminus S_{v}\).
In any case, there is a path \(v_{0},v_{1},\dots,v_{N}\) of points in \(\mathbb{Z}^{d}\setminus\bigcup_{A\in\mathcal{A}_{v}}A\) such that \(v_{i-1}\sim v_{i}\) for every \(1\leq i\leq N\), \(v_{0}=v\), \(v_{N}\notin S_{v}\) and
\[I_{\sigma_{0}}(v_{N})=\begin{cases}0&v\in B_{\infty},\\ \tilde{I}_{\sigma_{0}}(v;A_{v})&v\notin B_{\infty}.\end{cases}\]
Let \(I:=\{0\leq i\leq N\colon v_{i}\notin S_{v}\}\). Note that \(v_{i}\notin V_{\sigma_{0}}\) for every \(i\in I\), hence \(I_{\sigma_{0}}(v_{i-1})=I_{\sigma_{0}}(v_{i})\) for every \(1\leq i\leq N\) such that \(\{i-1,i\}\cap I\neq\emptyset\).
Note that \(0\in I\) and \(N\in I\), hence the set \(\{0,1,\dots,N\}\setminus I\) may be presented as \(\bigcup_{j=1}^{r}\{a_{j},a_{j}+1,\dots,b_{j}-1\}\), where \(1\leq a_{1}<b_{1}<a_{2}<b_{2}<\dots<a_{r}<b_{r}\leq N\). For every \(1\leq j\leq r\), Lemma 7.2 implies that there is \(A_{j}\in\mathcal{C}\setminus\mathcal{A}_{v}\) such that \(v_{i}\in A_{j}\cup\operatorname{in}(A_{j})\) for every \(a_{j}\leq i\leq b_{j}-1\); then obviously \(v_{a_{j}-1},v_{b_{j}}\in\partial^{\operatorname{out}}(A_{j}\cup\operatorname{in}(A_{j}))=\partial_{\operatorname{vis}(\infty)}A_{j}\) and hence \(I_{\sigma_{0}}(v_{a_{j}-1})=\tilde{I}_{\sigma_{0}}(\infty;A_{j})=I_{\sigma_{0}}(v_{b_{j}})\), by Observation 7.1.
Hence, \(I_{\sigma_{0}}(v_{0})=I_{\sigma_{0}}(v_{N})\) and the claim follows.
Define a "pre-shift" \(\tau_{0}\colon\mathbb{Z}^{d}\to\mathbb{Z}\cup\{\text{``layered"}\}\) as follows.
\[\tau_{0}(v):=\begin{cases}0&v\in B_{\infty},\\ I_{\sigma_{0}}(v)&v\in\tilde{A},\\ \tilde{I}_{\sigma_{0}}(v;A_{v})&\text{otherwise}.\end{cases}\]
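For orientation, we record a degenerate example (a direct check from the definitions of Section 7.1): if \(\sigma_{0}\) coincides with \(\rho^{\mathrm{Dob}}\), then every column has a unique sign change at height \(0\), no vertex is layered and no two adjacent columns carry different sign-change heights, so that \(\mathcal{C}=\emptyset\), \(\tilde{A}=\emptyset\), \(B_{\infty}=\mathbb{Z}^{d}\) and
\[\tau_{0}\equiv 0.\]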
**Observation 7.4**.: _For every connected component \(B\) of \(\mathbb{Z}^{d}\setminus\tilde{A}\), the function \(\tau_{0}\) is constant on the set \(B\cup\partial^{\operatorname{out}}B\)._
Proof.: Let \(v\in\partial^{\operatorname{out}}B\). Necessarily \(v\in\tilde{A}\) and hence \(\tau_{0}(v)=I_{\sigma_{0}}(v)\). There is \(u\in\partial^{\operatorname{in}}B\) such that \(u\sim v\). Necessarily \(u\in\partial^{\operatorname{out}}\tilde{A}\subseteq\mathbb{Z}^{d}\setminus \tilde{A}\subseteq\mathbb{Z}^{d}\setminus V_{\sigma_{0}}\), therefore \(I_{\sigma_{0}}(v)=I_{\sigma_{0}}(u)\), and by Lemma 7.3, \(I_{\sigma_{0}}(u)=\tau_{0}(u)\). Hence, \(\tau_{0}(v)=\tau_{0}(u)\).
To obtain the claim it is therefore enough to show that \(\tau_{0}\) is constant on \(B\). By definition, \(\tau_{0}=0\) on \(B_{\infty}\). For every \(u,v\in B\neq B_{\infty}\), clearly \(A_{u}=A_{v}\), and moreover \(\partial_{\operatorname{vis}(u)}(A_{u})=\partial_{\operatorname{vis}(v)}(A_{v})\), and hence \(\tau_{0}(u)=\tau_{0}(v)\).
### Defining \(\tau\)
Now, we turn the "pre-shift" \(\tau_{0}\) into a shift \(\tau\colon\mathbb{Z}^{d}\to\mathbb{Z}\). Define a configuration \(\sigma\) which has a unique sign change above every vertex \(v\) where \(\tau_{0}(v)\) is an integer, and the sign change occurs at height \(\tau_{0}(v)\). The value of \(\sigma\) above vertices \(v\) where \(\tau_{0}(v)=\) "layered" equals the value of \(\sigma_{0}\) at these vertices. Note that, by Observation 7.4,
\[\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap(\Lambda \times\mathbb{Z})\neq\emptyset,\,\sigma_{x}\neq\sigma_{y}\}\\ =\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon x,y\in\tilde{A} \times\mathbb{Z},\,(\sigma_{0})_{x}\neq(\sigma_{0})_{y}\}. \tag{7.1}\]
By Corollary 3.3 there is an interfacial configuration \(\sigma^{\prime}\) that has a unique sign change above every vertex, which, above each vertex where \(\sigma\) has a unique sign change, changes sign at the same height as \(\sigma\), and
\[|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap( \Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma^{\prime}_{x}\neq\sigma^{\prime }_{y}\}|\\ \leq|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap( \Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma_{x}\neq\sigma_{y}\}|. \tag{7.2}\]
For every \(v\in\mathbb{Z}^{d}\), let \(\tau(v)\) be the height where \(\sigma^{\prime}\) changes sign over \(v\).
Note that \(\tau(v)=\tau_{0}(v)\) for every \(v\in\mathbb{Z}^{d}\) such that \(\tau_{0}(v)\neq\) "layered", in particular for every \(v\in(\mathbb{Z}^{d}\setminus\tilde{A})\cup\partial^{\operatorname{out}}( \mathbb{Z}^{d}\setminus\tilde{A})\). This observation, combined with Observation 7.4 yields the following useful corollary.
**Corollary 7.5**.: _For every connected component \(B\) of \(\mathbb{Z}^{d}\setminus\tilde{A}\), the function \(\tau\) is constant on the set \(B\cup\partial^{\operatorname{out}}B\)._
### Bounding the total variation of \(\tau\) via a layering bound
This section is devoted to proving the following proposition.
**Proposition 7.6**.: _The shift \(\tau\) as defined above satisfies_
\[\mathcal{H}^{\eta,\Lambda}(\sigma_{0})-\operatorname{GE}^{\Lambda}(\eta^{ \tau})\geq 2\alpha^{\parallel}\left(\mathcal{L}^{\parallel}_{\tilde{A}}(\sigma_{ 0})-|\tilde{A}|\right)+2\alpha^{\perp}\mathcal{L}^{\perp}_{\tilde{A}}(\sigma_{ 0}) \tag{7.3}\]
_and consequently,_
\[\mathcal{H}^{\eta,\Lambda}(\sigma_{0})-\operatorname{GE}^{\Lambda}(\eta^{ \tau})\geq 2\alpha^{\parallel}\left(\mathcal{L}^{\parallel}_{E}(\sigma_{0})-|E| \right)+2\alpha^{\perp}\max\{\mathcal{L}^{\perp}_{E}(\sigma_{0}),\operatorname{ TV}(\tau)\}. \tag{7.4}\]
For every coupling field \(\tilde{\eta}\colon E(\mathbb{Z}^{d+1})\to[0,\infty)\) and configuration \(\sigma\in\Omega^{\Lambda,\mathrm{Dob}}\), denote
\[\mathcal{H}^{\tilde{\eta},\Lambda,\parallel}(\sigma) :=2\sum_{\begin{subarray}{c}\{x,y\}\in E^{\parallel}(\mathbb{Z}^{d +1})\\ \{x,y\}\cap(\Lambda\times\mathbb{Z})\neq\emptyset\end{subarray}}\tilde{\eta}_{\{ x,y\}}1_{\sigma_{x}\neq\sigma_{y}}=2\sum_{u\in\Lambda}\sum_{k\in\mathbb{Z}}\tilde{ \eta}_{\{(u,k),(u,k+1)\}}1_{\sigma_{(u,k)}\neq\sigma_{(u,k+1)}},\] \[\mathcal{H}^{\tilde{\eta},\Lambda,\perp}(\sigma) :=2\sum_{\begin{subarray}{c}\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1} )\\ \{x,y\}\cap(\Lambda\times\mathbb{Z})\neq\emptyset\end{subarray}}\tilde{\eta}_{\{ x,y\}}1_{\sigma_{x}\neq\sigma_{y}}=2\sum_{\begin{subarray}{c}\{u,v\}\in E( \mathbb{Z}^{d})\\ \{u,v\}\cap\Lambda\neq\emptyset\end{subarray}}\sum_{k\in\mathbb{Z}}\tilde{ \eta}_{\{(u,k),(v,k)\}}1_{\sigma_{(u,k)}\neq\sigma_{(v,k)}}.\]
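These two quantities are the contributions of the parallel and of the perpendicular edges, respectively, so that the full Hamiltonian decomposes as
\[\mathcal{H}^{\tilde{\eta},\Lambda}(\sigma)=\mathcal{H}^{\tilde{\eta},\Lambda,\parallel}(\sigma)+\mathcal{H}^{\tilde{\eta},\Lambda,\perp}(\sigma),\]
a decomposition used in the proof of Proposition 7.6 below.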
Define a configuration \(\tilde{\sigma}\in\Omega^{\Lambda,\mathrm{Dob}}\) as follows:
\[\tilde{\sigma}_{(u,k)}=\begin{cases}(\sigma_{0})_{(u,k+\tau(u))}&u\in\Lambda\setminus\tilde{A},\\ 1&u\in\tilde{A},\,k>0,\\ -1&u\in\tilde{A},\,k\leq 0.\end{cases}\]
**Observation 7.7**.: _If \((u,v)\in\partial\tilde{A}\), i.e., \(u\in\tilde{A}\) and \(v\notin\tilde{A}\), then \(\tilde{\sigma}_{(u,k)}=(\sigma_{0})_{(u,k+\tau(u))}\) for all \(k\in\mathbb{Z}\)._
Proof.: Necessarily \(v\notin V_{\sigma_{0}}\) and hence \(\tau_{0}(u)=I_{\sigma_{0}}(u)=I_{\sigma_{0}}(v)\neq\text{``layered''}\). Therefore, \(\tau(u)=\tau_{0}(u)=I_{\sigma_{0}}(u)\). Hence, \((\sigma_{0})_{(u,k)}=1\) for every \(k>\tau(u)\) and \((\sigma_{0})_{(u,k)}=-1\) for every \(k\leq\tau(u)\), i.e., \(\tilde{\sigma}_{(u,k)}=(\sigma_{0})_{(u,k+\tau(u))}\).
Proposition 7.6 will easily follow from the following lemmas.
**Lemma 7.8**.: _It holds that_
\[\mathcal{H}^{\eta,\Lambda,\parallel}(\sigma_{0})-\mathcal{H}^{\eta^{\tau}, \Lambda,\parallel}(\tilde{\sigma})\geq 2\alpha^{\parallel}\left(\mathcal{L}^{ \parallel}_{\tilde{A}}(\sigma_{0})-|\tilde{A}|\right).\]
**Lemma 7.9**.: _It holds that_
\[\mathcal{H}^{\eta,\Lambda,\perp}(\sigma_{0})-\mathcal{H}^{\eta^{\tau}, \Lambda,\perp}(\tilde{\sigma})\geq 2\alpha^{\perp}\mathcal{L}^{\perp}_{ \tilde{A}}(\sigma_{0}).\]
Before proving Lemmas 7.8 and 7.9, let us show how they imply Proposition 7.6.
Proof of Proposition 7.6.: Combining Lemmas 7.8 and 7.9 yields that
\[\mathcal{H}^{\eta,\Lambda}(\sigma_{0})-\mathcal{H}^{\eta^{\tau},\Lambda}(\tilde{\sigma}) =\left(\mathcal{H}^{\eta,\Lambda,\parallel}(\sigma_{0})+\mathcal{H }^{\eta,\Lambda,\perp}(\sigma_{0})\right)-\left(\mathcal{H}^{\eta^{\tau}, \Lambda,\parallel}(\tilde{\sigma})+\mathcal{H}^{\eta^{\tau},\Lambda,\perp}( \tilde{\sigma})\right)\] \[=\left(\mathcal{H}^{\eta,\Lambda,\parallel}(\sigma_{0})-\mathcal{ H}^{\eta^{\tau},\Lambda,\parallel}(\tilde{\sigma})\right)+\left(\mathcal{H}^{ \eta,\Lambda,\perp}(\sigma_{0})-\mathcal{H}^{\eta^{\tau},\Lambda,\perp}( \tilde{\sigma})\right)\] \[\geq 2\alpha^{\parallel}\left(\mathcal{L}^{\parallel}_{\tilde{A}} (\sigma_{0})-|\tilde{A}|\right)+2\alpha^{\perp}\mathcal{L}^{\perp}_{\tilde{A} }(\sigma_{0}),\]
and (7.3) follows since obviously \(\mathrm{GE}^{\Lambda}(\eta^{\tau})\leq\mathcal{H}^{\eta^{\tau},\Lambda}( \tilde{\sigma})\). We proceed to show how (7.3) implies (7.4). Note first that \(E\cap V_{\sigma_{0}}\subseteq\tilde{A}\) and hence
\[\mathcal{L}^{\parallel}_{\tilde{A}}(\sigma_{0})-|\tilde{A}|\geq\mathcal{L}^{ \parallel}_{E\cap V_{\sigma_{0}}}(\sigma_{0})-|E\cap V_{\sigma_{0}}|,\quad \mathcal{L}^{\perp}_{\tilde{A}}(\sigma_{0})\geq\mathcal{L}^{\perp}_{E\cap V_{ \sigma_{0}}}(\sigma_{0}).\]
Additionally, note that \(\mathcal{L}^{\parallel}_{\{u\}}(\sigma_{0})=1\) for every \(u\notin V_{\sigma_{0}}\) and \(\mathcal{L}^{\perp}_{\{u,v\}}(\sigma_{0})=0\) for every \(\{u,v\}\in E(\mathbb{Z}^{d})\) such that \(\{u,v\}\nsubseteq V_{\sigma_{0}}\) and hence,
\[\mathcal{L}^{\parallel}_{E\cap V_{\sigma_{0}}}(\sigma_{0})-|E\cap V_{\sigma_{0} }|=\mathcal{L}^{\parallel}_{E}(\sigma_{0})-|E|,\quad\mathcal{L}^{\perp}_{E\cap V _{\sigma_{0}}}(\sigma_{0})=\mathcal{L}^{\perp}_{E}(\sigma_{0}).\]
Finally, by (7.2) and (7.1),
\[\mathcal{L}^{\perp}_{\tilde{A}}(\sigma_{0}) =|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon x,y\in\tilde{A} \times\mathbb{Z},\,(\sigma_{0})_{x}\neq(\sigma_{0})_{y}\}|\] \[=|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap( \Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma_{x}\neq\sigma_{y}\}|\] \[\geq|\{\{x,y\}\in E^{\perp}(\mathbb{Z}^{d+1})\colon\{x,y\}\cap( \Lambda\times\mathbb{Z})\neq\emptyset,\,\sigma_{x}^{\prime}\neq\sigma_{y}^{ \prime}\}|=\operatorname{TV}(\tau).\qed\]
We now prove Lemmas 7.8 and 7.9.
Proof of Lemma 7.8.: If \(u\in\Lambda\setminus\tilde{A}\), then for every \(k\in\mathbb{Z}\),
\[\eta^{\tau}_{\{(u,k),(u,k+1)\}}=\eta_{\{(u,k+\tau(u)),(u,k+1+ \tau(u))\}},\] \[\tilde{\sigma}_{(u,k)}=(\sigma_{0})_{(u,k+\tau(u))},\quad\tilde{ \sigma}_{(u,k+1)}=(\sigma_{0})_{(u,k+1+\tau(u))},\]
and hence,
\[\sum_{k\in\mathbb{Z}}\eta_{\{(u,k),(u,k+1)\}}1_{(\sigma_{0})_{(u, k)}\neq(\sigma_{0})_{(u,k+1)}}= \sum_{k\in\mathbb{Z}}\eta_{\{(u,k+\tau(u)),(u,k+1+\tau(u))\}}1_{( \sigma_{0})_{(u,k+\tau(u))}\neq(\sigma_{0})_{(u,k+1+\tau(u))}}\] \[= \sum_{k\in\mathbb{Z}}\eta^{\tau}_{\{(u,k),(u,k+1)\}}1_{\tilde{ \sigma}_{(u,k)}\neq\tilde{\sigma}_{(u,k+1)}}.\]
If \(u\in\tilde{A}\), then
\[\sum_{k\in\mathbb{Z}}\eta^{\tau}_{\{(u,k),(u,k+1)\}}1_{\tilde{ \sigma}_{(u,k)}\neq\tilde{\sigma}_{(u,k+1)}}=\eta^{\tau}_{\{(u,0),(u,1)\}}= \eta_{\{(u,\tau(u)),(u,\tau(u)+1)\}}\]
and hence, since by the definition of \(\tau\), the configuration \(\sigma_{0}\) has a sign change at height \(\tau(u)\), i.e., \((\sigma_{0})_{(u,\tau(u))}\neq(\sigma_{0})_{(u,\tau(u)+1)}\),
\[\sum_{k\in\mathbb{Z}}\eta_{\{(u,k),(u,k+1)\}}1_{(\sigma_{0})_{(u, k)}\neq(\sigma_{0})_{(u,k+1)}}-\sum_{k\in\mathbb{Z}}\eta^{\tau}_{\{(u,k),(u,k+1) \}}1_{\tilde{\sigma}_{(u,k)}\neq\tilde{\sigma}_{(u,k+1)}}\\ =\sum_{k\in\mathbb{Z}}\eta_{\{(u,k),(u,k+1)\}}1_{(\sigma_{0})_{ (u,k)}\neq(\sigma_{0})_{(u,k+1)}}-\eta_{\{(u,\tau(u)),(u,\tau(u)+1)\}}\geq \alpha^{\parallel}\left(\mathcal{L}^{\parallel}_{\{u\}}(\sigma_{0})-1\right).\]
The result follows.
Proof of Lemma 7.9.: For every \(\{u,v\}\in E(\mathbb{Z}^{d})\) such that \(\{u,v\}\cap\Lambda\neq\emptyset\) and \(\{u,v\}\nsubseteq\tilde{A}\), it holds that \(u,v\in B\cup\partial^{\operatorname{out}}B\) for some connected component \(B\) of \(\mathbb{Z}^{d}\setminus\tilde{A}\) and hence \(\tau(u)=\tau(v)\), by Corollary 7.5. Hence, by (4.9), \(\eta^{\tau}_{\{(u,k),(v,k)\}}=\eta_{\{(u,k+\tau(u)),(v,k+\tau(u))\}}\) for every \(k\in\mathbb{Z}\). Additionally, with the aid of Observation 7.7 in case that \(|\{u,v\}\cap\tilde{A}|=1\), it holds that \(\tilde{\sigma}_{(u,k)}=(\sigma_{0})_{(u,k+\tau(u))}\) and \(\tilde{\sigma}_{(v,k)}=(\sigma_{0})_{(v,k+\tau(v))}=(\sigma_{0})_{(v,k+\tau(u))}\) for every \(k\in\mathbb{Z}\). Therefore,
\[\sum_{k\in\mathbb{Z}}\eta_{\{(u,k),(v,k)\}}1_{(\sigma_{0})_{(u,k)} \neq(\sigma_{0})_{(v,k)}}= \sum_{k\in\mathbb{Z}}\eta_{\{(u,k+\tau(u)),(v,k+\tau(u))\}}1_{( \sigma_{0})_{(u,k+\tau(u))}\neq(\sigma_{0})_{(v,k+\tau(u))}}\] \[= \sum_{k\in\mathbb{Z}}\eta^{\tau}_{\{(u,k),(v,k)\}}1_{\tilde{ \sigma}_{(u,k)}\neq\tilde{\sigma}_{(v,k)}}\]
Hence, since for every \(\{u,v\}\in E(\mathbb{Z}^{d})\) such that both \(u\) and \(v\) are in \(\tilde{A}\) it holds that \(\tilde{\sigma}_{(u,k)}=\tilde{\sigma}_{(v,k)}\) for every integer \(k\), it follows that
\[\mathcal{H}^{\eta,\Lambda,\perp}(\sigma_{0})-\mathcal{H}^{\eta^{ \tau},\Lambda,\perp}(\tilde{\sigma}) =2\sum_{\begin{subarray}{c}\{u,v\}\in E(\mathbb{Z}^{d})\\ u,v\in\tilde{A}\end{subarray}}\sum_{k\in\mathbb{Z}}\eta_{\{(u,k),(v,k)\}}1_{( \sigma_{0})_{(u,k)}\neq(\sigma_{0})_{(v,k)}}\] \[\geq 2\alpha^{\perp}\sum_{\begin{subarray}{c}\{u,v\}\in E( \mathbb{Z}^{d})\\ u,v\in\tilde{A}\end{subarray}}\sum_{k\in\mathbb{Z}}1_{(\sigma_{0})_{(u,k)} \neq(\sigma_{0})_{(v,k)}}=2\alpha^{\perp}\mathcal{L}_{\tilde{A}}^{\perp}( \sigma_{0}).\qed\]
### Bounding the trip entropy of \(\tau\)
This section is devoted to proving the following proposition.
**Proposition 7.10**.: _The shift \(\tau\) as defined above satisfies_
\[R(\tau)<R(E)+2(|E|-1)+\frac{16}{\min\{\alpha^{\parallel},\alpha^{\perp}\}d} \left(\mathcal{H}^{\eta,\Lambda}(\sigma_{0})-\mathrm{GE}^{\Lambda}(\eta^{\tau} )\right).\]
Denote \(\mathcal{L}:=\{u\in\mathbb{Z}^{d}\colon I_{\sigma_{0}}(u)=\text{``layered''}\}\) and \(\mathcal{D}:=\{\{u,v\}\in E(\mathbb{Z}^{d})\colon I_{\sigma_{0}}(u)\neq I_{ \sigma_{0}}(v)\}\).
**Observation 7.11**.: _It holds that \(\mathcal{L}\subseteq V_{\sigma_{0}}\) and for every \(\{u,v\}\in\mathcal{D}\), the vertices \(u\) and \(v\) are both in \(V_{\sigma_{0}}\) and moreover, belong to the same connected component of \(G_{\sigma_{0}}\)._
**Observation 7.12**.: _For every connected component \(A\) of the graph \(G_{\sigma_{0}}\) it holds that_
\[|A|\leq 2\left(\mathcal{L}_{A}^{\parallel}(\sigma_{0})-|A|+\mathcal{L}_{A}^{ \perp}(\sigma_{0})\right).\]
Proof.: Every \(u\in A\setminus\mathcal{L}\) has at least one neighbour \(v\) such that \(\{u,v\}\in\mathcal{D}\). Hence, by using Observation 7.11, it holds that
\[|A|\leq|\mathcal{L}\cap A|+2\left|\mathcal{D}\cap\binom{A}{2}\right|\leq\frac {1}{2}\left(\mathcal{L}_{A}^{\parallel}(\sigma_{0})-|A|\right)+2\mathcal{L}_{ A}^{\perp}(\sigma_{0}).\qed\]
**Lemma 7.13**.: _Let \(A\) be a connected component of the graph \(G_{\sigma_{0}}\). There is a set \(S\subseteq A\) such that \(A\subseteq\bigcup_{a\in S}\mathcal{B}_{4}(a)\) and_
\[|S|<\frac{1}{d}\left(\mathcal{L}_{A}^{\parallel}(\sigma_{0})-|A|+\mathcal{L}_ {A}^{\perp}(\sigma_{0})\right).\]
Proof.: We first show that for every \(a\in A\),
\[|\mathcal{L}\cap\mathcal{B}_{2}(a)\cap A|+\left|\mathcal{D}\cap\binom{ \mathcal{B}_{2}(a)\cap A}{2}\right|>d. \tag{7.5}\]
If \(I_{\sigma_{0}}(a)=\text{``layered''}\), then \(a\in\mathcal{L}\) and for every neighbour \(b\) of \(a\) it holds that \(b\in A\) and either \(b\in\mathcal{L}\) or \(\{a,b\}\in\mathcal{D}\). Hence,
\[|\mathcal{L}\cap\mathcal{B}_{1}(a)\cap A|+\left|\mathcal{D}\cap\binom{ \mathcal{B}_{1}(a)\cap A}{2}\right|\geq 2d+1.\]
If \(I_{\sigma_{0}}(a)\neq\text{``layered''}\), then since \(a\in V_{\sigma_{0}}\), it follows that there is a neighbour \(b\) of \(a\) such that \(I_{\sigma_{0}}(b)\neq I_{\sigma_{0}}(a)\). Note that \(b\in A\) and \(\{a,b\}\in\mathcal{D}\). With no loss of generality we may assume that \(b=a+e_{d}\). Then, for every \(v\in\{\pm e_{i}\}_{i=1}^{d-1}\) it holds that \(\{\{a,a+v\},\{b,b+v\},\{a+v,b+v\}\}\cap\mathcal{D}\neq\emptyset\). Hence, by using Observation 7.11,
\[\left|\mathcal{D}\cap\binom{\mathcal{B}_{2}(a)\cap A}{2}\right|\geq 2d-1.\]
Now, let \(S\) be a set of maximal cardinality in \(A\) such that the sets \(\{\mathcal{B}_{2}(a)\}_{a\in S}\) are mutually disjoint. The maximality of \(S\) implies that \(A\subseteq\bigcup_{a\in S}\mathcal{B}_{4}(a)\), and by (7.5),
\[|S| <\frac{1}{d}\sum_{a\in S}\left(|\mathcal{L}\cap\mathcal{B}_{2}(a) \cap A|+\left|\mathcal{D}\cap{\mathcal{B}_{2}(a)\cap A\choose 2}\right|\right)\] \[\leq\frac{1}{d}\left(|\mathcal{L}\cap A|+\left|\mathcal{D}\cap{A \choose 2}\right|\right)\leq\frac{1}{d}\left(\frac{1}{2}\left(\mathcal{L}_{A} ^{\parallel}(\sigma_{0})-|A|\right)+\mathcal{L}_{A}^{\perp}(\sigma_{0})\right).\qed\]
Proof of Proposition 7.10.: By Corollary 7.5, every level component of \(\tau\) intersects \(\tilde{A}\). Hence, by (6.18) and Lemma 7.13,
\[R(\tau)< R(E)+2(|E|-1)+\] \[\sum_{A\in\mathcal{A}}\left(2\mathrm{dist}(E,A)+\frac{20}{d} \left(\mathcal{L}_{A}^{\parallel}(\sigma_{0})-|A|+\mathcal{L}_{A}^{\perp}( \sigma_{0})\right)\right)+8|\mathcal{LC}(\tau)|. \tag{7.6}\]
Let \(A\in\mathcal{A}\) and let \(v\in E\cap(A\cup\mathrm{in}(A))\). If \(v\in A\), then \(\mathrm{dist}(E,A)=0\). Otherwise, let \(B:=\mathcal{B}_{\mathrm{dist}(v,A)}(0)\cap\left(\mathbb{Z}^{d-1}\times\{0\}\right)\). For every \(u\in B\), since \(v\in\mathrm{in}(A)\), it holds that \(\{v+u+t\,e_{d}\colon t\in\mathbb{Z}\}\cap A\neq\emptyset\) and it follows that \(|B|\leq|A|\). Therefore, since the sequence \(\left(\frac{1}{d}\binom{r+d-1}{d-1}\right)_{d=1}^{\infty}\) is non-decreasing for every positive integer \(r\),
\[\mathrm{dist}(v,A)\leq\frac{1}{3}{\mathrm{dist}(v,A)+2\choose 2}\leq\frac{1}{d}{ \mathrm{dist}(v,A)+d-1\choose d-1}=\frac{1}{d}|B\cap[0,\infty)^{d}|<\frac{1}{ d}|B|\leq\frac{1}{d}|A|\]
and hence, by Observation 7.12,
\[\mathrm{dist}(E,A)\leq\mathrm{dist}(v,A)<\frac{2}{d}\left(\mathcal{L}_{A}^{ \parallel}(\sigma_{0})-|A|+\mathcal{L}_{A}^{\perp}(\sigma_{0})\right).\]
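For completeness, we verify the elementary inequality \(r\leq\frac{1}{3}\binom{r+2}{2}\) used above (with \(r=\mathrm{dist}(v,A)\)): for every integer \(r\geq 0\),
\[\binom{r+2}{2}-3r=\frac{(r+1)(r+2)-6r}{2}=\frac{(r-1)(r-2)}{2}\geq 0.\]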
Plugging the bound on \(\mathrm{dist}(E,A)\) into (7.6) yields that
\[R(\tau) \leq R(E)+2(|E|-1)+\frac{24}{d}\sum_{A\in\mathcal{A}}\left( \mathcal{L}_{A}^{\parallel}(\sigma_{0})-|A|+\mathcal{L}_{A}^{\perp}(\sigma_{0} )\right)+8|\mathcal{LC}(\tau)|\] \[=R(E)+2(|E|-1)+\frac{24}{d}\left(\mathcal{L}_{\tilde{A}}^{ \parallel}(\sigma_{0})-|\tilde{A}|+\mathcal{L}_{\tilde{A}}^{\perp}(\sigma_{0} )\right)+8|\mathcal{LC}(\tau)|.\]
The result now follows by (7.3) and since
\[|\mathcal{LC}(\tau)|\leq\frac{1}{d}\operatorname{TV}(\tau)\leq\frac{1}{2\alpha^{\perp}d}\left(\mathcal{H}^{\eta,\Lambda}(\sigma_{0})-\mathrm{GE}^{\Lambda}(\eta^{\tau})\right)\]
by Observation 6.20 and (7.4).
### Proof of Proposition 6.29 and Lemma 4.4
Proof of Proposition 6.29.: Let \(\sigma_{0}:=\sigma^{\eta,\Lambda,\mathrm{Dob}}\) and consider the shift \(\tau\) as defined above. Note that
\[G^{\eta,\Lambda}(\tau)=\mathrm{GE}^{\Lambda}(\eta)-\mathrm{GE}^{\Lambda}(\eta ^{\tau})=\mathcal{H}^{\eta,\Lambda}\left(\sigma_{0}\right)-\mathrm{GE}^{ \Lambda}(\eta^{\tau}) \tag{7.7}\]
and the result follows by (7.4) and Proposition 7.10.
In the proof of Lemma 4.4 we will use the following lemma, which is an immediate consequence of [13, Lemma 1].
**Lemma 7.14**.: _For a positive integer \(\ell\), let \(\mathcal{G}_{\ell}\) be the graph on the vertex set \(E(\mathbb{Z}^{\ell})\), in which distinct \(e,\tilde{e}\in E(\mathbb{Z}^{\ell})\) are adjacent if_
\[e,\tilde{e}\in\{\{u,u+e_{i}\},\{u+e_{i},u+e_{i}+e_{j}\},\{u+e_{i}+e_{j},u+e_{j}\},\{u+e_{j},u\}\}\]
_for some \(u\in\mathbb{Z}^{\ell}\) and \(1\leq i<j\leq\ell\). If \(A\subseteq\mathbb{Z}^{\ell}\) is a connected set such that \(\mathbb{Z}^{\ell}\setminus A\) is connected as well, then the set of edges \(\{\{u,v\}\in E(\mathbb{Z}^{\ell})\colon(u,v)\in\partial A\}\) is connected in \(\mathcal{G}_{\ell}\)._
Proof of Lemma 4.4.: Let \(\sigma_{0}:=\sigma^{\eta,\Lambda,\operatorname{Dob}}\), let \(E=\{0\}\), and consider the shift \(\tau\) as defined above. By using (7.7), it follows from (7.4) that \(G^{\eta,\Lambda}(\tau)\geq 2\alpha^{\perp}\operatorname{TV}(\tau)\) and Proposition 7.10 yields that \(G^{\eta,\Lambda}(\tau)\geq\frac{1}{16}\min\{\alpha^{\parallel},\alpha^{\perp}\}d\,R(\tau)\); hence, \(\tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})\); moreover, by (7.3), \(G^{\eta,\Lambda}(\tau)\geq 2\alpha^{\perp}\mathcal{L}^{\perp}_{\tilde{A}}(\sigma_{0})\). Hence, it is left to show that \(\mathcal{L}^{\perp}_{\tilde{A}}(\sigma_{0})\geq|k|\).
With no loss of generality, assume that \(k>0\) and that \(k=\max\{h\in\mathbb{Z}\colon(\sigma_{0})_{(0,h)}=-1\}\).
Define \(\pi_{d+1}:\mathbb{Z}^{d}\times\mathbb{Z}\to\mathbb{Z}\) by \(\pi_{d+1}(u,h)=h\), and for every \(\{u,v\}\in E(\mathbb{Z}^{d+1})\), let \(\pi_{d+1}(\{u,v\})=\{\pi_{d+1}(u),\pi_{d+1}(v)\}\). Note that if \(e,\tilde{e}\in E(\mathbb{Z}^{d+1})\) are adjacent in the graph \(\mathcal{G}_{d+1}\) (defined above in Lemma 7.14), then \(\pi_{d+1}(e)\subseteq\pi_{d+1}(\tilde{e})\) or \(\pi_{d+1}(e)\supseteq\pi_{d+1}(\tilde{e})\).
Let \(X:=\{x\in\mathbb{Z}^{d+1}\colon\sigma_{0}(x)=1\}\). The set \(\{x\in X\colon\text{there is no path in $X$ from $x$ to $\infty$}\}\) is necessarily empty, otherwise we could flip all signs in this finite set to get a configuration in \(\Omega^{\Lambda,\operatorname{Dob}}\) with smaller \(H^{\eta}\). Hence, every point of \(X\) admits a path in \(X\) to \(\infty\), and since \(\sigma_{0}\) coincides with \(\rho^{\operatorname{Dob}}\) outside a finite set, any two such paths may be connected to each other within \(X\) far away from the origin, so that \(X\) is connected. Similarly, the set \(\mathbb{Z}^{d+1}\setminus X=\{y\in\mathbb{Z}^{d+1}\colon\sigma_{0}(y)=-1\}\) is connected as well. Hence, by Lemma 7.14, the set
\[\mathcal{I}(\sigma_{0})=\{\{x,y\}\in E(\mathbb{Z}^{d+1})\colon(x,y)\in \partial X\}=\{\{x,y\}\in E(\mathbb{Z}^{d+1})\colon\sigma_{0}(x)=1,\sigma_{0} (y)=-1\}\]
is connected in \(\mathcal{G}_{d+1}\).
We now argue similarly to the proof of Lemma 7.3. Recall that \(\mathcal{C}\) is the collection of all connected components of the graph \(G_{\sigma_{0}}\) and \(\mathcal{A}=\{A\in\mathcal{C}\colon 0\in A\cup\operatorname{in}(A)\}\), since \(E=\{0\}\). Since \(\sigma_{0}=\rho^{\operatorname{Dob}}\) at all but finitely many points of \(\mathbb{Z}^{d+1}\) (so that, in particular, the set \(\bigcup_{A\in\mathcal{C}}(A\cup\operatorname{in}(A))\) is finite), there is \(w\in\mathbb{Z}^{d}\setminus\bigcup_{A\in\mathcal{C}}(A\cup\operatorname{in}(A))\) such that \((\sigma_{0})_{(w,k)}=\rho^{\operatorname{Dob}}_{(w,k)}\) for every integer \(k\).
Since the set \(\mathcal{I}(\sigma_{0})\) is connected in \(\mathcal{G}_{d+1}\), there is a sequence \((\tilde{e}_{i})_{i=0}^{N}\) of edges in \(\mathcal{I}(\sigma_{0})\) such that \(\tilde{e}_{i-1},\tilde{e}_{i}\) are adjacent in \(\mathcal{G}_{d+1}\) for every \(1\leq i\leq N\), \(\tilde{e}_{0}=\{(0,k+1),(0,k)\}\) and \(\tilde{e}_{N}=\{(w,1),(w,0)\}\). Let
\[I:=\left\{0\leq i\leq N\colon\tilde{e}_{i}\nsubseteq\bigcup_{A\in\mathcal{C} \setminus\mathcal{A}}(A\cup\operatorname{in}(A))\times\mathbb{Z}\right\}.\]
Since \(0\in I\) and \(N\in I\), the set \(\{0,1,\ldots,N\}\setminus I\) may be presented as \(\bigcup_{j=1}^{r}\{a_{j},a_{j}+1,\ldots,b_{j}-1\}\), where \(1\leq a_{1}<b_{1}<a_{2}<b_{2}<\cdots<a_{r}<b_{r}\leq N\).
Fix \(1\leq j\leq r\). Lemma 7.2 implies that there is \(A\in\mathcal{C}\setminus\mathcal{A}\) such that \(\tilde{e}_{i}\subseteq(A\cup\operatorname{in}(A))\times\mathbb{Z}\) for every \(a_{j}\leq i\leq b_{j}-1\). Then, \(\tilde{e}_{a_{j}-1}\) has at least one endpoint \((v,h)\) such that \(v\in\partial^{\operatorname{out}}(A\cup\operatorname{in}(A))=\partial_{\operatorname{vis}(\infty)}A\); Observation 7.1 implies that \(h=I_{\sigma_{0}}(v)=\tilde{I}_{\sigma_{0}}(\infty;A)\) and therefore, \(\tilde{I}_{\sigma_{0}}(\infty;A)\in\pi_{d+1}(\tilde{e}_{a_{j}-1})\). Similarly, \(\tilde{I}_{\sigma_{0}}(\infty;A)\in\pi_{d+1}(\tilde{e}_{b_{j}})\) and hence \(\pi_{d+1}(\tilde{e}_{a_{j}-1})\cap\pi_{d+1}(\tilde{e}_{b_{j}})\neq\emptyset\).
Since \(\pi_{d+1}(\tilde{e}_{i-1})\subseteq\pi_{d+1}(\tilde{e}_{i})\) or \(\pi_{d+1}(\tilde{e}_{i-1})\supseteq\pi_{d+1}(\tilde{e}_{i})\) for every \(1\leq i\leq N\), \(\pi_{d+1}(\tilde{e}_{a_{j}-1})\cap\pi_{d+1}(\tilde{e}_{b_{j}})\neq\emptyset\) for every \(1\leq j\leq r\), \(\pi_{d+1}(\tilde{e}_{0})=\{k+1,k\}\) and \(\pi_{d+1}(\tilde{e}_{N})=\{1,0\}\), it follows that for every \(1\leq h\leq k\) there is \(i_{h}\in I\) such that \(\pi_{d+1}(\tilde{e}_{i_{h}})=\{h\}\). Then, for every \(1\leq h\leq k\) it holds that \(\tilde{e}_{i_{h}}\in\mathcal{I}(\sigma_{0})\cap E^{\perp}(\mathbb{Z}^{d+1})\) and \(\tilde{e}_{i_{h}}\cap(\tilde{A}\times\mathbb{Z})\neq\emptyset\) and hence,
\[\mathcal{L}^{\perp}_{\tilde{A}}(\sigma_{0})=|\{e\in\mathcal{I}(\sigma_{0})\cap E ^{\perp}(\mathbb{Z}^{d+1})\colon e\cap(\tilde{A}\times\mathbb{Z})\neq \emptyset\}|\geq k,\]
as desired.
## 8. Convergence of finite-volume ground configurations
In this section we prove Theorem 1.9, Corollary 1.10 and Theorem 1.11.
We assume throughout the section that we work with the anisotropic disordered ferromagnet in dimension \(D\geq 4\), that the disorder distributions \(\nu^{\|}\) and \(\nu^{\perp}\) satisfy (1.11), and that condition (1.14) holds with a sufficiently small constant \(c>0\), so that the assumptions of both Theorem 1.7 and Theorem 4.3 hold.
We introduce the notation, for integer \(k\geq 0\),
\[\Lambda(k):=\{-k,\ldots,k\}^{d}. \tag{8.1}\]
### Proof of Theorem 1.9
The proof is based on the following deterministic lemma, which is proved using similar methods as those in Section 7.
**Lemma 8.1**.: _Let \(\eta\in\mathcal{D}(\alpha^{\|},\alpha^{\perp})\) for some \(\alpha^{\|},\alpha^{\perp}>0\). Let \(L_{1}>L_{0}\geq 0\) be integers. Let \(\Lambda^{1},\Lambda^{2}\subset\mathbb{Z}^{d}\) be finite subsets containing \(\Lambda(L_{1})\). If_
\[\sigma^{\eta,\Lambda^{1},\mathrm{Dob}}|_{\Lambda(L_{0})\times\mathbb{Z}} \not\equiv\sigma^{\eta,\Lambda^{2},\mathrm{Dob}}|_{\Lambda(L_{0})\times \mathbb{Z}} \tag{8.2}\]
_then for \(i=1\) or for \(i=2\) there exists \(\tau\in\mathcal{AS}^{\eta,\Lambda^{i}}(\alpha^{\|},\alpha^{\perp})\) with_
\[G^{\eta,\Lambda^{i}}(\tau)\geq\frac{\min\{\alpha^{\|},\alpha^{\perp}\}}{4}(L_{ 1}-L_{0})^{1-\frac{1}{d}}. \tag{8.3}\]
We postpone the proof of the lemma to Section 8.4. The following is an immediate consequence of the lemma and Theorem 4.3 (applied with \(\Lambda=\Lambda^{1}\) and with \(\Lambda=\Lambda^{2}\)).
**Corollary 8.2**.: _There exist constants \(C,c>0\) such that the following holds under the assumptions of Theorem 4.3 (for a sufficiently small \(c_{0}>0\)). Let \(L_{1}>L_{0}\geq 0\) be integers. Let \(\Lambda^{1},\Lambda^{2}\subset\mathbb{Z}^{d}\) be finite subsets containing \(\Lambda(L_{1})\). Then_
\[\mathbb{P}\left(\sigma^{\eta,\Lambda^{1},\mathrm{Dob}}|_{\Lambda (L_{0})\times\mathbb{Z}}\not\equiv\sigma^{\eta,\Lambda^{2},\mathrm{Dob}}|_{ \Lambda(L_{0})\times\mathbb{Z}}\right)\\ \leq C\exp\left(-\frac{c}{\kappa d^{2}}\left(\min\left\{\frac{ \alpha^{\|}}{\alpha^{\perp}},1\right\}\right)^{\frac{d-2}{d-1}}(L_{1}-L_{0})^{ \frac{d-2}{d}}\right). \tag{8.4}\]
We proceed to prove Theorem 1.9. It suffices to prove that for every sequence \((\Lambda_{n})\) of finite domains in \(\mathbb{Z}^{d}\), satisfying that \(\Lambda_{n}\supset\Lambda(n)\) for each \(n\), and for every \(v\in\mathbb{Z}^{d}\),
\[\text{the restricted configuration }\sigma^{\eta,\Lambda_{n},\mathrm{Dob}}|_{\{v\}\times\mathbb{Z}}\text{ is the same for all large }n\text{, almost surely.} \tag{8.5}\]
Indeed, if we then define
\[\text{for each }v\in\mathbb{Z}^{d},\,\sigma^{\eta,\mathrm{Dob}}|_{\{v\}\times \mathbb{Z}}\text{ is the eventual value of }\sigma^{\eta,\Lambda(n),\mathrm{Dob}}|_{\{v\}\times\mathbb{Z}}\text{ as }n\to\infty \tag{8.6}\]
then we may conclude that the eventual value of \(\sigma^{\eta,\Lambda_{n},\mathrm{Dob}}|_{\{v\}\times\mathbb{Z}}\) also equals \(\sigma^{\eta,\mathrm{Dob}}|_{\{v\}\times\mathbb{Z}}\) by applying (8.5) to the following two sequences of domains formed by interlacing subsequences of \((\Lambda_{n})\) and \((\Lambda(n))\): either taking \((\Lambda_{2n})\) in the even positions and \((\Lambda(2n+1))\) in the odd positions or taking \((\Lambda_{2n+1})\) in the odd positions and \((\Lambda(2n))\) in the even positions.
We proceed to prove (8.5), for some fixed sequence \((\Lambda_{n})\) of finite domains in \(\mathbb{Z}^{d}\), satisfying that \(\Lambda_{n}\supset\Lambda(n)\) for each \(n\). Let \(L_{0}\geq 0\) be an integer. Let \(E_{n}:=\{\sigma^{\eta,\Lambda_{n},\mathrm{Dob}}|_{\Lambda(L_{0})\times\mathbb{Z }}\not=\sigma^{\eta,\Lambda_{n+1},\mathrm{Dob}}|_{\Lambda(L_{0})\times\mathbb{Z }}\}\). By Corollary 8.2 (with \(L_{1}=n\)) we deduce that for every \(n>L_{0}\),
\[\mathbb{P}(E_{n})\leq C\exp\left(-c(\nu^{\|},\nu^{\perp},d)(n-L_{0})^{\frac{d- 2}{d}}\right) \tag{8.7}\]
with \(c(\nu^{\parallel},\nu^{\perp},d)>0\) depending only on \(\nu^{\parallel},\nu^{\perp}\) and the dimension \(d\), and with some absolute constant \(C>0\). Thus \(\sum_{n}\mathbb{P}(E_{n})<\infty\), as the exponent \(\frac{d-2}{d}\) is positive for \(d\geq 3\). We conclude, by the Borel-Cantelli lemma, that only a finite number of the events \(E_{n}\) hold, almost surely, implying that (8.5) holds for all \(v\in\Lambda(L_{0})\). This finishes the proof of (8.5) as \(L_{0}\) is arbitrary.
### Proof of Corollary 1.10
The probabilistic estimates (1.23) and (1.24) hold as a consequence of Theorem 1.7 (with \(\Lambda\) equal to some \(\Lambda(n)\)).
We proceed to define a \(G^{d}\)-invariant set \(\mathcal{C}_{0}\) of coupling fields satisfying that \(\mathbb{P}(\mathcal{C}_{0})=1\) and, on \(\mathcal{C}_{0}\), \(\sigma^{\eta,\mathrm{Dob}}\) is well defined and is \(G^{d}\) covariant.
For an automorphism \(h\in G^{d}\) and a set \(\Lambda\subset\mathbb{Z}^{d}\) we define \(h(\Lambda)\) as the set in \(\mathbb{Z}^{d}\) satisfying that \(h(\Lambda)\times\mathbb{Z}=h(\Lambda\times\mathbb{Z})\) (such a set \(h(\Lambda)\) exists by the definition (1.9) of \(G^{d}\)).
Define
\[\mathcal{C}_{\mathrm{unique}}:=\bigcap_{g,h\in G^{d}}\{\eta\colon\forall n, \,\text{there is a unique ground configuration in }\Omega^{h(\Lambda(n))\times\mathbb{Z},\rho^{\mathrm{Dob}}}\text{ for }g(\eta)\}. \tag{8.8}\]
As \(G^{d}\) is countable and \(g(\eta)\) has the same distribution as \(\eta\) (since \(g\in G^{d}\)) we have that \(\mathbb{P}(\mathcal{C}_{\mathrm{unique}})=1\) by Lemma 1.5. It is clear from the definition that \(\mathcal{C}_{\mathrm{unique}}\) is \(G^{d}\) invariant. As before, the unique ground configuration (on \(\mathcal{C}_{\mathrm{unique}}\)) in \(\Omega^{h(\Lambda(n))\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) for \(g(\eta)\) is denoted \(\sigma^{g(\eta),h(\Lambda(n)),\mathrm{Dob}}\).
Now set
\[\mathcal{C}_{0}:=\left\{\eta\in\mathcal{C}_{\mathrm{unique}}\colon\begin{subarray}{ c}\text{for each }g\in G^{d},\text{ there exists a configuration }\sigma^{g(\eta),Dob}:\mathbb{Z}^{D}\to\{-1,1\}\text{ such that}\\ \lim_{n\to\infty}\sigma^{g(\eta),h(\Lambda(n)),\mathrm{Dob}}_{x}=\sigma^{g( \eta),\mathrm{Dob}}_{x}\text{ for all }h\in G^{d}\text{ and }x\in\mathbb{Z}^{D}\end{subarray} \right\}. \tag{8.9}\]
Then \(\mathbb{P}(\mathcal{C}_{0})=1\) by Theorem 1.9 (applied to the sequence \((h(\Lambda(n_{0}+n)))_{n}\) for \(n_{0}=n_{0}(h)\) large enough so that \(h(\Lambda(n_{0}+n))\supset\Lambda(n)\) for each \(n\)). It is again clear from the definition that \(\mathcal{C}_{0}\) is \(G^{d}\) invariant.
We proceed to check that \(\sigma^{\eta,\mathrm{Dob}}\) is \(G^{d}\) covariant on \(\mathcal{C}_{0}\). Note that, for \(\Lambda\subset\mathbb{Z}^{d}\) and an automorphism \(a\) of \(\mathbb{Z}^{D}\), the set of ground configurations in \(\Omega^{a(\Lambda)\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) for \(a(\eta)\) equals \(a\) applied to the set of ground configurations in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) for \(\eta\). In particular, if \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) is well defined (i.e., uniqueness holds) then also \(\sigma^{a(\eta),a(\Lambda),\mathrm{Dob}}\) is well defined and we have
\[\sigma^{a(\eta),a(\Lambda),\mathrm{Dob}}_{x}=\sigma^{\eta,\Lambda,\mathrm{Dob }}_{ax}=a(\sigma^{\eta,\Lambda,\mathrm{Dob}})_{x},\quad x\in\mathbb{Z}^{D}. \tag{8.10}\]
Now let \(g\in G^{d}\) and \(\eta\in\mathcal{C}_{0}\). For each \(x\in\mathbb{Z}^{D}\),
\[g(\sigma^{\eta,\mathrm{Dob}})_{x}=\sigma^{\eta,\mathrm{Dob}}_{gx}=\lim_{n\to \infty}\sigma^{\eta,\Lambda(n),\mathrm{Dob}}_{gx}=\lim_{n\to\infty}\sigma^{g( \eta),g(\Lambda(n)),\mathrm{Dob}}_{x}=\sigma^{g(\eta),\mathrm{Dob}}_{x}, \tag{8.11}\]
where the second and last equality use the definition of \(\mathcal{C}_{0}\) and the third equality uses (8.10). Thus \(\sigma^{\eta,\mathrm{Dob}}\) is a \(G^{d}\)-covariant ground configuration defined on \(\mathcal{C}_{0}\).
It remains to define a \(G^{d}\) invariant set of coupling fields \(\mathcal{C}\subset\mathcal{C}_{0}\) with \(\mathbb{P}(\mathcal{C})=1\) such that \(\sigma^{\eta,\mathrm{Dob}}\) is a non-constant \(G^{d}\)-covariant ground configuration defined on \(\mathcal{C}\). Define
\[\mathcal{C}_{\mathrm{non-const}}:=\{\eta\colon\sigma^{\eta,\mathrm{Dob}}\text{ is not a constant configuration}\}, \tag{8.12}\]
\[\mathcal{C}_{\mathrm{ground}}:=\{\eta\colon\sigma^{\eta,\mathrm{Dob}}\text{ is a ground configuration for the coupling field }\eta\}. \tag{8.13}\]
Set
\[\mathcal{C}:=\mathcal{C}_{0}\cap\mathcal{C}_{\mathrm{non-const}}\cap\mathcal{C}_{ \mathrm{ground}}. \tag{8.14}\]
Then \(\mathbb{P}(\mathcal{C}_{\mathrm{non-const}})=1\) by the estimate (1.23), and \(\mathbb{P}(\mathcal{C}_{\mathrm{ground}})=1\) since \(\sigma^{\eta,\mathrm{Dob}}\) is the pointwise limit of \(\sigma^{\eta,\Lambda(n),\mathrm{Dob}}\) and each of these configurations is, almost surely, a ground configuration
in \(\Omega^{\Lambda(n)\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) by Lemma 1.5. The set \(\mathcal{C}\) is \(G^{d}\) invariant since the covariance identity (8.11) holds on \(\mathcal{C}_{0}\) (and it maps ground configurations for \(\eta\) to ground configurations for \(g(\eta)\)). This finishes the proof.
### Proof of Theorem 1.11
The estimate (1.26) is a direct consequence of Corollary 8.2, applied with \(\Lambda^{1}=\Lambda(n)\) and \(\Lambda^{2}=\Lambda\), by taking \(n\) to infinity and applying the convergence result of Theorem 1.9.
We proceed to establish (1.27). Let \(k:=\lfloor\frac{\|u-v\|_{\infty}}{2}\rfloor\), so that \(u+\Lambda(k)\) is disjoint from \(v+\Lambda(k)\). By (1.26) (after a translation by \(u\) and by \(v\)),
\[\mathbb{P}\left(\sigma^{\eta,\mathrm{Dob}}|_{(u+\Lambda(L))\times \mathbb{Z}}\not\equiv\sigma^{\eta,u+\Lambda(k),\mathrm{Dob}}|_{(u+\Lambda(L)) \times\mathbb{Z}}\right) \leq C\exp\left(-c(\nu^{\parallel},\nu^{\perp},d)\left(k-L\right) ^{\frac{d-2}{d}}\right),\] \[\mathbb{P}\left(\sigma^{\eta,\mathrm{Dob}}|_{(v+\Lambda(L)) \times\mathbb{Z}}\not\equiv\sigma^{\eta,v+\Lambda(k),\mathrm{Dob}}|_{(v+ \Lambda(L))\times\mathbb{Z}}\right) \leq C\exp\left(-c(\nu^{\parallel},\nu^{\perp},d)\left(k-L\right) ^{\frac{d-2}{d}}\right).\]
The estimate (1.27) follows (perhaps with a smaller \(c>0\)) as \(\sigma^{\eta,u+\Lambda(k),\mathrm{Dob}}\) and \(\sigma^{\eta,v+\Lambda(k),\mathrm{Dob}}\) are independent (as they are functions of disjoint subsets of the disorder).
Lastly, the fact that \((\eta,\sigma^{\eta,\mathrm{Dob}})\) is \(G^{d}\)-invariant is a rephrasing of the fact that \(\sigma^{\eta,\mathrm{Dob}}\) is \(G^{d}\)-covariant (proved in Corollary 1.10). The fact that it has a trivial \(\mathbb{Z}^{d}\)-tail sigma algebra is a consequence of (1.26), since for each finite \(\Lambda\subset\mathbb{Z}^{d}\), \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) is a function of \((\eta_{e})\) for the edges \(e\) above \(\Lambda\), and hence \(\sigma^{\eta,\Lambda,\mathrm{Dob}}\) is independent of the \(\mathbb{Z}^{d}\)-tail sigma algebra of \((\eta,\sigma^{\eta,\mathrm{Dob}})\). It is standard that an invariant and tail-trivial process is ergodic.
### Proof of Lemma 8.1
For a configuration \(\sigma:\mathbb{Z}^{d+1}\to\{-1,1\}\), define the graph \(G_{\sigma}\) and the function \(I_{\sigma}\colon\mathbb{Z}^{d}\to\mathbb{Z}\cup\{\text{``layered''}\}\) as in Section 7.1.
**Lemma 8.3**.: _Let \(\sigma:\mathbb{Z}^{d+1}\to\{-1,1\}\) be a configuration such that \(\sigma=\rho^{\mathrm{Dob}}\) at all but finitely many points of \(\mathbb{Z}^{d+1}\) and let \(u_{0}\in\mathbb{Z}^{d}\) such that there exists \(k_{0}\in\mathbb{Z}\) for which \(\sigma_{(u_{0},k_{0})}\neq\rho^{\mathrm{Dob}}_{(u_{0},k_{0})}\). Then, there exists a connected component \(A\) of \(G_{\sigma}\) such that \(u_{0}\in A\cup\mathrm{in}(A)\)._
Proof.: Let \(U_{0}\) be the connected component of the set \(\{u\in\mathbb{Z}^{d}\colon\sigma_{(u,k_{0})}=\sigma_{(u_{0},k_{0})}\}\) containing \(u_{0}\), let \(U:=U_{0}\cup\mathrm{in}(U_{0})\) and let \(A_{0}:=\partial^{\mathrm{in}}U\cup\partial^{\mathrm{out}}U\).
Recall the definition of the graph \(\mathcal{G}_{d}\) from Lemma 7.14. Obviously, \(U\) and \(\mathbb{Z}^{d}\setminus U\) are both connected. Hence, by Lemma 7.14, the set \(\{\{u,v\}\in E(\mathbb{Z}^{d})\colon(u,v)\in\partial U\}\) is connected in \(\mathcal{G}_{d}\) and it readily follows that the set \(A_{0}\) is connected.
Clearly, \(\partial^{\mathrm{in}}U\subseteq\partial^{\mathrm{in}}U_{0}\subseteq V_{\sigma}\) and \(\partial^{\mathrm{out}}U\subseteq\partial^{\mathrm{out}}U_{0}\subseteq V_{\sigma}\), and hence \(A_{0}\subseteq V_{\sigma}\). Let \(A\) be the connected component of \(G_{\sigma}\) such that \(A_{0}\subseteq A\).
The set \(\{u\in\mathbb{Z}^{d}\colon\sigma_{(u,k_{0})}=\sigma_{(u_{0},k_{0})}\}\subseteq\{ u\in\mathbb{Z}^{d}\colon\sigma_{(u,k_{0})}\neq\rho^{\mathrm{Dob}}_{(u,k_{0})}\}\) is finite, hence \(U_{0}\) and \(U\) are finite as well. Therefore, since \(u_{0}\in U_{0}\subseteq U\), it follows that every path from \(u_{0}\) to \(\infty\) intersects the set \(\partial^{\mathrm{in}}U\subseteq A_{0}\subseteq A\) (as well as the set \(\partial^{\mathrm{out}}U\subseteq A_{0}\subseteq A\)) and hence \(u_{0}\in A\cup\mathrm{in}(A)\).
**Lemma 8.4**.: _Under the assumptions of Lemma 8.1, there exists a path \(u_{1},\ldots,u_{n}\) of points in \(\mathbb{Z}^{d}\) starting in \(u_{1}\in\partial^{\mathrm{out}}\Lambda(L_{0})\) and ending in \(u_{n}\in\partial^{\mathrm{in}}\Lambda(L_{1})\) such that \(u_{j-1}\sim u_{j}\) for every \(1<j\leq n\), and for every \(1\leq j\leq n\) there is an integer \(k\) such that \(\sigma^{\eta,\Lambda^{1},\mathrm{Dob}}_{(u_{j},k)}\neq\sigma^{\eta,\Lambda^{2}, \mathrm{Dob}}_{(u_{j},k)}\)._
Proof.: Let
\[U:=\{u\in\Lambda(L_{1})\colon\forall k\in\mathbb{Z}\text{ it holds that }\sigma^{\eta,\Lambda^{1},\mathrm{Dob}}_{(u,k)}=\sigma^{\eta,\Lambda^{2},\mathrm{Dob}}_{(u,k)}\}.\]
By way of contradiction, assume that \(\Lambda(L_{0})\subseteq U\cup\operatorname{in}(U)\). Consider the configurations \(\tilde{\sigma}_{1},\tilde{\sigma}_{2}\colon\mathbb{Z}^{d+1}\to\{-1,1\}\) defined as follows. For every \(u\in\mathbb{Z}^{d}\) and \(k\in\mathbb{Z}\),
\[(\tilde{\sigma}_{1})_{(u,k)}=\begin{cases}\sigma^{\eta,\Lambda^{2},\operatorname {Dob}}_{(u,k)}&u\in\operatorname{in}(U),\\ \sigma^{\eta,\Lambda^{1},\operatorname{Dob}}_{(u,k)}&\text{otherwise},\end{cases} \qquad\qquad(\tilde{\sigma}_{2})_{(u,k)}=\begin{cases}\sigma^{\eta,\Lambda^{1},\operatorname{Dob}}_{(u,k)}&u\in\operatorname{in}(U),\\ \sigma^{\eta,\Lambda^{2},\operatorname{Dob}}_{(u,k)}&\text{otherwise}.\end{cases}\]
For any \(i\in\{1,2\}\), clearly \(\tilde{\sigma_{i}}\in\Omega^{\Lambda^{i},\operatorname{Dob}}\) and hence \(\mathcal{H}^{\eta,\Lambda^{i}}(\tilde{\sigma_{i}})\geq\mathcal{H}^{\eta, \Lambda^{i}}(\sigma^{\eta,\Lambda^{i},\operatorname{Dob}})\). Therefore, since it is easy to see that
\[\mathcal{H}^{\eta,\Lambda^{1}}(\tilde{\sigma_{1}})+\mathcal{H}^{\eta,\Lambda^ {2}}(\tilde{\sigma_{2}})=\mathcal{H}^{\eta,\Lambda^{1}}(\sigma^{\eta,\Lambda^{ 1},\operatorname{Dob}})+\mathcal{H}^{\eta,\Lambda^{2}}(\sigma^{\eta,\Lambda^{ 2},\operatorname{Dob}}),\]
it follows that \(\mathcal{H}^{\eta,\Lambda^{1}}(\tilde{\sigma_{1}})=\mathcal{H}^{\eta,\Lambda^{1}}(\sigma^{\eta,\Lambda^{1},\operatorname{Dob}})\) (as well as \(\mathcal{H}^{\eta,\Lambda^{2}}(\tilde{\sigma_{2}})=\mathcal{H}^{\eta,\Lambda^{2}}(\sigma^{\eta,\Lambda^{2},\operatorname{Dob}})\)), in contradiction to the uniqueness of \(\sigma^{\eta,\Lambda^{1},\operatorname{Dob}}\).
Hence, \(\Lambda(L_{0})\nsubseteq U\cup\operatorname{in}(U)\) and the claim follows.
We proceed to prove Lemma 8.1. For any \(i\in\{1,2\}\) denote, for brevity, \(\sigma_{i}:=\sigma^{\eta,\Lambda^{i},\operatorname{Dob}}\) and let \(\mathcal{C}_{i}\) be the collection of all connected components of \(G_{\sigma_{i}}\). For the path \(u_{1},\ldots,u_{n}\) guaranteed by Lemma 8.4, note that for every \(1\leq j\leq n\) there is \(k\in\mathbb{Z}\) such that \((\sigma_{1})_{(u_{j},k)}\neq\rho^{\operatorname{Dob}}_{(u_{j},k)}\) or \((\sigma_{2})_{(u_{j},k)}\neq\rho^{\operatorname{Dob}}_{(u_{j},k)}\) and hence, by Lemma 8.3, there is \(A\in\mathcal{C}_{1}\cup\mathcal{C}_{2}\) such that \(u_{j}\in A\cup\operatorname{in}(A)\). Let \(\mathcal{M}\subseteq\mathcal{C}_{1}\cup\mathcal{C}_{2}\) be a minimal collection (with respect to inclusion) such that for every \(1\leq j\leq n\) there is \(A\in\mathcal{M}\) such that \(u_{j}\in A\cup\operatorname{in}(A)\). For any \(i\in\{1,2\}\), let \(A^{(i)}:=\bigcup_{A\in\mathcal{M}\cap\mathcal{C}_{i}}A\) and
\[B^{(i)}_{\infty}:=\{u\in\mathbb{Z}^{d}:\text{there is a path from $u$ to $\infty$ that does not intersect $A^{(i)}$}\}.\]
Consider \(v\in\mathbb{Z}^{d}\setminus(A^{(i)}\cup B^{(i)}_{\infty})\) for \(i\in\{1,2\}\). Clearly, there is \(A\in\mathcal{M}\cap\mathcal{C}_{i}\) such that \(v\in\operatorname{in}(A)\), and this \(A\) is unique by the second and third parts of Lemma 7.2 and the minimality of \(\mathcal{M}\). Let \(\tilde{\tau}_{i}(v):=\tilde{I}_{\sigma_{i}}(v;A)\) (see Observation 7.1). We complete the definition of a "pre-shift" \(\tilde{\tau}_{i}\colon\mathbb{Z}^{d}\to\mathbb{Z}\cup\{\text{``layered''}\}\) by setting \(\tilde{\tau}_{i}(v)=I_{\sigma_{i}}(v)\) for \(v\in A^{(i)}\) and \(\tilde{\tau}_{i}(v)=0\) for \(v\in B^{(i)}_{\infty}\).
We turn the "pre-shift" \(\tilde{\tau}_{i}\) for \(i\in\{1,2\}\) into a shift \(\tau_{i}\colon\mathbb{Z}^{d}\to\mathbb{Z}\) exactly as done in Section 7.2. The arguments presented in the proof of Proposition 7.6 imply, by using (7.7), that for any \(i\in\{1,2\}\),
\[G^{\eta,\Lambda^{i}}(\tau_{i})\geq 2\alpha^{\|}\left(\mathcal{L}^{\|}_{A^{(i)}}( \sigma_{i})-|A^{(i)}|\right)+2\alpha^{\perp}\mathcal{L}^{\perp}_{A^{(i)}}( \sigma_{i}) \tag{8.15}\]
and consequently,
\[G^{\eta,\Lambda^{i}}(\tau_{i})\geq 2\alpha^{\perp}\operatorname{TV}(\tau_{i}). \tag{8.16}\]
**Lemma 8.5**.: _For any \(i\in\{1,2\}\),_
\[R(\tau_{i})<\frac{24}{\min\{\alpha^{\|},\alpha^{\perp}\}d}\max\left\{G^{\eta, \Lambda^{1}}(\tau_{1}),G^{\eta,\Lambda^{2}}(\tau_{2})\right\}.\]
Proof.: We first show that the set \(A^{(1)}\cup A^{(2)}=\bigcup_{A\in\mathcal{M}}A\) is connected. By way of contradiction, assume that there is a set \(\emptyset\neq\Gamma\subsetneq\bigcup_{A\in\mathcal{M}}A\) such that \(\operatorname{dist}(\Gamma,\left(\bigcup_{A\in\mathcal{M}}A\right)\setminus \Gamma)>1\). Since every \(A\in\mathcal{M}\) is connected, there is necessarily \(\emptyset\neq\mathcal{M}_{0}\subsetneq\mathcal{M}\) such that \(\Gamma=\bigcup_{A\in\mathcal{M}_{0}}A\) and \(\left(\bigcup_{A\in\mathcal{M}}A\right)\setminus\Gamma=\bigcup_{A\in \mathcal{M}\setminus\mathcal{M}_{0}}A\). Note that it follows that \(\operatorname{dist}(A,A^{\prime})>1\) for every \(A\in\mathcal{M}_{0}\) and \(A^{\prime}\in\mathcal{M}\setminus\mathcal{M}_{0}\). The minimality of \(\mathcal{M}\) implies that neither \(\{u_{j}\}_{j=1}^{n}\subseteq\bigcup_{A\in\mathcal{M}_{0}}(A\cup\operatorname{ in}(A))\) nor \(\{u_{j}\}_{j=1}^{n}\subseteq\bigcup_{A\in\mathcal{M}\setminus\mathcal{M}_{0}}(A\cup \operatorname{in}(A))\). Since \(\{u_{j}\}_{j=1}^{n}\subseteq\bigcup_{A\in\mathcal{M}}(A\cup\operatorname{in}( A))\), it follows that there are \(1\leq j,j^{\prime}\leq n\), \(A\in\mathcal{M}_{0}\) and \(A^{\prime}\in\mathcal{M}\setminus\mathcal{M}_{0}\) such that \(u_{j}\in(A\cup\operatorname{in}(A))\), \(u_{j^{\prime}}\in\operatorname{in}(A^{\prime})\cup A^{\prime}\) and \(|j-j^{\prime}|=1\), therefore \(\|u_{j}-u_{j^{\prime}}\|_{1}=1\) and hence \(\operatorname{dist}(A\cup\operatorname{in}(A),A^{\prime}\cup\operatorname{in}(A^{ \prime}))\leq 1\). Lemma
7.2 implies that either \(A\cup\operatorname{in}(A)\subsetneq A^{\prime}\cup\operatorname{in}(A^{\prime})\) or \(A^{\prime}\cup\operatorname{in}(A^{\prime})\subsetneq A\cup\operatorname{in}(A)\), contradicting the minimality of \(\mathcal{M}\).
Lemma 7.13 implies that for any \(i\in\{1,2\}\) there is a set \(S_{i}\subseteq A^{(i)}\) such that \(A^{(i)}\subseteq\bigcup_{a\in S_{i}}\mathcal{B}_{4}(a)\) and, by using (8.15),
\[|S_{i}| <\sum_{A\in\mathcal{M}\cap\mathcal{C}_{i}}\frac{1}{d}\left( \mathcal{L}_{A}^{\|}(\sigma_{i})-|A|+\mathcal{L}_{A}^{\perp}(\sigma_{i})\right)\] \[=\frac{1}{d}\left(\mathcal{L}_{A^{(i)}}^{\|}(\sigma_{i})-|A^{(i) }|+\mathcal{L}_{A^{(i)}}^{\perp}(\sigma_{i})\right)<\frac{1}{2\min\{\alpha^{ \|},\alpha^{\perp}\}d}G^{\eta,\Lambda^{i}}(\tau_{i}).\]
For any \(i\in\{1,2\}\), the definition of \(\tau_{i}\) implies, as in Sections 7.1 and 7.2, that every level component of \(\tau_{i}\) intersects \(A^{(i)}\), and therefore intersects \(A^{(1)}\cup A^{(2)}\).
Hence, by (6.18), since \(A^{(1)}\cup A^{(2)}\subseteq\bigcup_{a\in S_{1}\cup S_{2}}\mathcal{B}_{4}(a)\),
\[R(\tau_{i})<20|S_{1}\cup S_{2}|+8|\mathcal{LC}(\tau_{i})|<\frac{10}{\min\{ \alpha^{\|},\alpha^{\perp}\}d}\left(G^{\eta,\Lambda^{1}}(\tau_{1})+G^{\eta, \Lambda^{2}}(\tau_{2})\right)+8|\mathcal{LC}(\tau_{i})|\]
and the result follows since
\[|\mathcal{LC}(\tau_{i})|\leq\frac{1}{d}\operatorname{TV}(\tau_{i})\leq\frac{1}{2\alpha^{\perp}d}G^{\eta,\Lambda^{i}}(\tau_{i})\]
by Observation 6.20 and (8.16).
**Lemma 8.6**.: _It holds that_
\[\max\left\{G^{\eta,\Lambda^{1}}(\tau_{1}),G^{\eta,\Lambda^{2}}(\tau_{2}) \right\}\geq\frac{\min\{\alpha^{\|},\,\alpha^{\perp}\}}{4}(L_{1}-L_{0})^{1- \frac{1}{d}}.\]
Proof.: For every finite set \(A\subset\mathbb{Z}^{d}\), by (6.10),
\[|A|\geq\frac{1}{2d}|\partial(\operatorname{in}(A))|\geq|\operatorname{in}(A)| ^{1-\frac{1}{d}}\]
and therefore
\[2|A|\geq|\operatorname{in}(A)|^{1-\frac{1}{d}}+|A|\geq|\operatorname{in}(A)| ^{1-\frac{1}{d}}+|A|^{1-\frac{1}{d}}\geq(|\operatorname{in}(A)|+|A|)^{1-\frac {1}{d}}=|\operatorname{in}(A)\cup A|^{1-\frac{1}{d}}.\]
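The inequality \(|A|\geq\frac{1}{2d}|\partial(\operatorname{in}(A))|\) may also be seen directly: the endpoint outside \(\operatorname{in}(A)\) of every edge of \(\partial(\operatorname{in}(A))\) necessarily lies in \(A\) (recall that \(\operatorname{in}(A)\) is disjoint from \(A\) and that every path from a point of \(\operatorname{in}(A)\) to \(\infty\) intersects \(A\)), while every vertex of \(A\) is incident to exactly \(2d\) edges. The inequality \(|\operatorname{in}(A)|^{1-\frac{1}{d}}+|A|^{1-\frac{1}{d}}\geq(|\operatorname{in}(A)|+|A|)^{1-\frac{1}{d}}\) is an instance of the subadditivity of \(t\mapsto t^{1-\frac{1}{d}}\) on \([0,\infty)\): for every \(x,y\geq 0\) (the case \(x=y=0\) being trivial),
\[x^{1-\frac{1}{d}}+y^{1-\frac{1}{d}}\geq x(x+y)^{-\frac{1}{d}}+y(x+y)^{-\frac{1}{d}}=(x+y)^{1-\frac{1}{d}}.\]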
Hence, for any \(i\in\{1,2\}\), by (8.15) and Observation 7.12,
\[\frac{2}{\min\{\alpha^{\|},\,\alpha^{\perp}\}}G^{\eta,\Lambda^{i} }(\tau_{i}) \geq 4\left(\mathcal{L}_{A^{(i)}}^{\|}(\sigma_{i})-|A^{(i)}|+ \mathcal{L}_{A^{(i)}}^{\perp}(\sigma_{i})\right)\] \[=\sum_{A\in\mathcal{M}\cap\mathcal{C}_{i}}4\left(\mathcal{L}_{A}^ {\|}(\sigma_{i})-|A|+\mathcal{L}_{A}^{\perp}(\sigma_{i})\right)\] \[\geq\sum_{A\in\mathcal{M}\cap\mathcal{C}_{i}}2|A|\geq\sum_{A\in \mathcal{M}\cap\mathcal{C}_{i}}|\operatorname{in}(A)\cup A|^{1-\frac{1}{d}}.\]
Therefore,
\[\frac{4}{\min\{\alpha^{\|},\,\alpha^{\perp}\}}\max\left\{G^{\eta,\Lambda^{1}}( \tau_{1}),G^{\eta,\Lambda^{2}}(\tau_{2})\right\}\geq\frac{2}{\min\{\alpha^{\|},\,\alpha^{\perp}\}}\left(G^{\eta,\Lambda^{1}}(\tau_{1})+G^{\eta,\Lambda^{2}}( \tau_{2})\right)\]
\[\geq\sum_{A\in\mathcal{M}}|\operatorname{in}(A)\cup A|^{1-\frac{1}{d}}\geq \left(\sum_{A\in\mathcal{M}}|\operatorname{in}(A)\cup A|\right)^{1-\frac{1}{d}}\]
and the result follows since \(\{u_{1},\ldots,u_{n}\}\subseteq\bigcup_{A\in\mathcal{M}}\left(A\cup\mathrm{in}(A)\right)\), and hence
\[\sum_{A\in\mathcal{M}}\left|\mathrm{in}(A)\cup A\right|\geq\Big{|}\bigcup_{A\in \mathcal{M}}\left(A\cup\mathrm{in}(A)\right)\Big{|}\geq\left|\{u_{1},\ldots,u_ {n}\}\right|\geq L_{1}-L_{0}.\qed\]
To conclude the proof of Lemma 8.1, take \(i\in\{1,2\}\) for which \(\max\{G^{\eta,\Lambda^{1}}(\tau_{1}),G^{\eta,\Lambda^{2}}(\tau_{2})\}=G^{\eta,\Lambda^{i}}(\tau_{i})\). Then \(\tau_{i}\) is admissible by (8.16) and Lemma 8.5, and
\[G^{\eta,\Lambda^{i}}(\tau_{i})\geq\frac{\min\{\alpha^{\parallel},\,\alpha^{ \perp}\}}{4}(L_{1}-L_{0})^{1-\frac{1}{d}}\]
by Lemma 8.6.
## 9. Discussion and open problems
### Localization of Dobrushin interface, roughening transitions and non-constant ground configurations
Question 1.6 asks whether the interface formed under Dobrushin boundary conditions remains localized uniformly in the volume. This question and its variants have received significant attention in the physics literature. As an approximation to the true interface (reminiscent of the disordered SOS model (1.28) but involving further approximations), it is suggested to study the ground configurations with zero boundary values of a _real-valued_ height function \(\varphi:\mathbb{Z}^{d}\to\mathbb{R}\) whose energy is given by the formal "disordered Gaussian free field (GFF)" Hamiltonian
\[H^{\mathrm{GFF},\zeta}(\varphi):=\sum_{\{u,v\}\in E(\mathbb{Z}^{d})}\left( \varphi_{u}-\varphi_{v}\right)^{2}+\sum_{v\in\mathbb{Z}^{d}}\zeta_{v,\varphi_ {v}} \tag{9.1}\]
where \(\zeta:\mathbb{Z}^{d}\times\mathbb{R}\to\mathbb{R}\) is an environment describing the quenched disorder, which is chosen with \((\zeta_{v,\cdot})_{v}\) independent and \(t\mapsto\zeta_{v,t}\) having short-range correlations for each \(v\) (and possibly also light tails). It is predicted that this height function delocalizes with power-law fluctuations in dimensions \(d=1,2,3\), delocalizes with sub-power-law fluctuations in dimension \(d=4\) and remains localized in dimensions \(d\geq 5\). These predictions are put on a rigorous footing in the forthcoming work [4]. More precisely, it is predicted that on the cube \(\{-L,\ldots,L\}^{d}\), the height function fluctuates to height \(L^{2/3}\) in dimension \(d=1\)[11, 12, 13], to height \(L^{0.41\ldots}\) in dimension \(d=2\), to height \(L^{0.22\ldots}\) in dimension \(d=3\)[13, 14] and to height \((\log L)^{0.41\ldots}\) in dimension \(d=4\)[13].
It is predicted, however, that the model (9.1) may display different behavior when restricted to _integer-valued_ height functions. Specifically, while it is believed that the ground configurations with zero boundary values are still delocalized in dimensions \(d=1,2\), with the same power laws as the real-valued versions, and still localized in dimensions \(d\geq 5\), a change of behavior occurs for \(d=3,4\)[11, 13, 12, 14, 15]. In dimension \(d=3\) a _roughening transition_ takes place in the disorder concentration: the height function is localized for sufficiently concentrated disorder and delocalized otherwise, having logarithmic fluctuations at the critical disorder concentration and power-law fluctuations, of the same order as the real-valued version, for less concentrated disorder [13]. In contrast, it is indicated that no such transition takes place in dimension \(d=4\), where the height function is _localized_ at all disorder concentrations [13]. These predictions are also believed to govern the fluctuations of the disordered SOS model (1.28), and the Dobrushin interface of the disordered ferromagnet on \(\mathbb{Z}^{D}\), with our standard substitution \(D=d+1\). Our work justifies the fact that the Dobrushin
interface of the disordered ferromagnet is localized in dimensions \(d\geq 3\) for sufficiently concentrated disorder (the analogous fact for the disordered SOS model is established in [1]). It would be very interesting to extend it to establish the predicted roughening transition in dimension \(d=3\) and the predicted localization for all disorder concentrations in dimensions \(d\geq 4\). It would also be interesting to prove delocalization in dimension \(d=2\) (and especially to prove power-law delocalization). We expect the methods of Aizenman-Wehr [1], or their quantitative extension in [10], to be relevant in dimension \(d=2\), as was the case for the disordered SOS model [1]. Power-law delocalization in dimension \(d=1\) is proved by Licea-Newman-Piza [13] (see also [14, 15]).
Related to the above, we mention that a version of the model (9.1) in which the disorder is _linear_, i.e., \(\zeta_{v,\varphi_{v}}=\bar{\zeta}_{v}\varphi_{v}\) with the \((\bar{\zeta}_{v})\) independently sampled from the Gaussian distribution \(N(0,\lambda^{2})\), is studied in [10]. The (real-valued) model is exactly solvable and behaves similarly to (9.1) in the sense that it also exhibits power-law delocalization in dimensions \(d=1,2,3\), square-root logarithmic delocalization when \(d=4\) and is localized when \(d\geq 5\). It is conjectured in [10] that the _integer-valued_ version of this model should also exhibit a roughening transition: the model should transition from a localized to a delocalized regime as \(\lambda\) increases in dimension \(d=3\) (whether this also occurs for \(d=4\) is unclear). The localization part of the transition is established in [10].
Lastly, Question 1.1 asks whether the disordered ferromagnet admits non-constant ground configurations. This is certainly the case whenever the Dobrushin interface is localized, as in this work. However, it may still be the case even when the Dobrushin interface is delocalized, as it may be that other boundary conditions on \(\{-L,\ldots,L\}^{D}\) (possibly depending on the disorder \(\eta\)) may lead to an interface passing near the origin. The fact that the predicted roughening exponent is relatively small already for \(d=2\) (the prediction there is \(\approx 0.41\)), together with the fact that there are more possibilities for the boundary conditions as \(d\) grows leads us to believe that non-constant ground configurations will indeed exist for all \(d\geq 2\) (see [1, Section 4.5.1] for a related heuristic of Newman for the \(d=1\) case).
### Positive temperature and the random-field Ising model
Our main result (Theorem 1.2) states that the disordered ferromagnet admits non-constant ground configurations in dimension \(D\geq 4\) when the coupling constants are sampled independently from a sufficiently concentrated distribution. This is achieved by proving that the interface formed under Dobrushin boundary conditions remains localized uniformly in the volume (Theorem 1.7). It is natural to ask for an extension of these results to the disordered ferromagnet at low, positive temperatures (instead of asking about non-constant ground configurations, one may ask whether there exist Gibbs states other than mixtures of the infinite-volume limits under constant boundary conditions). Such an extension is established for the disordered SOS model (1.28) by Bovier-Kulske [1] and we believe that it holds also for the disordered ferromagnet. We also expect our methods to be relevant to the proof of such a result, though one obstacle which one will need to overcome is the existence of _bubbles_ in the Ising configuration: finite islands of one sign completely surrounded by spins of the other sign. Such bubbles occur at any non-zero temperature. Additional tools from the work of Dobrushin [13], such as the use of cluster expansions and the notion of "groups of walls", may be of help here too.
Another model of interest is the _random-field_ Ising model (1.29). It is natural to wonder whether our results (and their possible low-temperature extensions) hold also for the Dobrushin interface in the random-field Ising model. On the level of a disordered SOS approximation to the Dobrushin interface, this is stated to be true, in dimensions \(D\geq 4\) and for
sufficiently weak disorder, by Bovier-Kulske [10], following an earlier analysis of a hierarchical version [10]. We again believe that our methods will be relevant to the random-field Ising model case, but point out that an extra complication arising in this case compared to the disordered ferromagnet is that bubbles appear already at zero temperature.
### The set of non-constant covariant ground configurations
Theorem 1.3 shows the existence of a non-constant \(G^{D-1}\)-covariant ground configuration. Additional configurations with these properties may be obtained from a given one by the following recipe: Suppose \(\eta\mapsto\sigma(\eta)\) is a non-constant \(G^{D-1}\)-covariant ground configuration. For each integer \(k\), define a configuration \(\eta\mapsto\sigma^{k}(\eta)\) by the relation
\[T^{k}(\sigma^{k}(\eta)):=\sigma(T^{k}(\eta)) \tag{9.2}\]
where \(T^{k}\) is the automorphism of \(\mathbb{Z}^{D}\) given by \(T^{k}(x):=x+ke_{D}\) (with \(e_{D}\) being the last coordinate vector) and the action of automorphisms on coupling fields and configurations is given by (1.8). It is straightforward that \(\eta\mapsto\sigma^{k}(\eta)\) is then also a non-constant \(G^{D-1}\)-covariant ground configuration.
Suppose the coupling constants \((\eta_{\{x,y\}})\) are sampled independently from a disorder distribution which is non-atomic and has finite mean. We claim that the mappings \((\eta\mapsto\sigma^{k}(\eta))_{k}\) are all distinct (even modulo zero probability events). Indeed, to obtain a contradiction, suppose that \(\sigma^{k+m}=\sigma^{k}\) almost surely, for some integers \(k\) and \(m\neq 0\). Then also \(\sigma^{m}=\sigma\) almost surely. But this implies that \(\eta\mapsto\sigma(\eta)\) is a \(\mathbb{Z}^{D,m}\)-covariant ground configuration, where \(\mathbb{Z}^{D,m}\) is the group of translations by vectors of the form \(x=(x_{1},\ldots,x_{D})\in\mathbb{Z}^{D}\) with \(x_{D}\) divisible by \(m\). Recall that Wehr-Wasielak [21] prove that there are no non-constant \(\mathbb{Z}^{D}\)-covariant ground configurations (under the above assumptions on \(\eta\)). A minor extension of their proof also rules out non-constant \(\mathbb{Z}^{D,m}\)-covariant ground configurations, contradicting the fact that \(\sigma\) is non-constant.
It is natural to ask whether there is a _unique_ family \((\eta\mapsto\sigma^{k}(\eta))_{k\in\mathbb{Z}}\) of non-constant \(G^{D-1}\)-covariant ground configurations. We believe that the answer is positive under the assumptions of Theorem 1.2.
We also pose the following, somewhat related, problem. Theorem 1.9 proves that for every sequence \((\Lambda_{n})\) of finite subsets of \(\mathbb{Z}^{d}\), with \(\Lambda_{n}\supset\{-n,\ldots,n\}^{d}\) for each \(n\), it holds almost surely that \(\sigma^{\eta,\Lambda_{n},\operatorname{Dob}}\to\sigma^{\eta,\operatorname{ Dob}}\) pointwise. Are there exceptional sequences? That is, is there a _random_ sequence \((\Lambda_{n})\) of subsets of \(\mathbb{Z}^{d}\), converging to \(\mathbb{Z}^{d}\), for which, with positive probability, the pointwise convergence \(\sigma^{\eta,\Lambda_{n},\operatorname{Dob}}\to\sigma^{\eta,\operatorname{ Dob}}\) fails? We expect that the answer is negative under the assumptions of Theorem 1.7.
### Tilted interfaces
Our work investigates the interface formed in the disordered ferromagnet's ground configuration when imposing the Dobrushin boundary conditions \(\rho^{\operatorname{Dob}}_{(v,k)}=\operatorname{sign}(k-1/2)\). It is also of interest to study the interfaces formed under other boundary conditions. For instance, one may consider "tilted Dobrushin-type boundary conditions" of the form \(\rho^{\operatorname{Dob},y}_{x}:=\operatorname{sign}(x\cdot y-1/2)\), corresponding to a flat interface orthogonal to the vector \(y\in\mathbb{Z}^{D}\) (so that \(\rho^{\operatorname{Dob}}\) corresponds to \(y=(0,\ldots,0,1)\)). In analogy with predictions for the pure Ising model, we expect that whenever \(y\) is not a multiple of one of \(e_{1},\ldots,e_{D}\) (the standard basis vectors) then the fluctuations of these tilted interfaces are of the same order as those of the real-valued disordered GFF model (9.1) discussed above, except perhaps in the critical dimension \(d=4\). In particular, they are delocalized in dimensions \(d\leq 3\) and localized in dimensions \(d\geq 5\) (a discussion of some simulation results is in [1, Section 7.2.3]).
### Higher codimension surfaces
The Dobrushin interface studied in this paper may be thought of as a \(d\)-dimensional surface embedded in a \(D=d+1\) dimensional space. It is also of interest to consider surfaces of higher codimension, i.e., \(d\)-dimensional surfaces embedded in a \(D=d+n\) dimensional space. Generalizing an approach of Borgs [1], let us describe how such surfaces arise in the context of (generalized) Ising lattice gauge theories.
Let \(d,n\geq 1\) and set \(D:=d+n\). An \(m\)-face in the lattice \(\mathbb{Z}^{D}\) is a subset of the form \(x+\{0,1\}^{I}\times\{0\}^{[D]\setminus I}\) for some \(x\in\mathbb{Z}^{D}\) and \(I\subset[D]\) with \(|I|=m\) (a 0-face is a vertex, a 1-face is an edge, etc.). Denote the set of \(m\)-faces of \(\mathbb{Z}^{D}\) by \(F_{m}\). We consider Ising lattice gauge theories on \((n-1)\)-faces, defined as follows. A configuration is a function \(\sigma:F_{n-1}\to\{-1,1\}\). We also define
\[\sigma(f_{n}):=\prod_{\begin{subarray}{c}f_{n-1}\in F_{n-1}\\ f_{n-1}\subset f_{n}\end{subarray}}\sigma_{f_{n-1}} \tag{9.3}\]
for an \(n\)-face \(f_{n}\). The formal Hamiltonian is
\[H^{\text{gauge}}(\sigma):=-\sum_{f_{n}\in F_{n}}\sigma(f_{n}). \tag{9.4}\]
The _defects_ of \(\sigma\) are the \(n\)-faces \(f_{n}\) satisfying \(\sigma(f_{n})=-1\). We think of the defect set as being dual to a \(d\)-dimensional surface (e.g., for \(n=1\), the case of the standard Ising model, the defects are dual to the domain walls separating \(-1\) and \(1\)). We wish to put the model under specific, Dobrushin-like, boundary conditions which will force such a surface through the volume. To this end, write vertices of \(\mathbb{Z}^{D}\) as \(x=(v,k)\) with \(v=(v_{1},\dots,v_{d})\in\mathbb{Z}^{d}\) and \(k=(k_{1},\dots,k_{n})\in\mathbb{Z}^{n}\). The Dobrushin-like boundary conditions are then
\[\rho^{\text{surface}}_{f_{n-1}}:=\begin{cases}-1&f_{n-1}=(v,k)+C\text{ with }v\in\mathbb{Z}^{d},k=(k_{1},0,\dots,0)\text{ having }k_{1}\leq 0\text{ and }C=\{0\}^{[d+1]}\times\{0,1\}^{[d+n]\setminus[d+1]},\\ 1&\text{otherwise},\end{cases} \tag{9.5}\]
for each \(f_{n-1}\in F_{n-1}\). The important fact about this choice is that its defect set is exactly the set of \(n\)-faces \(((v,0)+\{0\}^{[d]}\times\{0,1\}^{[d+n]\setminus[d]})_{v\in\mathbb{Z}^{d}}\) (other boundary conditions inducing the same defect set are also suitable). We note that \(\rho^{\text{surface}}=\rho^{\text{Dob}}\) when \(n=1\). The problem of localization is then to decide whether the surface dual to the defects of \(\sigma\) stays localized in the infinite-volume limit with \(\rho^{\text{surface}}\) boundary conditions (i.e., to show that there are defects in the neighborhood of the origin with high probability, uniformly in the volume). Borgs [1] considered the case \(d=2,n=2\) and proved that localization occurs at low temperature (this is the so-called weak coupling regime). His results apply more generally when the (gauge) group \(\{-1,1\}\) is replaced by a finite Abelian group.
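To make the face bookkeeping behind (9.3) concrete, the following small sketch (our own illustration, not part of the argument; all names and the particular configuration are chosen for exposition only) encodes an \(m\)-face by a base vertex \(x\) and an index set \(I\) with \(|I|=m\), lists its \((m-1)\)-subfaces, and evaluates \(\sigma(f_{n})\) as the corresponding product; for \(n=1\) this reduces to the familiar product of the two endpoint spins of an edge.

```python
# Illustrative bookkeeping for Eq. (9.3): an m-face x + {0,1}^I x {0}^([D]\I)
# is encoded as (x, I) with x a vertex (tuple) and I a frozenset of directions.
from itertools import product

def subfaces(face):
    """Return the (m-1)-dimensional subfaces of the m-face (x, I)."""
    x, I = face
    out = []
    for i in I:
        J = frozenset(I - {i})
        out.append((x, J))  # i-th coordinate fixed to 0
        out.append((tuple(c + (1 if k == i else 0) for k, c in enumerate(x)), J))  # fixed to 1
    return out

def sigma_of_face(face, sigma):
    """sigma(f_n): the product of the configuration over the (n-1)-subfaces of f_n."""
    val = 1
    for f in subfaces(face):
        val *= sigma[f]
    return val

# Example with D = 2 and n = 1 (the standard Ising model): (n-1)-faces are the
# vertices, and sigma(edge) is the product of the two endpoint spins, so the
# defects are exactly the edges crossing the +/- interface.
spins = {((vx, vy), frozenset()): (1 if vy >= 1 else -1)
         for vx, vy in product(range(3), range(-1, 3))}
vertical_edge = ((0, 0), frozenset({1}))    # the edge between (0,0) and (0,1)
print(sigma_of_face(vertical_edge, spins))  # -> -1, i.e. a defect dual to the interface
```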
The result of Borgs [1] is analogous to the result of Dobrushin [2] in that he establishes the existence of a non-translation-invariant Gibbs measure for the non-disordered model. In our context, it is natural to consider a disordered version, with formal Hamiltonian
\[H^{\text{gauge},\eta}(\sigma):=-\sum_{f_{n}\in F_{n}}\eta_{f_{n}}\sigma(f_{n}) \tag{9.6}\]
with the \((\eta_{f_{n}})_{f_{n}\in F_{n}}\) sampled independently from a disorder distribution supported in \([0,\infty)\) (possibly satisfying additional assumptions). We highlight two special cases: (i) the case \(n=1\) is the disordered ferromagnet studied in this work, (ii) for \(d=1\), the defect surface of the finite-volume ground configuration with \(\rho^{\text{surface}}\) boundary conditions is dual to a geodesic in first-passage percolation (in finite volume) in \(\mathbb{Z}^{1+n}\).
In analogy with Question 1.1 and Question 1.6 we may ask whether the disordered model admits ground configurations with non-empty defect set and whether the ground configuration with \(\rho^{\text{surface}}\) boundary conditions induces a localized surface. Regarding the second question, we believe that the localization/delocalization behavior is mostly determined by \(d\) (though the quantitative delocalization magnitude depends also on \(n\)). In particular, we expect that when \(d\geq 3\) an analogue of our results holds for each \(n\geq 1\). Regarding the first question, it seems natural that the existence of ground configurations with non-empty defect set becomes easier as \(n\) increases. We thus expect such configurations to exist (under mild assumptions on the disorder distribution) for all \(d\geq 2\) and \(n\geq 1\). For \(d=1\), the question coincides with the open problem of whether infinite bigeodesics exist in first-passage percolation on \(\mathbb{Z}^{1+n}\), where a negative answer is expected for \(n=1\) but the situation for larger \(n\) is unclear.
### More general disorder distributions
Our main results are proved for non-atomic disorder distributions (though in the anisotropic setup atoms are allowed for \(\nu^{\perp}\)) with a strictly positive lower bound on their support, which are sufficiently concentrated in the sense that their "width", as defined in (1.5), is sufficiently small.
Our notion of width allows either for compactly-supported distributions or distributions which are Lipschitz images of the standard Gaussian distribution. In fact, our only use of the concentration properties of the distribution is through the concentration inequality of Corollary 2.3. Thus, our proof may be used for more general classes of distributions satisfying a similar concentration inequality; see [10, 11].
In addition, our proof of the localization of the Dobrushin interface (Theorem 1.7) applies also when the disorder variables \((\eta_{e})\) are sampled independently from different disorder distributions, as long as the same disorder distribution is used for edges in the same "column" \((e_{1},e_{2}\) are in the same column if \(e_{1}=e_{2}+(0,\ldots,0,k)\) for some \(k\in\mathbb{Z}\)), and our assumptions (1.11) and (1.14) (for a sufficiently small \(c_{0}>0\)) are satisfied for each pair of parallel and perpendicular distributions.
The assumption that the disorder distribution is non-atomic is imposed only to ensure the uniqueness of _finite-volume_ ground configurations. We expect suitable versions of Theorem 1.7 and Theorem 4.3 to hold also in its absence, with minor adjustments to the proofs.
We also expect the results of this paper to continue to hold for some classes of disorder distributions \(\nu\) with \(\min(\text{supp}(\nu))=0\). However, the assumption that \(\min(\text{supp}(\nu))>0\) is used more heavily in our proofs.
### Acknowledgements
The research of M.B. and R.P. is supported by the Israel Science Foundation grant 1971/19 and by the European Research Council Consolidator grant 101002733 (Transitions). Part of this work was completed while R.P. was a Cynthia and Robert Hillas Founders' Circle Member of the Institute for Advanced Study and a visiting fellow at the Mathematics Department of Princeton University. R.P. is grateful for their support.
We are grateful to Michael Aizenman, Sky Cao, Daniel S. Fisher, Reza Gheissari and David Huse for illuminating discussions. We thank Daniel Hadas and Sasha Sodin for helpful comments.
## Appendix A
For every \(A\subseteq\mathbb{Z}^{d}\), let \(\tilde{\partial}A:=\{\{u,v\}\colon(u,v)\in\partial A\}\). Recall the definition of the graph \(\mathcal{G}_{d}\) from Lemma 7.14. Following [16, 1], a set \(E\subseteq E(\mathbb{Z}^{d})\) is called a _contour_ if it is connected in \(\mathcal{G}_{d}\) and there is a finite set \(A\subseteq\mathbb{Z}^{d}\) such that \(E=\tilde{\partial}A\). A contour
is _primitive_ if it is not a disjoint union of two non-empty contours. Let \(\tilde{\mathbb{B}}_{d}:=\{A\subset\mathbb{Z}^{d}\colon A\text{ is finite and }\tilde{\partial}A\text{ is a primitive contour}\}\). Recall that the family of finite \(A\subset\mathbb{Z}^{d}\) such that both \(A\) and \(\mathbb{Z}^{d}\setminus A\) are connected in \(\mathbb{Z}^{d}\) was denoted \(\mathbb{B}_{d}\) in the proof of Proposition 6.3.
The claim of [1, Theorem 6] is that \(|\{A\in\tilde{\mathbb{B}}_{d}\colon 0\in A,\,|\partial A|=b\}|\leq(8d)^{2b/d}\) for every (even) integer \(b\geq 2d\). This is equivalent to (6.27) in light of the following proposition.
**Proposition A.1**.: \(\tilde{\mathbb{B}}_{d}=\mathbb{B}_{d}\)_._
Proof.: First note that if \(A\in\mathbb{B}_{d}\) then \(\tilde{\partial}A\) is connected in \(\mathcal{G}_{d}\), by Lemma 7.14, and hence \(\tilde{\partial}A\) is a contour.
Let \(A\in\tilde{\mathbb{B}}_{d}\). Since \(A\) is finite, the set \(\mathbb{Z}^{d}\setminus A\) obviously has a unique infinite connected component. Let \(\mathcal{A}\) be the collection of connected components of \(A\) and finite connected components of \(\mathbb{Z}^{d}\setminus A\). Define a partial order \(\preceq\) on the set \(\mathcal{A}\) as follows: \(C_{1}\preceq C_{2}\) if every path from \(C_{1}\) to \(\infty\) necessarily intersects \(C_{2}\). For every \(C\in\mathcal{A}\), let \(\bar{C}:=\cup_{S\preceq C}S\). It is easy to see that \(\tilde{\partial}A\) is the disjoint union of the sets \(\{\tilde{\partial}\bar{C}\}_{C\in\mathcal{A}}\) and that for every \(C\in\mathcal{A}\) it holds that \(\bar{C}\in\mathbb{B}_{d}\) and hence \(\tilde{\partial}\bar{C}\) is a contour. Since \(\tilde{\partial}A\) is a primitive contour, it follows that \(|\mathcal{A}|=1\) and hence, \(A\in\mathbb{B}_{d}\), as \(A\) is finite. Thus \(\tilde{\mathbb{B}}_{d}\subseteq\mathbb{B}_{d}\).
By way of contradiction, assume there is \(A\in\mathbb{B}_{d}\setminus\tilde{\mathbb{B}}_{d}\). Since \(A\in\mathbb{B}_{d}\), the set \(\tilde{\partial}A\) is a contour. Therefore, since \(A\notin\tilde{\mathbb{B}}_{d}\), it follows that \(\tilde{\partial}A\) is not primitive, i.e., it is the disjoint union of two non-empty contours. In particular, there is a finite \(A_{0}\subset\mathbb{Z}^{d}\) such that \(\emptyset\neq\tilde{\partial}A_{0}\subsetneq\tilde{\partial}A\). Since \(A\) and \(A_{0}\) are both finite, the set \((\mathbb{Z}^{d}\setminus A)\cap(\mathbb{Z}^{d}\setminus A_{0})\) is infinite and in particular, non-empty. Hence, since \((\mathbb{Z}^{d}\setminus A)\) is connected and \(\tilde{\partial}A_{0}\subset\tilde{\partial}A\), it follows that \((\mathbb{Z}^{d}\setminus A)\subseteq(\mathbb{Z}^{d}\setminus A_{0})\), i.e., \(A_{0}\subseteq A\). The set \(A\cap A_{0}=A_{0}\) is non-empty, since \(\tilde{\partial}A_{0}\neq\emptyset\). Hence, since \(A\) is connected and \(\tilde{\partial}A_{0}\subset\tilde{\partial}A\), it follows that \(A_{0}\supseteq A\). Therefore, \(A_{0}=A\), in contradiction with \(\tilde{\partial}A_{0}\neq\tilde{\partial}A\).
## Appendix B
In this section Lemma 1.5, Observation 4.1, Lemma 4.2, and Lemma 6.25 will be proved.
The following observation will be used in the proof of Lemma 6.25 as well as Lemma 4.2.
**Observation B.1**.: _Let \(\eta=f(X)\) for a Lipschitz function \(f\) of a standard Gaussian random variable \(X\sim N(0,1)\). Then \(\eta\) has exponential moments, i.e., \(\mathbb{E}(e^{\eta})<\infty\)._
Proof.: Denote by \(C>0\) a constant such that \(f\) is \(C\)-Lipschitz. Then the following holds:
\[\mathbb{E}(e^{\eta}) =\frac{1}{\sqrt{2\pi}}\int_{x=-\infty}^{\infty}e^{f(x)}e^{-\frac{ x^{2}}{2}}dx=\frac{1}{\sqrt{2\pi}}\int_{x=0}^{\infty}\left(e^{f(x)}+e^{f(-x)} \right)e^{-\frac{x^{2}}{2}}dx\] \[\leq\frac{e^{|f(0)|}}{\sqrt{2\pi}}\int_{x=0}^{\infty}\left(e^{|f( x)-f(0)|}+e^{|f(-x)-f(0)|}\right)e^{-\frac{x^{2}}{2}}dx\leq\frac{e^{|f(0)|}}{ \sqrt{2\pi}}\int_{x=0}^{\infty}e^{Cx}e^{-\frac{x^{2}}{2}}dx\] \[=\frac{e^{|f(0)|+\frac{C^{2}}{2}}}{\sqrt{2\pi}}\int_{x=0}^{\infty} e^{-\frac{(x-C)^{2}}{2}}dx<\frac{e^{|f(0)|+\frac{C^{2}}{2}}}{\sqrt{2\pi}}\int_{x=- \infty}^{\infty}e^{-\frac{(x-C)^{2}}{2}}dx=e^{|f(0)|+\frac{C^{2}}{2}},\]
where the first equality is by the definition of \(\eta\), the first inequality is by the triangle inequality, and the second is by the definition of a \(C\)-Lipschitz function.
**Proposition B.2**.: _Let \(\Lambda\subset\mathbb{Z}^{d}\) be finite. Suppose the disorder distributions \(\nu^{\parallel},\nu^{\perp}\) satisfy the condition in (1.11). Then for every integer \(h\) and positive integer \(M\):_
\[\mathbb{P}\left(\sum_{v\in\Lambda}\eta_{\{(v,h),(v,h+1)\}}\geq M\right)\leq \beta e^{-M},\] (B.1)
_where \(\beta=\beta(|\Lambda|,\nu^{\parallel})\) is a constant which depends on \(|\Lambda|,\nu^{\parallel}\). In particular, for \(h=0\), it holds for every positive integer \(M\) that_
\[\mathbb{P}\left(\mathcal{H}^{\eta,\Lambda}(\rho^{\mathrm{Dob}})\geq M\right) \leq\beta e^{-M/2}.\] (B.2)
_Consequently, by the first Borel-Cantelli lemma, the configuration \(\rho^{\mathrm{Dob}}\) has finite energy almost surely, and so the ground energy is almost surely finite._
Proof.: Define \(\beta(|\Lambda|,\nu^{\parallel}):=\mathbb{E}(e^{X})^{|\Lambda|}\) for \(X\sim\nu^{\parallel}\); the expectation is finite trivially in the compactly supported case, and by Observation B.1 in the case of a Lipschitz image of a Gaussian. By the i.i.d. nature of the disorder, the following holds:
\[\mathbb{P}\left(\sum_{v\in\Lambda}\eta_{\{(v,h),(v,h+1)\}}\geq M\right) =\mathbb{P}\left(e^{\sum_{v\in\Lambda}\eta_{\{(v,h),(v,h+1)\}}} \geq e^{M}\right)\leq\frac{\mathbb{E}\left(e^{\sum_{v\in\Lambda}\eta_{\{(v,h),(v,h+1)\}}}\right)}{e^{M}}\] \[=\frac{\prod_{v\in\Lambda}\mathbb{E}(e^{\eta_{\{(v,h),(v,h+1)\}} })}{e^{M}}=\beta e^{-M}.\qed\]
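As a purely illustrative numerical check of the exponential Markov (Chernoff) bound used above, the following sketch estimates both sides of (B.1) for a small column and a compactly supported disorder distribution of our own choosing (the distribution, \(|\Lambda|\) and \(M\) are not taken from the paper):

```python
# Purely illustrative Monte Carlo check of the exponential Markov (Chernoff)
# bound (B.1): P( sum_{v in Lambda} eta_v >= M ) <= E[e^eta]^{|Lambda|} e^{-M}.
# The disorder distribution (uniform on [1, 2]), |Lambda| and M are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n_lambda, M, n_samples = 5, 8, 10**6

eta = rng.uniform(1.0, 2.0, size=(n_samples, n_lambda))
empirical = np.mean(eta.sum(axis=1) >= M)

# E[e^X] for X ~ Unif[1, 2] equals e^2 - e, so beta = (e^2 - e)^{|Lambda|}.
beta = (np.exp(2.0) - np.exp(1.0)) ** n_lambda
bound = beta * np.exp(-M)

print(f"empirical P(sum >= {M}) = {empirical:.3f}")
print(f"Chernoff bound           = {bound:.3f}")
assert empirical <= bound
```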
The proposition below states that from the appearance of a sign change at height \(h\) in a configuration one may deduce a lower bound on its energy.
**Proposition B.3**.: _Let \(\eta:E(\mathbb{Z}^{d+1})\to[0,\infty)\) such that \(\eta_{e}\geq\alpha^{\perp}\) for every \(e\in E^{\perp}(\mathbb{Z}^{d+1})\). Let \(\Lambda\subset\mathbb{Z}^{d}\) be finite and let \(\sigma\in\Omega^{\Lambda,\mathrm{Dob}}\) such that both the sets \(\{x\in\mathbb{Z}^{d+1}\colon\sigma(x)=1\}\) and \(\{y\in\mathbb{Z}^{d+1}\colon\sigma(y)=-1\}\) are connected. If \((u,h)\in\Lambda\times\mathbb{Z}\) such that \(\sigma_{(u,h)}\neq\rho^{\mathrm{Dob}}_{(u,h)}\) then \(\mathcal{H}^{\eta,\Lambda}(\sigma)\geq 2\alpha^{\perp}|h|\)._
Proof.: The proof is similar to (though considerably simpler than) the argument used to prove Lemma 4.4.
Without loss of generality, assume that \(h>0\) and that \(h=\max\{k\in\mathbb{Z}\colon\sigma_{(u,k)}=-1\}\).
Recall the definition of the graph \(\mathcal{G}_{d+1}\) from Lemma 7.14. Define \(\pi_{d+1}:\mathbb{Z}^{d}\times\mathbb{Z}\to\mathbb{Z}\) by \(\pi_{d+1}(u,h)=h\), and for every \(\{x,y\}\in E(\mathbb{Z}^{d+1})\), let \(\pi_{d+1}(\{x,y\})=\{\pi_{d+1}(x),\pi_{d+1}(y)\}\). Note that if \(e,\tilde{e}\in E(\mathbb{Z}^{d+1})\) are adjacent in \(\mathcal{G}_{d+1}\), then \(\pi_{d+1}(e)\subseteq\pi_{d+1}(\tilde{e})\) or \(\pi_{d+1}(e)\supseteq\pi_{d+1}(\tilde{e})\).
By Lemma 7.14. the set
\[\mathcal{I}(\sigma):=\{\{x,y\}\in E(\mathbb{Z}^{d+1})\colon\sigma(x)=1,\sigma (y)=-1\}\]
is connected in \(\mathcal{G}_{d+1}\). In particular, there is a sequence \((\tilde{e}_{i})_{i=1}^{N}\) of edges in \(\mathcal{I}(\sigma)\) such that \(\tilde{e}_{1}=\{(u,h+1),(u,h)\}\), \(\tilde{e}_{N}=\{(w,1),(w,0)\}\) for some \(w\in\mathbb{Z}^{d}\setminus\Lambda\) and \(\tilde{e}_{i},\tilde{e}_{i+1}\) are adjacent in \(\mathcal{G}_{d+1}\) for every \(1\leq i<N\).
Since \(\pi_{d+1}(\tilde{e}_{1})=\{h+1,h\}\), \(\pi_{d+1}(\tilde{e}_{N})=\{1,0\}\) and for every \(1<i\leq N\) it holds that \(\pi_{d+1}(\tilde{e}_{i-1})\subseteq\pi_{d+1}(\tilde{e}_{i})\) or \(\pi_{d+1}(\tilde{e}_{i-1})\supseteq\pi_{d+1}(\tilde{e}_{i})\), it follows that for every \(1\leq k\leq h\) there is necessarily \(1<i_{k}<N\) such that \(\pi_{d+1}(\tilde{e}_{i_{k}})=\{k\}\). Then, for every \(1\leq k\leq h\) it holds that \(\tilde{e}_{i_{k}}\in\mathcal{I}(\sigma)\cap E^{\perp}(\mathbb{Z}^{d+1})\) and \(\tilde{e}_{i_{k}}\cap(\Lambda\times\mathbb{Z})\neq\emptyset\), and hence
\[\mathcal{H}^{\eta,\Lambda}(\sigma)=2\sum_{\begin{subarray}{c}e\in\mathcal{I}( \sigma)\\ e\cap(\Lambda\times\mathbb{Z})\neq\emptyset\end{subarray}}\eta_{e}\geq 2\sum_{k=1}^{h} \eta_{\tilde{e}_{i_{k}}}\geq 2h\alpha^{\perp}.\qed\]
Proof of Lemma 6.25.: First notice that for any \(M>0\)
\[\mathbb{P}\Big{(}G^{\eta,\Delta_{M},(b^{\parallel},b^{\perp})}(\tau) \neq G^{\eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau)\Big{)}\] \[\leq \mathbb{P}\left(\mathrm{GE}^{\Delta_{M},\mathrm{supp}(\tau),(b^{ \parallel},b^{\perp})}(\eta)\neq\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau),(b^{ \parallel},b^{\perp})}(\eta)\right)\] \[+\mathbb{P}\left(\mathrm{GE}^{\Delta_{M},\mathrm{supp}(\tau),(b^{ \parallel},b^{\perp})}(\eta^{\tau})\neq\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau ),(b^{\parallel},b^{\perp})}(\eta^{\tau})\right)\] \[= 2\mathbb{P}\left(\mathrm{GE}^{\Delta_{M},\mathrm{supp}(\tau),(b^ {\parallel},b^{\perp})}(\eta)\neq\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau),(b^ {\parallel},b^{\perp})}(\eta)\right),\]
where the inequality is by the union bound and the subsequent equality is by the fact that the disorder is i.i.d. and \(\tau\) is fixed.
The set \(\{x\in\mathbb{Z}^{d+1}\colon\sigma(x)=1\}\) obviously has a unique infinite connected component for every \(\sigma\in\Omega^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp}),\mathrm{Dob}}\). If the set \(\{x\in\mathbb{Z}^{d+1}\colon\sigma(x)=1\}\) has finite connected components then flipping all signs in such a component yields a configuration in \(\Omega^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp}),\mathrm{Dob}}\) with smaller \(H^{\eta}\). Hence, if \(\sigma_{0}\in\Omega^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp}),\mathrm{Dob}}\) is a configuration minimizing \(\mathcal{H}^{\eta,\Lambda}\), then the set \(\{x\in\mathbb{Z}^{d+1}\colon\sigma_{0}(x)=1\}\) is necessarily connected, and similarly, the set \(\{y\in\mathbb{Z}^{d+1}\colon\sigma_{0}(y)=-1\}\) is connected as well. Hence, if \(\sigma_{0}\notin\Omega^{\Delta_{M},\mathrm{supp}(\tau),(b^{\parallel},b^{\perp}),\mathrm{Dob}}\) then \(\mathcal{H}^{\eta,\Lambda}(\sigma_{0})\geq 2\underline{\alpha}^{\perp}(M+1)\), by Proposition B.3. It follows that
\[\{\eta:\mathrm{GE}^{\Delta_{M},\mathrm{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\neq\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\}\\ \subseteq\{\eta:\mathrm{GE}^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp})}(\eta)\geq 2\underline{\alpha}^{\perp}(M+1)\}\subseteq\{\eta:\mathcal{H}^{\eta,\Lambda}(\rho^{\mathrm{Dob}})\geq 2\underline{\alpha}^{\perp}(M+1)\}\]
where the second inclusion is by the definition of the ground energy, since \(\rho^{\mathrm{Dob}}\in\Omega^{\Lambda,\mathrm{supp}(\tau),(b^{\parallel},b^{\perp})}\). By the above inclusion and (B.2),
\[\mathbb{P}\left(G^{\eta,\Delta_{M},(b^{\parallel},b^{\perp})}(\tau)\neq G^{ \eta,\Lambda,(b^{\parallel},b^{\perp})}(\tau)\right)\leq 2\beta e^{- \underline{\alpha}^{\perp}(M+1)}\xrightarrow[M\to\infty]{}0\]
as required.
Proof of Observation 4.1.: By definition of \(\Omega^{\Lambda,\mathrm{Dob}}\) the configurations \(\sigma,\sigma^{\prime}\) may only differ in finitely many places, and so the difference \(H^{\eta}(\sigma)-H^{\eta}(\sigma^{\prime})\) is well defined and denoting \(D=\{x:\sigma_{x}\neq\sigma_{x}^{\prime}\}\), both
\[H^{\eta}(\sigma)-H^{\eta}(\sigma^{\prime})=-2\sum_{\{x,y\}\in\partial D}\eta_{ \{x,y\}}\sigma_{x}\sigma_{y}\]
and also
\[\mathcal{H}^{\eta,\Lambda}(\sigma)-\mathcal{H}^{\eta,\Lambda}(\sigma^{\prime})=\sum_{\{x,y\}}\eta_{\{x,y\}}\left(\sigma_{x}^{\prime}\sigma_{y}^{\prime}-\sigma_{x}\sigma_{y}\right)=-2\sum_{\{x,y\}\in\partial D}\eta_{\{x,y\}}\sigma_{x}\sigma_{y}.\qed\]
Proof of Lemma 1.5.: Let \(M\) be an integer larger than \(\frac{1}{\underline{\alpha}^{\perp}}\mathcal{H}^{\eta,\Lambda}(\rho^{\mathrm{ Dob}})\) (which is finite by Proposition B.2). Let \(\Delta_{M}:=\Lambda\times\{-M,\ldots,M\}\). The function \(\mathcal{H}^{\eta,\Lambda}\) is well defined on the finite \(\Omega^{\Delta_{M},\rho^{\mathrm{Dob}}}\subset\Omega^{\Lambda,\mathrm{Dob}}\). Thus, there is a ground configuration \(\sigma^{\eta,\Delta_{M},\mathrm{Dob}}\) in \(\Omega^{\Delta_{M},\rho^{\mathrm{Dob}}}\) with respect to the Hamiltonian \(\mathcal{H}^{\eta,\Lambda}\), which is unique by condition (4.1). By Observation 4.1, \(\sigma^{\eta,\Delta_{M},\mathrm{Dob}}\) is also the unique ground configuration in \(\Omega^{\Delta_{M},\rho^{\mathrm{Dob}}}\) with respect to the Hamiltonian \(H^{\eta}\).
Consider a configuration \(\sigma\in\Omega^{\Lambda,\mathrm{Dob}}\setminus\Omega^{\Delta_{M},\rho^{ \mathrm{Dob}}}\). Each of the sets \(\{x\in\mathbb{Z}^{d+1}\colon\sigma(x)=1\}\), \(\{y\in\mathbb{Z}^{d+1}\colon\sigma(y)=-1\}\) obviously has a unique infinite connected component. If either of the sets \(\{x\in\mathbb{Z}^{d+1}\colon\sigma(x)=1\}\), \(\{y\in\mathbb{Z}^{d+1}\colon\sigma(y)=-1\}\) has a finite connected component,
then flipping all signs in such a component yields a configuration \(\tilde{\sigma}\in\Omega^{\Lambda,\mathrm{Dob}}\) such that \(H^{\eta}(\sigma)-H^{\eta}(\tilde{\sigma})>0\). If neither of the sets \(\{x\in\mathbb{Z}^{d+1}\colon\sigma(x)=1\}\), \(\{y\in\mathbb{Z}^{d+1}\colon\sigma(y)=-1\}\) has finite connected components, then both sets are connected and hence, by Observation 4.1 and Proposition B.3,
\[H^{\eta}(\sigma)-H^{\eta}(\rho^{\mathrm{Dob}})=\mathcal{H}^{\eta,\Lambda}( \sigma)-\mathcal{H}^{\eta,\Lambda}(\rho^{\mathrm{Dob}})\geq\underline{\alpha} ^{\perp}(M+1)-\mathcal{H}^{\eta,\Lambda}(\rho^{\mathrm{Dob}})>0.\]
Therefore, no \(\sigma\in\Omega^{\Lambda,\mathrm{Dob}}\setminus\Omega^{\Delta_{M},\rho^{ \mathrm{Dob}}}\) is a ground configuration in \(\Omega^{\Lambda,\mathrm{Dob}}\) and hence \(\sigma^{\eta,\Delta_{M},\mathrm{Dob}}\) is also the unique ground configuration in \(\Omega^{\Lambda,\mathrm{Dob}}\).
Finally, since every configuration that differs from \(\sigma^{\eta,\Delta_{M},\mathrm{Dob}}\) in finitely many places is necessarily in \(\Omega^{\Lambda,\mathrm{Dob}}\), it follows that \(\sigma^{\eta,\Delta_{M},\mathrm{Dob}}\) is also a ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\); it is also almost surely unique, since almost surely no configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\setminus\Omega^{\Lambda,\mathrm{Dob}}\) is a ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) with respect to the Hamiltonian \(H^{\eta}\), as we will prove next.
Take \(\gamma>\ln\beta(|\Lambda|,\nu^{\parallel})\) (see Proposition B.2), and let
\[J:=\{j\in\mathbb{Z}:\sum_{v\in\Lambda}\eta_{\{(v,j),(v,j+1)\}}<\gamma\}.\]
We will show that if the set \(J\) is infinite, then no configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\setminus\Omega^{\Lambda,\mathrm{Dob}}\) is a ground configuration in \(\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\) with respect to the Hamiltonian \(H^{\eta}\). Since the disorder \(\eta\) is an independent field, it readily follows from (B.1) that \(J\) is almost surely infinite, and we are done.
Therefore, assume that the set \(J\) is infinite and let \(\sigma\in\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\setminus\Omega^ {\Lambda,\mathrm{Dob}}\). Then, the set \(I:=\{i\in\mathbb{Z}\colon\exists u\in\mathbb{Z}^{d}\text{ such that }\sigma_{(u,i)}\neq\rho^{\mathrm{Dob}}_{(u,i)}\}\) is infinite, and hence there are \(0\leq j_{1}<j_{2}\) or \(j_{1}<j_{2}\leq 0\) in \(J\) such that \(|I\cap[j_{1}+1,j_{2}]|>\frac{\gamma}{\underline{\alpha}^{\perp}d}\). Consider the configuration \(\tilde{\sigma}\), defined as follows: for every \(u\in\mathbb{Z}^{d}\), \(\tilde{\sigma}_{(u,i)}=\rho^{\mathrm{Dob}}_{(u,i)}\) if \(j_{1}+1\leq i\leq j_{2}\) and \(\tilde{\sigma}_{(u,i)}=\sigma_{(u,i)}\) otherwise. Clearly, \(\tilde{\sigma}\in\Omega^{\Lambda\times\mathbb{Z},\rho^{\mathrm{Dob}}}\), \(\{x\in\mathbb{Z}^{d+1}\colon\tilde{\sigma}_{x}\neq\sigma_{x}\}\) is finite and
\[H^{\eta}(\sigma)-H^{\eta}(\tilde{\sigma})\geq 4\underline{\alpha}^{\perp}d|I\cap[j_{1}+1,j_{2}]|-2\sum_{v\in \Lambda}\eta_{\{(v,j_{1}),(v,j_{1}+1)\}}-2\sum_{v\in\Lambda}\eta_{\{(v,j_{2}),(v,j_{2}+1)\}}\] \[> 4\gamma-4\gamma=0.\]
Hence, \(\sigma\) is not a ground configuration, as desired.
Proof of Lemma 4.2.: First notice that for any \(\tau\in\mathcal{S}\cap\{\tau:\max_{u\in\Lambda}|\tau(u)|=r\}\) it holds that \(\mathrm{TV}(\tau)\geq 2dr\), and so
\[\{\eta:\tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp}) \}\subset\{\eta:|G^{\eta,\Lambda}(\tau)|\geq d\alpha^{\perp}\}\] (B.3)
by the definition of \((\alpha^{\parallel},\alpha^{\perp})\)-admissibility.
Now use union bound and (B.3) to get
\[\mathbb{P}\left(\exists\tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{ \parallel},\alpha^{\perp})\colon\max_{u\in\Lambda}|\tau(u)|=r\right)\leq(2r+1)^ {|\Lambda|}\max_{\begin{subarray}{c}r\in\mathcal{S}\\ \max_{u\in\Lambda}|\tau(u)|=r\end{subarray}}\mathbb{P}(\tau\in\mathcal{AS})\] \[\quad\leq 2(2r+1)^{|\Lambda|}\mathbb{P}\left(|G^{\eta,\Lambda}(\tau) |\geq d\alpha^{\perp}\right)\leq 2(2r+1)^{|\Lambda|}\mathbb{P}\left(\mathrm{GE}^{ \Lambda,\mathrm{Dob}}(\eta)\geq d\alpha^{\perp}\right)\] \[\quad\leq 2(2r+1)^{|\Lambda|}\mathbb{P}\left(\mathcal{H}^{\eta, \Lambda}(\rho^{\mathrm{Dob}})\geq d\alpha^{\perp}\right)\leq\beta 2(2r+1)^{|\Lambda|}e^{-d \alpha^{\perp}/2}\]
where the first inequality is by the union bound, the second is by (B.3), the third is by the fact that the parallel disorder is i.i.d., the fourth is by the definition of the ground energy, and the fifth is by (B.2).
Noticing that, by the above, \(\mathbb{P}\left(\exists\tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})\colon\max_{u\in\Lambda}|\tau(u)|=r\right)\) is summable with respect to \(r\) and applying the Borel-Cantelli lemma, one gets that with probability one only finitely many of the events
\[\{\exists\tau\in\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp })\colon\max_{u\in\Lambda}|\tau(u)|=r\}\]
occur, and in particular, with probability one, \(|\mathcal{AS}^{\eta,\Lambda}(\alpha^{\parallel},\alpha^{\perp})|<\infty\), as required.
|
2302.14341 | Generalization of the Kuramoto model to the Winfree model by a symmetry
breaking coupling | We construct a nontrivial generalization of the paradigmatic Kuramoto model
by using an additional coupling term that explicitly breaks its rotational
symmetry resulting in a variant of the Winfree Model. Consequently, we observe
the characteristic features of the phase diagrams of both the Kuramoto model
and the Winfree model depending on the degree of the symmetry breaking coupling
strength for unimodal frequency distribution. The phase diagrams of both the
Kuramoto and the Winfree models resemble each other for symmetric bimodal
frequency distribution for a range of the symmetry breaking coupling strength
except for region shift and difference in the degree of spread of the
macroscopic dynamical states and bistable regions. The dynamical transitions in
the bistable states are characterized by an abrupt (first-order) transition in
both the forward and reverse traces. For asymmetric bimodal frequency
distribution, the onset of bistable regions depends on the degree of the
asymmetry. Large degree of the symmetry breaking coupling strength promotes the
synchronized stationary state, while a large degree of heterogeneity,
proportional to the separation between the two central frequencies, facilitates
the spread of the incoherent and standing wave states in the phase diagram for
a low strength of the symmetry breaking coupling. We deduce the low-dimensional
equations of motion for the complex order parameters using the Ott-Antonsen
ansatz for both unimodal and bimodal frequency distributions. We also deduce
the Hopf, pitchfork, and saddle-node bifurcation curves from the evolution
equations for the complex order parameters mediating the dynamical transitions.
Simulation results of the original discrete set of equations of the generalized
Kuramoto model agree well with the analytical bifurcation curves. | M. Manoranjani, Shamik Gupta, D. V. Senthilkumar, V. K. Chandrasekar | 2023-02-28T06:25:13Z | http://arxiv.org/abs/2302.14341v1 | # Generalization of the Kuramoto model to the Winfree model by a symmetry breaking coupling
###### Abstract
We construct a nontrivial generalization of the paradigmatic Kuramoto model by using an additional coupling term that explicitly breaks its rotational symmetry resulting in a variant of the Winfree Model. Consequently, we observe the characteristic features of the phase diagrams of both the Kuramoto model and the Winfree model depending on the degree of the symmetry breaking coupling strength for unimodal frequency distribution. The phase diagrams of both the Kuramoto and the Winfree models resemble each other for symmetric bimodal frequency distribution for a range of the symmetry breaking coupling strength except for region shift and difference in the degree of spread of the macroscopic dynamical states and bistable regions. The dynamical transitions in the bistable states are characterized by an abrupt (first-order) transition in both the forward and reverse traces. For asymmetric bimodal frequency distribution, the onset of bistable regions depends on the degree of the asymmetry. Large degree of the symmetry breaking coupling strength promotes the synchronized stationary state, while a large degree of heterogeneity, proportional to the separation between the two central frequencies, facilitates the spread of the incoherent and standing wave states
in the phase diagram for a low strength of the symmetry breaking coupling. We deduce the low-dimensional equations of motion for the complex order parameters using the Ott-Antonsen ansatz for both unimodal and bimodal frequency distributions. We also deduce the Hopf, pitchfork, and saddle-node bifurcation curves from the evolution equations for the complex order parameters mediating the dynamical transitions. Simulation results of the original discrete set of equations of the generalized Kuramoto model agree well with the analytical bifurcation curves.
Keywords: Kuramoto model, Winfree model, Bifurcation, Asymmetric bimodal distribution.
## 1 Introduction
Symmetry (translational or rotational) prevailing in coupled dynamical networks due to the coupling geometry manifests in a wide variety of natural systems and in their intriguing macroscopic dynamical states [1]. Nevertheless, symmetry breaking couplings have been shown to be a source of a plethora of collective dynamical behaviors that are inherent to them and are mostly inaccessible with symmetry preserving couplings. In particular, networks of the paradigmatic Stuart-Landau oscillators with symmetry breaking coupling have been employed to unravel several collective dynamical states that mimic a variety of collective patterns observed in nature and technology. For instance, symmetry breaking coupling facilitates the transition from a homogeneous to an inhomogeneous steady state [2], and symmetry breaking interaction has been identified as an essential feature for the genesis of partially coherent inhomogeneous spatial patterns, namely the chimera death state [3; 4; 5]. Multicluster oscillation death states have been observed in nonlocally coupled Stuart-Landau oscillators with symmetry breaking coupling [6]. Further, the interplay of the nonisochronicity parameter and the symmetry breaking coupling is found to facilitate the onset of different variants of the chimera death state, such as the multichimera death state and periodic chimera death states, in nonlocally coupled Stuart-Landau oscillators [7]. The effect of the symmetry breaking coupling has also been investigated on the phenomenon of reviving oscillations [8]. Recently, the effect of the symmetry breaking mean-field coupling on the phenomenon of the aging transition has also been investigated. Conjugate couplings, another form of symmetry breaking coupling, have also been widely employed in the literature [9; 10; 11]. Note that the reports pointed out above are only the tip of the iceberg and not an exhaustive list of studies that employed symmetry breaking coupling in networks of Stuart-Landau oscillators.
Despite the substantial investigations on the effect of the symmetry breaking coupling in networks of Stuart-Landau oscillators, there is a lacuna in understanding its nontrivial role in phase-only models, which indeed allow for an exact analytical treatment of the
macroscopic dynamical states in most cases. In particular, phase models such as the Winfree and Kuramoto models, and their variants, have been extensively employed in the literature to investigate the emergence of various intriguing collective dynamical states. Interaction among the phase oscillators in the Winfree model is modeled by a phase-dependent pulse function and a sensitivity function. The former characterizes the mean-field, while the latter characterizes the response of the individual oscillators to the mean-field [12; 13]. The Winfree model is one of its kind, representing a class of pulse-coupled biological oscillators such as flashing fireflies [14], applauding audiences [15] and many more. Interaction among the phase oscillators in the Kuramoto model is modeled by the sine of the difference between the phases of the oscillators, and the model has been widely employed to investigate the emergence of spontaneous synchronization in a wide variety of biological, chemical, mechanical and physical systems [16; 17; 18]. Examples include cardiac pacemakers [19], Josephson junction arrays [20], and power grids [21].
A recent study has generalized the Kuramoto model by including an additional interaction term that breaks the rotational symmetry of the dynamics explicitly and unveiled a rich phase diagram with stationary and standing wave phases due to the symmetry breaking interaction [22]. Specifically, the authors considered unimodal frequency distributions and revealed the emergence of a stationary state, characterized by a time independent amplitude and phase of the complex Kuramoto order parameter, facilitated by the symmetry breaking interaction, which is otherwise absent in the original Kuramoto model that allows for the rotational symmetry of the dynamics. Interestingly, in this work, we elucidate that the Kuramoto model can be translated into the Winfree model by the introduction of the additional symmetry breaking coupling and, consequently, one can obtain the phase diagrams of both these models simply by tuning the symmetry breaking parameter \(q\), thereby bridging the dynamics of both models. Note that the macroscopic dynamical states of the pulse coupled biological oscillators with different sensitivity functions, characterizing the phase-response curves of biological oscillators, are peculiar to the Winfree model and its generalizations, and are far from reach for the Kuramoto model and its variants. In particular, we consider both unimodal and bimodal frequency distributions to explore the phase diagrams for various values of the symmetry breaking parameter \(q\). On the one hand, we observe the typical phase diagram of the Kuramoto model, characterized only by incoherent and standing wave states, in the absence of the symmetry breaking interaction for the unimodal frequency distribution. On the other hand, we observe a phase diagram with the incoherent state and the standing wave pattern along with the synchronized stationary state and bistabilities among them, typical of the Winfree model, for \(q=1\). For an intermediate and increasing value of \(q\in(0,1)\), one finds the onset of the stationary state, eventually the emergence of bistability among these states in the phase diagram, and the enlargement of the bistable regions, resulting in the phase diagram of the Winfree model.
All three states are also observed in both the Kuramoto and Winfree models for symmetric bimodal frequency distributions, along with regions of bistability. The degree of the spread of the different macroscopic dynamical states depends on the strength of the symmetry breaking parameter \(q\). Interestingly, for asymmetric bimodal frequency distributions, an increase in the degree of asymmetry of the frequency distribution favors the onset of bistable regions even for rather low values of \(q\), which otherwise cannot be observed with the symmetric bimodal and unimodal frequency distributions. We arrive at the phase diagrams by numerical simulation of the original equations of motion. We deduce the reduced low-dimensional evolution equations for the order parameter using the Ott-Antonsen ansatz for both unimodal and bimodal frequency distributions. We also deduce the Hopf, pitchfork and saddle-node bifurcation curves from the governing equations of motion for the order parameters, which mediate the dynamical transitions in the phase diagrams. The homoclinic bifurcation curve is obtained using the XPPAUT software.
The plan of the paper is as follows. In Sec. II, we generalize the Kuramoto model by introducing a symmetry breaking coupling and elucidate that the latter bridges the Kuramoto model and the Winfree model. We deduce the reduced low-dimensional evolution equations for the complex order parameters corresponding to the discrete set of generalized Kuramoto model equations using the Ott-Antonsen ansatz for both unimodal and bimodal frequency distributions in Sec. III. We also deduce the Hopf, pitchfork and saddle-node bifurcation curves from the evolution equations for the complex order parameters in Sec. III, mediating the dynamical transitions among the incoherent, standing wave and synchronized stationary states. In Sec. IV, we discuss the observed dynamical states and their transitions in the various phase diagrams. Finally, in Sec. V, we summarize the results.
## 2 Model
We consider a nontrivial generalization of the Kuramoto model by including an interaction term that explicitly breaks the rotational symmetry of the dynamics [22]. The phase \(\theta_{i}\) is governed by the set of N ordinary differential equations (ODEs),
\[\dot{\theta}_{i}=\omega_{i}+\frac{\varepsilon}{N}\sum_{j=1}^{N}\big{[}\sin( \theta_{j}-\theta_{i})+q\sin(\theta_{j}+\theta_{i})\big{]}, \tag{1}\]
for \(i=1,\ldots,N\), where \(N\gg 1\). Here \(\theta_{i}(t)\) is the phase of the \(i\)th oscillator at time \(t\), \(\varepsilon\geq 0\) is the coupling strength, and \(q\) is the strength of the symmetry breaking coupling. Note that Eq. (1) reduces to the Kuramoto model by setting \(q=0\) and on identifying \(\varepsilon\) with the parameter \(K>0\). Equation (1) can also be viewed as a variant of the celebrated Winfree model [23; 24; 25; 26] when \(q=1\).
The Winfree model takes the form
\[\dot{\theta_{i}}=\omega_{i}+Q(\theta_{i})\sum_{j=1}^{N}P(\theta_{j}), \tag{2}\]
where \(P(\theta_{j})\) is the phase dependent pulse function and the functional form of the response function \(Q(\theta)\) characterizes the phase-response curves of certain biological oscillators. From Eq. (1), it is easy to recognize that \(Q(\theta)=-2\sin(\theta)\) and \(P(\theta)=\cos(\theta)\). It is also evident that the symmetry breaking parameter \(q\) bridges the Kuramoto and the Winfree models. Equation (1) corresponds to the Kuramoto model when \(q=0\) and it corresponds to a variant of the Winfree model when \(q=1\), as in Eq. (2). We consider the frequencies of the phase oscillators to be distributed according to both the unimodal Lorentzian distribution given as
\[g(\omega)=\frac{\gamma}{\pi((\omega-\omega_{0})^{2}+\gamma^{2})};\quad\gamma>0, \tag{3}\]
and bimodal Lorentzian distribution represented as
\[g(\omega)=\frac{1}{2\pi}\left[\frac{\gamma_{1}}{((\omega-\omega_{0})^{2}+\gamma_{1}^{2})}+\frac{\gamma_{2}}{((\omega+\omega_{0})^{2}+\gamma_{2}^{2})}\right];\quad\quad\quad\gamma_{1},\gamma_{2}>0. \tag{4}\]
Here \(\gamma\), \(\gamma_{1}\) and \(\gamma_{2}\) are the width parameters (half width at half maximum) of the Lorentzians and \(\pm\omega_{0}\) are their central frequencies. Note that \(\omega_{0}\) corresponds to the degree of detuning in the system, which is proportional to the separation between the two central frequencies. Note that the bimodal distribution \(g(\omega)\) is symmetric about zero when \(\gamma_{1}=\gamma_{2}\). It is also to be noted that \(g(\omega)\) in Eq. (4) is bimodal if and only if the separation between the central frequencies is sufficiently large compared to their widths. To be precise, it is required that \(\omega_{0}>\gamma_{1,2}/\sqrt{3}\) for the distribution to be bimodal; otherwise, the classical results for the unimodal distribution hold good.
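For the numerics reported below, natural frequencies distributed according to (3) or (4) can be generated by inverse-transform sampling of the Lorentzian (Cauchy) distribution. A minimal sketch follows; the function names and parameter values are our own illustrative choices, and (4) is read as an equal-weight mixture of the two Lorentzians:

```python
# Inverse-transform sampling of the natural frequencies (illustrative values).
import numpy as np

def sample_unimodal(n, omega0, gamma, rng):
    """Draw n frequencies from the Lorentzian g(omega) of Eq. (3)."""
    u = rng.uniform(size=n)
    return omega0 + gamma * np.tan(np.pi * (u - 0.5))

def sample_bimodal(n, omega0, gamma1, gamma2, rng):
    """Draw n frequencies from the equal-weight bimodal mixture of Eq. (4)."""
    from_first = rng.uniform(size=n) < 0.5
    w_plus = sample_unimodal(n, omega0, gamma1, rng)     # peak at +omega0, width gamma1
    w_minus = sample_unimodal(n, -omega0, gamma2, rng)   # peak at -omega0, width gamma2
    return np.where(from_first, w_plus, w_minus)

rng = np.random.default_rng(1)
omega_uni = sample_unimodal(10**4, omega0=1.0, gamma=1.0, rng=rng)
omega_bi = sample_bimodal(10**4, omega0=1.5, gamma1=0.6, gamma2=1.2, rng=rng)
```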
Heterogeneity in the frequency distribution plays a crucial role in the manifestation of a plethora of collective dynamics in a vast variety of natural systems. In particular, coexisting co-rotating and counter-rotating systems, characterized by positive and negative frequencies, respectively, are widespread in nature. For instance, counter-rotating spirals are observed in the protoplasm of the Physarum plasmodium [27], and counter-rotating vortices are inevitable in the atmosphere and ocean [28; 29; 30], in the magnetohydrodynamics of plasma flow [31], in Bose-Einstein condensates [32; 33], and in other physical systems [34; 35; 36]. Very recently, counter-rotating-frequency-induced dynamical effects were also reported in coupled Stuart-Landau oscillators with symmetry preserving as well as symmetry breaking couplings [37]. The coexistence of co-rotating and counter-rotating oscillators was initially identified by Tabor [38], which was followed by a series of works employing co-rotating and counter-rotating oscillators. All these physical systems strongly suggest
that counter-rotating time-evolving dynamical systems indeed exist in nature and play a pertinent role in the manifestation of their intriguing collective dynamics.
In the following, we will deduce the low-dimensional evolution equations for the complex macroscopic order parameters corresponding to both the unimodal and bimodal frequency distributions using the Ott-Antonsen (OA) ansatz [39; 40]. Subsequently, we also deduce the various bifurcation curves facilitating the dynamical transitions among the observed dynamical states in the phase diagrams.
## 3 Low-dimensional evolution equations for the macroscopic order parameters
We now provide an analysis of the dynamics (1), in the limit \(N\rightarrow\infty\), by invoking the Ott-Antonsen ansatz. In this limit, the dynamics of the discrete set of equations (1) can be captured by the probability distribution function \(f(\theta,\omega,t)\), defined such that \(f(\theta,\omega,t)\mathrm{d}\theta\) gives the probability of oscillators with phase in the range \([\theta,\theta+\mathrm{d}\theta]\) at time \(t\). The distribution is \(2\pi\)-periodic in \(\theta\) and obeys the normalization
\[\int_{0}^{2\pi}\mathrm{d}\theta\ f(\theta,\omega,t)=g(\omega)\ \forall\ \omega. \tag{5}\]
Since the dynamics (1) conserves the number of oscillators with a given \(\omega\), the time evolution of \(f\) follows the continuity equation
\[\frac{\partial f}{\partial t}+\frac{\partial(fv)}{\partial\theta}=0, \tag{6}\]
where \(v(\theta,\omega,t)\) is the angular velocity of the oscillators. From Eq. (1), we have,
\[v(\theta,\omega,t)=\omega+\frac{\varepsilon}{2i}[(ze^{-i\theta}-z^{\star}e^{ i\theta})+q(ze^{i\theta}-z^{\star}e^{-i\theta})], \tag{7}\]
where \(z^{\star}\) denotes the complex conjugate of the macroscopic order parameter defined as
\[z=\int_{-\infty}^{\infty}\int_{0}^{2\pi}f(\theta,\omega,t)e^{i\theta}d\theta d\omega. \tag{8}\]
Now, \(f(\theta,\omega,t)\) can be expanded in terms of Fourier series of the form
\[f(\theta,\omega,t)=\frac{g(\omega)}{2\pi}\left[1+\sum_{n=1}^{\infty}\left( \alpha_{n}(\omega,t)e^{in\theta}+\mathrm{c.c.}\right)\right], \tag{9}\]
where, \(\alpha_{n}(\omega,t)\) is the \(n\)th Fourier coefficient, while c.c. denotes complex conjugation of the preceding sum within the brackets. The normalization condition in (5) is satisfied by the presence of the prefactor of \(g(\omega)\) in (9). The
Ott-Antonsen ansatz consists in assuming [39; 40]
\[\alpha_{n}(\omega,t)=\left[\alpha(\omega,t)\right]^{n}. \tag{10}\]
Now, it is straightforward to obtain
\[\frac{\partial\alpha}{\partial t}+i\omega\alpha+\frac{\varepsilon}{2}\left[(z\alpha^{2}-z^{\star})+q(z-z^{\star}\alpha^{2})\right]=0, \tag{11}\]
where,
\[z^{\star}=\int_{-\infty}^{\infty}\alpha(t,\omega)g(\omega)d\omega. \tag{12}\]
### Unimodal Distribution
Substituting the partial fraction expansion of the unimodal frequency distribution \(g(\omega)\) (3) in Eq. (12) and evaluating the integral using an appropriate contour integral, one can obtain the order parameter as
\[z(t)=\alpha^{\star}(\omega_{0}-i\gamma,t). \tag{13}\]
From (11) and (13), one can obtain the evolution equation for the complex order parameter as
\[\frac{\partial z}{\partial t}-i(\omega_{0}+i\gamma)z+\frac{\varepsilon}{2}\bigg{[}\big{(}|z|^{2}z-z\big{)}+q\big{(}z^{\star}-z^{3}\big{)}\bigg{]}=0. \tag{14}\]
The above evolution equation for the complex order parameter \(z(t)=r(t)e^{i\psi(t)}\) can be expressed in terms of the evolution equations in \(r\) and \(\psi\) as
\[\tfrac{\mathrm{d}r}{\mathrm{d}t} =-\gamma r-\frac{r\varepsilon}{2}(r^{2}-1)(1-q\cos(2\psi)), \tag{15a}\] \[\tfrac{\mathrm{d}\psi}{\mathrm{d}t} =\omega_{0}+\frac{\varepsilon q}{2}(r^{2}+1)\sin(2\psi). \tag{15b}\]
The above equations govern the reduced low-dimensional order parameter dynamics, which actually corresponds to the dynamics of the original discrete set of equations (1) in the limit \(N\to\infty\) for the unimodal Lorentzian distribution function \(g(\omega)\) (3). Now, we discuss the various asymptotic macroscopic dynamical states admitted by Eq. (15).
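These asymptotic states can also be identified directly by integrating (15); the following minimal sketch (with illustrative parameter values of our own choosing, not those of the figures) classifies the long-time behaviour of \(r(t)\):

```python
# Integrating the reduced equations (15) and classifying the long-time behaviour
# of r(t); the parameter values below are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

def reduced_rhs(t, y, eps, q, gamma, omega0):
    r, psi = y
    dr = -gamma * r - 0.5 * eps * r * (r**2 - 1.0) * (1.0 - q * np.cos(2.0 * psi))
    dpsi = omega0 + 0.5 * eps * q * (r**2 + 1.0) * np.sin(2.0 * psi)
    return [dr, dpsi]

def classify(eps, q, gamma=1.0, omega0=1.0):
    sol = solve_ivp(reduced_rhs, (0.0, 500.0), [0.1, 0.0],
                    args=(eps, q, gamma, omega0), rtol=1e-8, dense_output=True)
    r_tail = sol.sol(np.linspace(400.0, 500.0, 2000))[0]
    if r_tail.max() < 1e-3:
        return "IC"   # incoherent: r -> 0
    if r_tail.max() - r_tail.min() < 1e-4:
        return "SS"   # synchronized stationary: r -> non-zero constant
    return "SW"       # standing wave: r(t) keeps oscillating

for eps in (1.0, 2.5, 4.0):
    print(eps, classify(eps, q=0.5))
```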
#### 3.1.1 Incoherent (IC) state:
The incoherent (IC) state is characterized by time independent \(z\) satisfying \(z=z^{\star}=0\) (thus representing a stationary state of the dynamics (15)); correspondingly, one has \(r=0\). The linear stability of such a state is determined by
linearizing Eq. (14) around \(z=0\). Writing \(z=u\) with \(|u|\ll 1\), we obtain
\[\frac{\partial u}{\partial t}+(\gamma-i\omega_{0})u-\frac{\varepsilon}{2}\big{[} (u)-q(u^{\star})\big{]}=0. \tag{16}\]
Decomposing \(u=u_{x}+iu_{y}\) yields
\[\frac{\partial}{\partial t}\begin{bmatrix}u_{x}\\ u_{y}\end{bmatrix}=M\begin{bmatrix}u_{x}\\ u_{y}\end{bmatrix}; \tag{17}\]
\[M\equiv\begin{bmatrix}-\gamma+\frac{\varepsilon}{2}\big{[}1-q\big{]}&- \omega_{0}\\ \omega_{0}&-\gamma+\frac{\varepsilon}{2}\big{[}1+q\big{]}\end{bmatrix}.\]
The matrix \(M\) has the characteristic eigenvalues
\[\lambda_{1,2}=\frac{-2\gamma+\varepsilon\pm\sqrt{\Delta}}{2}, \tag{18}\]
with \(\Delta=(\varepsilon^{2}q^{2}-4\omega_{0}^{2})\). Note that we have \(\lambda_{1}>\lambda_{2}\). The stability threshold for the incoherent state is then obtained by analysing \(\lambda_{1}\) as a function of \(\varepsilon\) and \(q\), and seeking the particular value of \(\varepsilon\) at which \(\lambda_{1}\) vanishes for a given \(q\). The stability threshold can be obtained as
\[\varepsilon_{HB}=2\gamma, \text{for}\;\;\Delta\leq 0, \tag{19}\] \[\varepsilon_{PF}=2\sqrt{\frac{\gamma^{2}+\omega_{0}^{2}}{1+q^{2}}} \text{for}\;\;\Delta>0. \tag{20}\]
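As a numerical cross-check (our own illustration, with arbitrary parameter values), the loss of stability of the incoherent state can also be located by monitoring the largest real part of the eigenvalues of the linearization \(M\) in (17) as \(\varepsilon\) is increased:

```python
# Locating the loss of stability of the incoherent state from the linearization (17);
# gamma, omega0 and the range of eps are illustrative.
import numpy as np

def lambda_max(eps, q, gamma, omega0):
    M = np.array([[-gamma + 0.5 * eps * (1.0 - q), -omega0],
                  [omega0, -gamma + 0.5 * eps * (1.0 + q)]])
    return np.linalg.eigvals(M).real.max()

def ic_threshold(q, gamma=1.0, omega0=1.0, eps_hi=20.0, tol=1e-10):
    lo, hi = 0.0, eps_hi
    while hi - lo > tol:          # bisection; lambda_max is nondecreasing in eps
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lambda_max(mid, q, gamma, omega0) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(ic_threshold(q=0.0))   # reproduces eps_HB = 2*gamma, since Delta < 0 for q = 0
print(ic_threshold(q=1.0))
```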
#### 3.1.2 Synchronized stationary state (SSS):
Now, we explore the possibility of existence of the synchronized stationary state. Requiring that \(r\) and \(\psi\) have time-independent non-zero values in this case and hence equating the left hand side of equations (15) to zero, we obtain the two coupled equations for the synchronized stationary state as
\[\tfrac{\varepsilon q}{2}\cos(2\psi) = \frac{\gamma}{(r^{2}-1)}+\frac{\varepsilon}{2}, \tag{21a}\] \[\tfrac{\varepsilon q}{2}\sin(2\psi) = -\frac{\omega_{0}}{(r^{2}+1)}. \tag{21b}\]
With some algebra, one can obtain the following expressions for the stationary \(r\) and \(\psi\):
\[\tfrac{\varepsilon^{2}q^{2}}{4} = \left(\frac{\gamma}{(r^{2}-1)}+\frac{\varepsilon}{2}\right)^{2}+ \left(\frac{\omega_{0}}{(r^{2}+1)}\right)^{2}\!\!, \tag{22a}\] \[\tan(2\psi) = \frac{(1-r^{2})(\omega_{0})}{(r^{2}+1)(\gamma+\frac{\varepsilon} {2}(r^{2}-1))}. \tag{22b}\]
\(r\) and \(\psi\) can be calculated for a fixed set of parameters by numerically solving the above set of equations. The solution is then substituted back into the evolution equations for the low-dimensional order parameters to deduce the characteristic equation, whose eigenvalues are used to determine the saddle-node bifurcation curve in the appropriate two-parameter phase diagram.
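A minimal sketch of the first step of this procedure (with illustrative parameter values of our own choosing) brackets a root of (22a) for \(r\in(0,1)\) and recovers \(\psi\) from (22b):

```python
# Numerically solving Eq. (22a) for the stationary amplitude r in (0, 1) and
# recovering psi from Eq. (22b); the parameter values are illustrative.
import numpy as np
from scipy.optimize import brentq

def stationary_r(eps, q, gamma, omega0):
    def F(r):
        return ((gamma / (r**2 - 1.0) + 0.5 * eps)**2
                + (omega0 / (r**2 + 1.0))**2
                - 0.25 * (eps * q)**2)
    grid = np.linspace(1e-3, 1.0 - 1e-3, 2000)   # stay away from the pole at r = 1
    vals = np.array([F(r) for r in grid])
    sign_change = np.where(vals[:-1] * vals[1:] < 0.0)[0]
    if sign_change.size == 0:
        return None                               # no synchronized stationary state here
    i = sign_change[0]
    return brentq(F, grid[i], grid[i + 1])

def stationary_psi(r, eps, q, gamma, omega0):
    num = (1.0 - r**2) * omega0
    den = (r**2 + 1.0) * (gamma + 0.5 * eps * (r**2 - 1.0))
    return 0.5 * np.arctan2(num, den)             # one branch of Eq. (22b)

r = stationary_r(eps=4.0, q=1.0, gamma=1.0, omega0=1.0)
if r is not None:
    print(r, stationary_psi(r, 4.0, 1.0, 1.0, 1.0))
```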
### Bimodal Distribution
Now, we will deduce the low-dimensional evolution equations for the macroscopic order parameters corresponding to the original discrete set of equations (1) in the limit \(N\rightarrow\infty\) for the asymmetric bimodal Lorentzian distribution function \(g(\omega)\) (4). Expanding the latter using partial fractions and evaluating the integral in Eq. (12) using an appropriate contour integral, one can obtain the complex order parameter as
\[z(t)=\frac{1}{2}[z_{1}(t)+z_{2}(t)], \tag{23}\]
where
\[z_{1,2}(t)=\alpha^{\star}(\pm\omega_{0}-i\gamma_{1,2},t). \tag{24}\]
Substituting it into Eq. (11) yields two coupled complex ordinary differential equations describing the evolution of the two suborder parameters as
\[\dot{z}_{1}= -(\gamma_{1}+i\omega_{0})z_{1}+\frac{\varepsilon}{4}[(z_{1}+z_{2 }-(z_{1}^{\star}+z_{2}^{\star})z_{1}^{2})\] \[+q((z_{1}+z_{2})z_{1}^{2}-(z_{1}^{\star}+z_{2}^{\star}))], \tag{25}\] \[\dot{z}_{2}= -(\gamma_{2}-i\omega_{0})z_{2}+\frac{\varepsilon}{4}[(z_{1}+z_{2 }-(z_{1}^{\star}+z_{2}^{\star})z_{2}^{2})\] \[+q((z_{1}+z_{2})z_{2}^{2}-(z_{1}^{\star}+z_{2}^{\star}))]. \tag{26}\]
The above evolution equations for the complex order parameters \(z_{1,2}(t)=r_{1,2}(t)e^{i\psi_{1,2}(t)}\) can be expressed in terms of the evolution equations in \(r_{1,2}\) and \(\psi_{1,2}\), as
\[\frac{\mathrm{d}r_{1}}{\mathrm{d}t} =-\gamma_{1}r_{1}+\frac{\varepsilon}{4}\big{[}(1-r_{1}^{2})(r_{1} +r_{2}\cos(\psi_{2}-\psi_{1}))\] \[+q((r_{1}^{2}-1)(r_{1}\cos(2\psi_{1})+r_{2}\cos(\psi_{2}+\psi_{1}) ))\big{]}, \tag{27a}\] \[\frac{\mathrm{d}\psi_{1}}{\mathrm{d}t} =-\omega_{0}+\frac{\varepsilon}{4r_{1}}(r_{1}^{2}+1)\big{[}r_{2} \sin(\psi_{2}-\psi_{1})\] \[+q(r_{1}\sin(2\psi_{1})+r_{2}\sin(\psi_{2}+\psi_{1}))\big{]}. \tag{27b}\]
and
\[\frac{\mathrm{d}r_{2}}{\mathrm{d}t}=-\gamma_{2}r_{2}+\frac{\varepsilon}{4} \big{[}(1-r_{2}^{2})(r_{1}\cos(\psi_{2}-\psi_{1})+r_{2})\]
\[+q((r_{2}^{2}-1)(r_{1}\cos(\psi_{2}+\psi_{1})+r_{2}\cos(2\psi_{2}))) \big{]}, \tag{28a}\] \[\frac{d\psi_{2}}{dt} =\omega_{0}-\frac{\varepsilon}{4r_{2}}(r_{2}^{2}+1)\big{[}r_{1} \sin(\psi_{2}-\psi_{1})\] \[-q(r_{1}\sin(\psi_{2}+\psi_{1})+r_{2}\sin(2\psi_{2}))\big{]}. \tag{28b}\]
The above equations constitute the evolution equations for reduced low-dimensional order parameters corresponding to the dynamics (1) in the limit \(N\rightarrow\infty\) and for the case of the asymmetric bimodal Lorentzian distribution \(g(\omega)\) (4). Now, we discuss the various asymptotic macroscopic dynamical states admitted by Eqs. (27) and (28).
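Equivalently, one may integrate the complex suborder-parameter equations (25) and (26) directly, which avoids the \(1/r_{1,2}\) factors in (27) and (28); the following short sketch (illustrative parameter values only) does so and monitors \(|z|\) from Eq. (23):

```python
# Direct integration of the complex suborder-parameter equations (25)-(26);
# the coupling, widths and detuning below are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

def bimodal_rhs(t, y, eps, q, g1, g2, omega0):
    z1, z2 = y[0] + 1j * y[1], y[2] + 1j * y[3]
    s, sc = z1 + z2, np.conj(z1 + z2)
    dz1 = -(g1 + 1j * omega0) * z1 + 0.25 * eps * ((s - sc * z1**2) + q * (s * z1**2 - sc))
    dz2 = -(g2 - 1j * omega0) * z2 + 0.25 * eps * ((s - sc * z2**2) + q * (s * z2**2 - sc))
    return [dz1.real, dz1.imag, dz2.real, dz2.imag]

sol = solve_ivp(bimodal_rhs, (0.0, 400.0), [0.1, 0.0, 0.1, 0.0],
                args=(2.0, 0.5, 1.0, 1.0, 1.5), rtol=1e-8, dense_output=True)
t = np.linspace(300.0, 400.0, 2000)
y = sol.sol(t)
r = np.abs(0.5 * ((y[0] + 1j * y[1]) + (y[2] + 1j * y[3])))   # |z| from Eq. (23)
print(r.min(), r.max())   # r ~ 0: IC; constant r > 0: SS; oscillating r: SW
```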
Figure 1: Phase diagrams in the \((\omega_{0}/\gamma,\varepsilon/\gamma)\) parameter space for the generalized Kuramoto model (1) with unimodal frequency distribution for different values of the symmetry breaking parameter \(q\). (a) \(q=0.0\), (b) \(q=0.1\), (c) \(q=0.5\), and (d) \(q=1.0\). The line connected by filled squares is the Hopf bifurcation curve \(\varepsilon_{HB}\) (Eq. (19)), solid line corresponds to the pitchfork bifurcation curve \(\varepsilon_{PF}\) (Eq. (20)) dashed line corresponds to the saddle-node bifurcation curve (Eq. (22)), and the dashed dotted line correspond to the homoclinic bifurcation curve obtained using the software XPPAUT. Bistability between the standing wave (SW) state and the synchronized stationary (SS) state is represented by dark shaded region enclosed by the saddle-node bifurcation curve and the homoclinic bifurcation curve. Bistability between the incoherent (IC) and the SS state is represented by light grey shaded region enclosed by the saddle-node bifurcation curve and the pitchfork bifurcation curve.
#### 3.2.1 Incoherent state
The incoherent state is defined by \(r_{1}\)=\(r_{2}\)=0. A linear stability analysis of the fixed point \((z_{1},z_{2})\) = (0, 0) results in the stability condition,
\[\omega_{0}^{2}=\frac{1}{4}(\varepsilon a_{1}-2a_{2}+\sqrt{\varepsilon^{2}q^{2}a _{1}-4\varepsilon a_{3}^{2}+4a_{3}^{2}a_{1}}), \tag{29}\]
where \(a_{1}=\gamma_{1}+\gamma_{2},a_{2}=\gamma_{1}^{2}+\gamma_{2}^{2}\) and \(a_{3}=\gamma_{1}-\gamma_{2}\). This stability curve actually corresponds to the pitchfork bifurcation curve across which the fixed point \((z_{1},z_{2})\) = (0, 0) (incoherent state) loses its stability, leading to the synchronized stationary state. Note that the incoherent state can also lose its stability through a Hopf bifurcation, which results in the stability condition
\[\omega_{0}^{2}= \frac{1}{4}(\varepsilon-2b_{1})^{4}(\varepsilon^{2}(q^{2}-1)-16b_ {2}+4\varepsilon b_{1})^{2}\bigg{[}\varepsilon^{5}(q-1)b_{1}-\varepsilon^{4}(q ^{2}-1)\big{(}(q^{2}-8)b_{3}\] \[+2b_{2}(q^{2}-10)\big{)}-4\varepsilon^{3}(q^{2}-2)\big{(}3(\gamma _{1}^{3}+\gamma_{2}^{3})+13b_{2}b_{1}\big{)}+4\varepsilon^{2}(b_{1})^{2} \big{(}b_{3}(q^{2}-8)\] \[+2b_{2}(3q^{2}-20)\big{)}+16\varepsilon b_{1}^{3}(b_{3}+10b_{2}) -64b_{2}b_{1}^{4}\bigg{]}, \tag{30}\]
Figure 2: Time averaged order parameter \(R\) and the Shinomoto-Kuramoto order parameter \(\xi\) for the generalized Kuramoto model (1) with unimodal frequency distribution as a function of \(\varepsilon/\gamma\) for \(\omega_{0}/\gamma\) = 1. (a) and (d) \(q=0.0\), (b) and (e) \(q=0.5\), and (c) and (f) \(q=1.0\). The forward trace is indicated by the line connected by open circles, while the reverse trace is indicated by the line connected by closed circles. The states indicated by IC, SW and SS correspond to the incoherent, standing wave, and synchronized stationary states, respectively. The bifurcation curves \(\varepsilon_{HB},\varepsilon_{Hc},\varepsilon_{PF}\) and \(\varepsilon_{SN}\) correspond to the Hopf, homoclinic, pitchfork and saddle-node bifurcation curves, respectively.
where \(b_{1}=\gamma_{1}+\gamma_{2}\), \(b_{2}=\gamma_{1}\gamma_{2}\) and \(b_{3}=\gamma_{1}^{2}+\gamma_{2}^{2}\). The above stability curve corresponds to the Hopf bifurcation curve. The boundary of the stable incoherent state is therefore delimited by both the pitchfork and the Hopf bifurcation curves.
#### 3.2.2 Synchronized stationary state
Deducing the solution for the synchronized stationary state for the asymmetric bimodal distribution may not be possible, as \(r_{1}\neq r_{2}\) and \(\psi_{1}\neq\psi_{2}\). However, for the symmetric bimodal distribution, characterized by \(r_{1}=r_{2}\) and \(\psi_{1}=-\psi_{2}\), one can deduce the equations for \(r\) and \(\psi\) as in (22) and obtain the saddle-node bifurcation curves as pointed out in Sec. 3.1.2.
## 4 Numerical Results
In this section, we will proceed to unravel the macroscopic dynamical states admitted by the generalized Kuramoto model (1) with explicit symmetry breaking coupling by constructing appropriate two parameter phase diagrams and classifying the underlying dynamical states from a numerical analysis of
Figure 3: Phase diagrams in the \(q-\varepsilon/\gamma\) parameter space for the generalized Kuramoto model (1) with unimodal frequency distribution for increasing degree of heterogeneity of the frequency distribution. (a) \(\omega_{0}/\gamma=0.4\), (b) \(\omega_{0}/\gamma=0.6\), (c) \(\omega_{0}/\gamma=1.0\), and (d) \(\omega_{0}/\gamma=1.2\). The bifurcation curves and dynamical states are similar to those in Fig. 1.
the governing equations of the original discrete model. Specifically, we will unravel the rich phase diagrams of the generalized Kuramoto model, using both unimodal and bimodal frequency distributions, for distinct values of the symmetry breaking parameter \(q\). The number of oscillators is fixed as N = \(10^{4}\), and we use the standard 4th-order Runge-Kutta integration scheme with integration step size h = 0.01 to solve the generalized Kuramoto model (1). Note that one can break the two-parameter phase space into several segments, and multiple copies of the same code can be run simultaneously for different values of the parameters to generate the data, which can then be concatenated to obtain the complete phase diagrams on a reasonable workstation.
The initial state of the oscillators (\(\theta_{i}\)'s ) is distributed with uniform random values between -\(\pi\) and \(+\pi\). We use the time averaged order parameter \(R\) defined as
\[R=\lim_{t\rightarrow\infty}\frac{1}{\tau}\int_{t}^{t+\tau}r(t)dt, \tag{31}\]
where \(r(t)=|Z|=|N^{-1}\sum_{j=1}^{N}e^{i\theta_{j}}|\). The incoherent state is characterized by \(R=r(t)=0\), while the synchronized stationary state is characterized by \(R=r(t)=\mathrm{const}\). The standing wave state is characterized by an oscillating \(r(t)\). In order
Figure 4: Phase diagrams in the \(\omega_{0}/\gamma-\varepsilon/\gamma\) parameter space for the generalized Kuramoto model (1) with symmetric bimodal frequency distribution for increasing values of the strength of the symmetry breaking coupling. (a) \(q=0.0\), (b) \(q=0.5\), (c) \(q=0.8\), and (d) \(q=1.0\). The bifurcation curves and dynamical states are similar to those in Fig. 1.
Figure 5: Phase diagrams in the \(\omega_{0}/\gamma_{2}-\varepsilon/\gamma_{2}\) parameter space for the generalized Kuramoto model (1) with asymmetric bimodal frequency distribution for increasing the strength of the symmetry breaking coupling and increasing the asymmetry between the bimodal frequency distributions. (a) and (b) \(\gamma_{1}/\gamma_{2}=0.6\), and (c) and (d) \(\gamma_{1}/\gamma_{2}=1.2\). (a) and (c) \(q=0.1\) and (b) and (d) \(q=1\). The bifurcation curves and dynamical states are similar to those in Fig. 1.
Figure 6: Phase diagrams in the \(q-\varepsilon/\gamma_{2}\) parameter space (first row) for \(\omega_{0}/\gamma_{2}=1\) and \(q-\omega_{0}/\gamma_{2}\) (second row) for \(\varepsilon/\gamma_{2}=2.5\) for the generalized Kuramoto model (1) with asymmetric bimodal frequency distribution. (a) and (c) \(\gamma_{1}/\gamma_{2}=0.6\), and (b) and (d) \(\gamma_{1}/\gamma_{2}=1.2\).
to distinguish the synchronized stationary state and the standing wave state more clearly, we use the Shinomoto-Kuramoto order parameter [41; 42]
\[\xi=\overline{|r(t)-R|}, \tag{32}\]
where the overbar denotes the long-time average. The Shinomoto-Kuramoto order parameter takes the value \(\xi=0\) for the incoherent and synchronized stationary states, whereas it takes a nonzero value for the standing wave state.
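As a concrete illustration of how these two diagnostics are evaluated in practice, the sketch below integrates an oscillator ensemble with the numerical settings used in this work (N = \(10^{4}\), fourth-order Runge-Kutta with step size h = 0.01, uniformly random initial phases) and then computes \(R\) and \(\xi\) from the resulting \(r(t)\). The explicit phase-velocity function is only an assumed, illustrative mean-field form that reduces to the Kuramoto coupling for \(q=0\); it is not meant to reproduce Eq. (1) verbatim, and the frequencies are drawn from a unimodal Lorentzian of half-width \(\gamma\) centered at \(\omega_{0}\).

```python
import numpy as np

# Illustrative parameters; the coupling below is an ASSUMED mean-field form,
# not a verbatim transcription of Eq. (1).
N, h = 10_000, 0.01            # oscillators and RK4 step size, as in Sec. 4
eps, q = 2.5, 0.5              # coupling strength and symmetry-breaking parameter
omega0, gamma = 1.0, 1.0       # centre and half-width of the unimodal Lorentzian

rng = np.random.default_rng(1)
omega = omega0 + gamma * np.tan(np.pi * (rng.random(N) - 0.5))  # Lorentzian frequencies
theta = rng.uniform(-np.pi, np.pi, N)                           # uniform initial phases

def velocity(th):
    """Assumed form: omega_i + (eps/2) [Im(Z e^{-i th}) - q Im(Z e^{+i th})]."""
    Z = np.mean(np.exp(1j * th))
    return omega + 0.5 * eps * (np.imag(Z * np.exp(-1j * th))
                                - q * np.imag(Z * np.exp(1j * th)))

def rk4_step(th):
    k1 = velocity(th)
    k2 = velocity(th + 0.5 * h * k1)
    k3 = velocity(th + 0.5 * h * k2)
    k4 = velocity(th + h * k3)
    return th + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

n_steps, n_transient = 30_000, 15_000
r_hist = np.empty(n_steps)
for n in range(n_steps):
    theta = rk4_step(theta)
    r_hist[n] = np.abs(np.mean(np.exp(1j * theta)))  # r(t) = |Z(t)|

r_stat = r_hist[n_transient:]          # discard the transient before averaging
R = r_stat.mean()                      # time-averaged order parameter, Eq. (31)
xi = np.abs(r_stat - R).mean()         # Shinomoto-Kuramoto order parameter, Eq. (32)
print(f"R = {R:.3f}, xi = {xi:.3f}")
```

Sweeping \(\varepsilon/\gamma\) while reusing the final phases of one run as the initial condition of the next yields forward and reverse traces of the kind discussed below.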
#### 4.0.1 Phase diagrams for the unimodal distribution
We have depicted phase diagrams in the \((\omega_{0}/\gamma,\varepsilon/\gamma)\) parameter space for different values of the symmetry breaking parameter \(q\) in Fig. 1 in order to understand the effect of the explicit symmetry breaking interaction on the dynamics of Eq. (1) with unimodal frequency distribution. The phase diagram is demarcated into different dynamical regions using the value of the time averaged order parameter \(R\) and the Shinomoto-Kuramoto order parameter \(\xi\). The incoherent state (IC), the synchronized stationary (SS) state and the standing wave (SW) state, along with the bistable regions (dark and light gray shaded regions), are observed in the phase diagram. The parameter space indicated by the light gray shaded region corresponds to the bistable regime between the incoherent and the synchronized stationary states, while that indicated by the dark gray shaded region corresponds to the bistable regime between the standing wave and the synchronized stationary states.
Only the incoherent and standing wave states are observed in the phase diagram for \(q=0\) (see Fig. 1(a)), which is a typical phase diagram of the Kuramoto model with unimodal frequency distribution. The line connected by the filled squares corresponds to the Hopf bifurcation curve, across which there is a transition from the incoherent state to the standing wave state. Note that a finite value of \(q\) results in the loss of the rotational symmetry of the dynamics of the Kuramoto oscillators. Even a small value of \(q\) manifests the synchronized stationary state in a rather large parameter space at the cost of the standing wave state (see Fig. 1(b) for q=0.1). There is a transition from the incoherent state to the standing wave state via the Hopf bifurcation curve \(\varepsilon_{HB}\) (indicated by the line connected by filled squares) as a function of \(\varepsilon/\gamma\) for \(\omega_{0}/\gamma>0.1\). The standing wave state loses its stability via the homoclinic bifurcation (indicated by the dashed-dotted line) as a function of \(\varepsilon/\gamma\), resulting in the synchronized stationary state. There is also a transition from the incoherent state to the synchronized stationary state for \(\omega_{0}/\gamma\leq 0.1\) as a function of \(\varepsilon/\gamma\) via the pitchfork bifurcation curve \(\varepsilon_{PF}\) indicated by the solid line.
Larger values of the symmetry breaking parameter result in the emergence of bistability between the standing wave and the synchronized stationary states (indicated by the dark shaded region) enclosed by the saddle-node bifurcation curve (indicated by the dashed line) and the homoclinic bifurcation curve (see Fig. 1(c) for q=0.5). There is also a bistable region between the incoherent state and the synchronized stationary state (indicated by the light grey
shaded region) enclosed by the saddle-node bifurcation curve and the pitchfork bifurcation curve. For \(q=1\), both bistable regions are enlarged in the phase diagram (see Fig. 1(d)), which is a typical phase diagram of the Winfree model with the unimodal frequency distribution. The phase diagrams for \(q=0.5\) and \(1.0\) show similar dynamics except for the regime shift and the enhanced bistabilities occupying a larger parameter space. Thus, as the value of \(q\) is increased from zero to unity, one observes a transition from the phase diagram of the Kuramoto model to that of the Winfree model. Note that the Hopf, pitchfork and saddle-node bifurcation curves are the analytical bifurcation curves, Eqs. (19), (20) and (22) respectively, obtained from the low-dimensional evolution equations for the order parameters deduced in Sec. 3.1. The homoclinic bifurcation curve is obtained using the software XPPAUT [43].
The time averaged order parameter \(R\) and the Shinomoto-Kuramoto order parameter \(\xi\) are depicted in Fig. 2 as a function of \(\varepsilon/\gamma\) for different values of the symmetry breaking parameter \(q\) and \(\omega_{0}/\gamma\). The forward trace is indicated by the line connected by open circles, while the backward trace is indicated by the line connected by closed circles. There is a smooth (second order) transition from the incoherent to the standing wave state via the Hopf bifurcation \(\varepsilon_{HB}\) at \(\varepsilon/\gamma=2\) during both forward and reverse traces for \(q=0.0\) and \(\omega_{0}/\gamma=1\), as depicted in Figs. 2(a) and 2(d). In addition to the smooth transition from the incoherent state to the standing wave state via the Hopf bifurcation \(\varepsilon_{HB}\) at \(\varepsilon/\gamma=2\), there is another smooth transition from the standing wave state to the synchronized stationary state via the homoclinic bifurcation \(\varepsilon_{Hc}\) at \(\varepsilon/\gamma=2.94\) in both the forward and reverse traces, as shown in Fig. 2(b) for \(q=0.5\) and \(\omega_{0}/\gamma=1\). The transition from the standing wave state to the synchronized stationary state is also corroborated by the sharp fall of the Shinomoto-Kuramoto order parameter \(\xi\) to the null value (see Fig. 2(e)). In contrast, there is an abrupt (first order) transition from the incoherent state to the synchronized stationary state at \(\varepsilon/\gamma=2\) via the pitchfork bifurcation curve \(\varepsilon_{PF}\) for \(\omega_{0}/\gamma=1\) during the forward trace, whereas there is an abrupt transition from the synchronized stationary state to the incoherent state at \(\varepsilon/\gamma=1.8\) via the saddle-node bifurcation \(\varepsilon_{SN}\) during the reverse trace (see Fig. 2(c) for \(q=1.0\)), elucidating the presence of hysteresis and bistability between the incoherent state and the synchronized stationary state. The Shinomoto-Kuramoto order parameter \(\xi\) takes the null value in the entire range of \(\varepsilon/\gamma\) in Fig. 2(f) for \(q=1.0\), characterizing both the incoherent and the synchronized stationary states.
The observed dynamical states and their transitions are depicted in the \((q,\varepsilon/\gamma)\) parameter space for different \(\omega_{0}/\gamma\) in Fig. 3. The bifurcations mediating the dynamical transitions are similar to those observed in Fig. 1. The phase diagram for \(\omega_{0}/\gamma=0.4\) is shown in Fig. 3(a). There is a transition from the incoherent state to the standing wave state via the Hopf bifurcation curve for smaller values of the symmetry breaking parameter as a function of \(\varepsilon/\gamma\). Larger values of the symmetry breaking parameter favor the synchronized stationary state in the entire range of \(\varepsilon/\gamma\). However, in a narrow range of \(q\in(0.36,0.46]\)
(see Fig. 3(a)), there is a transition from the incoherent state to the standing wave state and then to the synchronized stationary state. There is also a transition from the incoherent state to the synchronized stationary state in the range of \(q\in(0.46,0.6)\). Recall that \(\omega_{0}\) quantifies the degree of detuning of the frequency distribution. An increase in the heterogeneity of the frequency distribution promotes the bistable regions and the incoherent and standing wave states over a larger region of the \((q,\varepsilon/\gamma)\) parameter space. For instance, the phase diagram for \(\omega_{0}/\gamma=0.6\), depicted in Fig. 3(b), elucidates the emergence of the bistable regions and enlarged regions of the incoherent and standing wave states as a function of \(q\), a manifestation of increased heterogeneity. Further increase in \(\omega_{0}/\gamma\) enlarges the bistable regions and the incoherent and standing wave states, as depicted in Figs. 3(c) and 3(d) for \(\omega_{0}/\gamma=1\) and \(1.2\), respectively. These results are in agreement with the phase diagrams in Fig. 1 in the \((\omega_{0}/\gamma,\varepsilon/\gamma)\) parameter space for increasing values of the symmetry breaking parameter. Next, we explore the effect of symmetric and asymmetric bimodal frequency distributions on the phase diagrams.
#### 4.0.2 Phase diagrams for bimodal distribution
In this section, we analyse the phase space dynamics of the generalized Kuramoto model (1) with symmetric bimodal frequency distribution (4) by setting \(\gamma=\gamma_{1}=\gamma_{2}\) for increasing values of the strength of the symmetry breaking coupling. We have depicted the phase diagrams in the \((\omega_{0}/\gamma,\varepsilon/\gamma)\) parameter space for different values of the symmetry breaking parameter \(q\) in Fig. 4. Note that the phase space dynamics of the Kuramoto model (see Fig. 4(a) for \(q=0\)) are similar to those of the Winfree model (see Fig. 4(d) for \(q=1\)) for the symmetric bimodal frequency distribution except for the regime shift. The dynamical states and the bifurcation curves are similar to those in Fig. 1. Increasing the strength of the symmetry breaking coupling favors the synchronized stationary state and the bistable states in a large region of the parameter space, as evident from Figs. 4(b) and 4(c) for \(q=0.5\) and \(q=0.8\), respectively. Note that a large heterogeneity in the frequency distribution favors the incoherent and the standing wave states in a rather large region of the phase diagram for smaller \(q\) and \(\varepsilon\) (see Fig. 4(a) for \(q=0\)). Nevertheless, the synchronized stationary state predominates the phase diagram for larger strengths of the symmetry breaking coupling and \(\varepsilon\), despite the presence of a large heterogeneity in the frequency distribution (see Fig. 4(d) for \(q=1\)).
Next, we analyze the phase space dynamics of the generalized Kuramoto model (1) with asymmetric bimodal frequency distribution (4) by increasing the strength of the symmetry breaking coupling and the degree of asymmetry between the bimodal frequency distributions. We have depicted the phase diagrams in the \((\omega_{0}/\gamma_{2},\varepsilon/\gamma_{2})\) parameter space for different values of the symmetry breaking parameter \(q\) in Fig. 5. Again, the dynamical states and the bifurcation curves are similar to those in Fig. 1. Phase diagram for \(q=0.1\) and \(\gamma_{1}/\gamma_{2}=0.6\) is depicted in Fig. 5(a). For most values of \(\omega_{0}/\gamma_{2}\), there is a transition from the incoherent state to the synchronized stationary state via the
standing wave state and there is no bistability for \(\gamma_{1}<\gamma_{2}\). However, there is a transition from the incoherent state to the synchronized stationary state in a large range of \(\omega_{0}/\gamma\in(0,1)\) and the emergence of bistable states for \(\gamma_{1}>\gamma_{2}\) as depicted in Fig. 5(b) for \(\gamma_{1}/\gamma_{2}=1.2\). It is evident that bistable states emerge even for low values of the symmetry breaking coupling when \(\gamma_{1}>\gamma_{2}\). Note that bistable states emerge even for \(\gamma_{1}<\gamma_{2}\) but for a large strength of the symmetry breaking coupling (see Fig. 5(c) for \(q=1\) and \(\gamma_{1}/\gamma_{2}=0.6\)). The spread of the bistable states increases for \(q=1\) and \(\gamma_{1}/\gamma_{2}=1.2\) as illustrated in Fig. 5(d). Thus, larger \(\gamma_{1}/\gamma_{2}\) and \(q\) favor the emergence of the bistable states.
Phase diagrams in the \((q,\varepsilon/\gamma_{2})\) parameter space are depicted in Figs. 6(a) and 6(b) for \(\gamma_{1}/\gamma_{2}=0.6\) and \(1.2\), respectively, and for \(\omega_{0}/\gamma_{2}=1\). The dynamical states and the bifurcation curves are similar to those in Fig. 1. There is a transition from the incoherent state to the synchronized stationary state via the standing wave state for small values of \(q\) (see Fig. 6(a)), similar to that in Fig. 5(a). However, for larger values of \(q\), multistability between the standing wave and the synchronized stationary state emerges (dark shaded region in the inset) in addition to the above dynamical transition. For \(\gamma_{1}>\gamma_{2}\), there is a transition from the incoherent state to the standing wave state, along with bistability between them, in a rather narrow range of \(q\in(0,0.4)\) as a function of \(\varepsilon/\gamma_{2}\), as shown in the inset of Fig. 6(b). For \(q>0.4\), there is a transition from the incoherent state to the synchronized stationary state with the onset of bistability (light grey shaded region) between them. Phase diagrams in the \((q,\omega_{0}/\gamma_{2})\) parameter space are depicted in Figs. 6(c) and 6(d) for \(\gamma_{1}/\gamma_{2}=0.6\) and \(1.2\), respectively, for \(\varepsilon/\gamma_{2}=2.5\). There is a transition from the synchronized stationary state to the standing wave state as a function of \(\omega_{0}/\gamma_{2}\) for \(\gamma_{1}<\gamma_{2}\) (see Fig. 6(c)) via the homoclinic bifurcation curve. Both bistable states emerge when \(\gamma_{1}>\gamma_{2}\), as shown in Fig. 6(d) for \(\gamma_{1}/\gamma_{2}=1.2\).
## 5 Conclusions
We have considered a nontrivial generalization of the paradigmatic Kuramoto model by using an additional coupling term that explicitly breaks the rotational symmetry of the Kuramoto model. The strength of the symmetry breaking coupling is found to play a key role in the manifestation of the dynamical states and their transitions, along with the onset of bistability among the observed dynamical states in the phase diagram. A typical phase diagram of the Kuramoto model is transformed into a typical phase diagram of the Winfree model when the strength of the symmetry breaking coupling is unity, thereby bridging the dynamics of the Kuramoto and Winfree models. Large values of the strength of the symmetry breaking coupling favor the manifestation of the bistable regions and the synchronized stationary state in a large region of the phase diagram. The dynamical transitions in the bistable region are characterized by an abrupt (first-order) transition in both the forward and reverse traces. Phase diagrams of both the Kuramoto
and Winfree models resemble each other for the symmetric bimodal frequency distribution except for the regime shifts and the degree of the spread of the dynamical states and bistable regions. Nevertheless, for the asymmetric bimodal frequency distribution one cannot observe the bistable states for low values of the strength of the symmetry breaking coupling when \(\gamma_{1}<\gamma_{2}\). However, bistable states do emerge even for \(\gamma_{1}<\gamma_{2}\) once the strength of the symmetry breaking coupling is large. Larger \(\gamma_{1}/\gamma_{2}\) and larger \(q\) favor the emergence of the bistable states in the case of the asymmetric bimodal frequency distribution. A large \(\omega_{0}\), and consequently a large degree of heterogeneity, facilitates the spread of the incoherent and standing wave states in the phase diagram for a low strength of the symmetry breaking coupling. However, a large \(q\) promotes the spread of the synchronized stationary state and the bistable regions in the phase diagram despite the degree of heterogeneity in the frequency distribution. We have deduced the low-dimensional evolution equations for the complex order parameters using the Ott-Antonsen ansatz for both unimodal and bimodal frequency distributions. We have also deduced the Hopf, pitchfork and saddle-node bifurcation curves from the low-dimensional evolution equations for the complex order parameters. The homoclinic bifurcation curve is obtained from the XPPAUT software. Simulation results obtained from the original discrete set of equations agree well with the analytical bifurcation curves. We believe that our results shed more light on, and enhance our current understanding of, the effects of symmetry breaking coupling in phase models, and that they bridge the dynamics of two distinctly different phase models which are otherwise out of reach.
## 6 Acknowledgements
The work of V.K.C. is supported by the DST-CRG Project under Grant No. CRG/2020/004353 and DST, New Delhi for computational facilities under the DST-FIST program (SR/FST/PS-1/2020/135) to the Department of Physics. M.M. thanks the Department of Science and Technology, Government of India, for providing financial support through an INSPIRE Fellowship No. DST/INSPIRE Fellowship/2019/IF190871. S.G. acknowledges support from the Science and Engineering Research Board (SERB), India under SERB-TARE scheme Grant No. TAR/2018/000023 and SERB-MATRICS scheme Grant No. MTR/2019/000560. He also thanks ICTP - The Abdus Salam International Centre for Theoretical Physics, Trieste, Italy for support under its Regular Associateship scheme. DVS is supported by the DST-SERB-CRG Project under Grant No. CRG/2021/000816.
**Data Availability Statement**: No data are associated with the manuscript. The data sets of the current study are available from the corresponding author on reasonable request. |
2309.00046 | A formation mechanism for "Wrong Way" Radio Relics | Radio Relics are typically found to be arc-like regions of synchrotron
emission in the outskirts of merging galaxy clusters, bowing out from the
cluster center. In most cases they show synchrotron spectra that steepen
towards the cluster center, indicating that they are caused by relativistic
electrons being accelerated at outwards traveling merger shocks. A number of
radio relics break with this ideal picture and show morphologies that are bent
the opposite way and show spectral index distributions which do not follow
expectations from the ideal picture. We propose that these `Wrong Way' Relics
can form when an outwards travelling shock wave is bent inwards by an
in-falling galaxy cluster or group. We test this in an ultra-high resolution
zoom-in simulation of a massive galaxy cluster with an on-the-fly spectral
Cosmic Ray model. This allows us to study not only the synchrotron emission at
colliding shocks, but also their synchrotron spectra to address the open
question of relics with strongly varying spectral indices over the relic
surface. | Ludwig M. Böss, Ulrich P. Steinwandel, Klaus Dolag | 2023-08-31T18:00:01Z | http://arxiv.org/abs/2309.00046v2 | # A formation mechanism for 'Wrong Way' Radio Relics
###### Abstract
Radio Relics are typically found to be arc-like regions of synchrotron emission in the outskirts of merging galaxy clusters, bowing out from the cluster center. In most cases they show synchrotron spectra that steepen towards the cluster center, indicating that they are caused by relativistic electrons being accelerated at outwards traveling merger shocks. A number of radio relics break with this ideal picture and show morphologies that are bent the opposite way and show spectral index distributions which do not follow expectations from the ideal picture. We propose that these 'Wrong Way' Relics can form when an outwards travelling shock wave is bent inwards by an in-falling galaxy cluster or group. We test this in an ultra-high resolution zoom-in simulation of a massive galaxy cluster with an on-the-fly spectral Cosmic Ray model. This allows us to study not only the synchrotron emission at colliding shocks, but also their synchrotron spectra to address the open question of relics with strongly varying spectral indices over the relic surface.
Extragalactic radio sources - Cosmic rays - Galaxy clusters
Ludwig M. Böss, Ulrich P. Steinwandel, Klaus Dolag
## 1 Introduction
Radio Relics are roughly Mpc-size regions of radio emission in galaxy clusters, typically with an arc-like morphology, which shows strong polarisation (typically \(\sim 30\%\) at 1.4 GHz, e.g. Rajpurohit et al., 2022), steep integrated radio spectra (\(L_{\nu}\propto\nu^{\alpha}\), where \(\alpha\sim-1.1\)) and a steepening of these spectra towards the cluster center (see van Weeren et al., 2019, for a recent review). They are typically associated with ongoing mergers between massive galaxy clusters (see e.g. Ensslin et al., 1998; Roettiger et al., 1999; Ensslin & Bruggen, 2002; Bruggen et al., 2012; Brunetti and Jones, 2014). These mergers dissipate a large fraction of their potential energy in the form of shocks which heat the intra-cluster medium (ICM) to \(\sim 10^{8}\) K. This can be observed as thermal X-ray emission of the fully ionized plasma (e.g. Bohringer and Werner, 2010, for a review). A smaller part of the shock energy is dissipated into the acceleration of Cosmic Ray (CR) electrons and protons in a process called "diffusive shock acceleration" (DSA, see e.g., Bell, 1978, 1983; Blandford and Ostriker, 1978; Drury, 1983, the latter for a review). In this process (supra-)thermal particles cross a shock front and are scattered by MHD turbulence from the downstream of the shock back into the upstream. They gain energy at every crossing until their gyro-radii are large enough to escape from the acceleration region or they are advected away in the downstream of the shock. Hybrid and PIC plasma simulations of shock fronts show that this process can efficiently accelerate protons in low-\(\beta\) supernova shocks (e.g. Caprioli and Spitkovsky, 2014; Caprioli et al., 2018; Caprioli et al., 2020; Pohl et al., 2020, the latter for a review) and high-\(\beta\) structure formation shocks (e.g., Ryu et al., 2019; Ha et al., 2023). For electrons it is found that this process is harder to trigger, as their gyro-radii are smaller at equivalent magnetic field strength and with that it is more difficult to start a cyclical DSA process and with that
efficient acceleration of thermal electrons to the GeV energies expected from synchrotron emission by radio relics. They require an efficient pre-acceleration process such as (stochastic) shock-drift acceleration (SDA), or a seed population stabilized against cooling, to efficiently partake in a DSA process (see e.g., Guo et al., 2014; Park et al., 2015; Kang et al., 2019; Tran and Sironi, 2020; Kobzar et al., 2021; Amano and Hoshino, 2022; Tran et al., 2023). On top of that, the acceleration efficiency is found to be dependent on the shock obliquity, the angle between the shock propagation direction and the magnetic field vector. Typically it is found that protons are more efficiently accelerated at quasi-parallel shocks (see e.g., Kang and Ryu, 2013; Caprioli and Spitkovsky, 2014; Ryu et al., 2019), while electrons are more efficiently accelerated at quasi-perpendicular shocks (e.g., Guo et al., 2014; Kang et al., 2019; Ha et al., 2021; Amano and Hoshino, 2022). The results from small-scale plasma simulations have been adopted in cosmological simulations to model emission originating from structure formation shocks (see e.g., Hoeft et al., 2008; Pfrommer, 2008; Pfrommer et al., 2007, 2008, 2017; Skillman et al., 2013; Vazza et al., 2012, 2016; Wittor et al., 2017; Banfi et al., 2020; Wittor, 2021; Ha et al., 2023). However, the efficiencies found in plasma simulations are not sufficient to explain the high synchrotron brightness of radio relics (see Botteon et al., 2020, for a recent discussion).
Recent observations of radio relics show not only bright arc-like structures, but also more complex morphologies such as S-shapes (e.g., de Gasperin et al., 2022), flat relics with varying thickness (e.g., van Weeren et al., 2016; Rajpurohit et al., 2020, 2020) and filamentary structures (e.g. Trasatti et al., 2015; Rajpurohit et al., 2022; Chibueze et al., 2023). There is also a small set of radio relics that show a curvature which points in the "wrong" direction. Instead of the typical outward-bent (convex) shape of the relic, away from the cluster center, they show an inward-bent (concave) morphology. Examples of these relics can be found in the _Ant Cluster_ (PSZ2 G145.92-12.53) (Botteon et al., 2021), PSZ2 G186.99+38.65 (Botteon et al., 2022), _Source D1_ in Abell 3266 (Riseley et al., 2022), SPT-CL J2023-5535 (HyeongHan et al., 2020), Abell 168 (Dwarakanath et al., 2018) and the southern relic in CIZA J2242.8+5301 (e.g., van Weeren et al., 2010; Stroe et al., 2013, 2016; Hoang et al., 2017; Di Gennaro et al., 2018). They can show steep synchrotron spectra which would indicate Mach numbers of the underlying shocks that are in disagreement with the critical Mach numbers found to be required to efficiently accelerate CR electrons (e.g., Kang et al., 2019), and some of their spectra are better fit by broken power-laws rather than a single one, hinting towards overlapping (re-)acceleration processes, as shown in Riseley et al. (2022) and initially also in Owen et al. (2014); Trasatti et al. (2015); Parekh et al. (2022). However, follow-up observations indicate that there is no spectral break in the majority of these relics after all (Benson et al., 2017; Rajpurohit et al., 2022), making the source _D1_ the only relic so far that shows this peculiar spectral behaviour. The southern relic of CIZA J2242.8+5301 shows additional strong variations of the synchrotron slope, which makes it hard to explain in the context of DSA at a single shock front (see discussion in Di Gennaro et al., 2018). Cosmological simulations of galaxy clusters show that mergers between clusters are not isolated events and that merger shocks can deform as they expand into the highly complex and turbulent ICM (e.g. Hoeft et al., 2008; Skillman et al., 2013; Wittor et al., 2017; Nuza et al., 2017). In this work we propose that a possible formation mechanism for these 'Wrong Way' relics (as they are referred to in Riseley et al., 2022) is the collision of an outwards travelling shock front with an in-falling substructure. We investigate this scenario in the sibling simulation of an ultra-high resolution MHD simulation of a \(M_{\rm vir}\approx 1.3\times 10^{15}M_{\odot}\) galaxy cluster introduced in Steinwandel et al. (2023), where we attached a population of CR protons and electrons to every resolution element of our simulation. This effectively turns every particle into a tracer particle for CRs, while also accounting for feedback by the CR component on the thermal gas. We resolve these populations with a spectral resolution of 12 bins for protons, and 48 bins for electrons over a range of 6 orders of magnitude in momentum. The distribution function of the CRs is updated on-the-fly at every timestep of the simulation according to the method presented in Boss et al. (2023). This allows us to study CR electron injection at colliding shocks and the subsequent cooling of the relativistic electron population. To the best of our knowledge this simulation is the first of its kind. This work is structured as follows: In Sec.
2 we describe the simulation code, CR model and initial conditions used in this work. In Sec. 3 we study the 'Wrong Way' Relic (WWR) found in the simulation and its origin. Sec. 4 contains a discussion of our findings and a comparison to observed systems. Finally Sec. 5 contains our conclusion and outlook to future work.
## 2 Methods
The simulation used in this work was carried out with the Tree-SPMHD code OpenGadget3. OpenGadget3 is a derivative of Gadget2 (Springel, 2005) with improvements to the hydro and gravity solvers as well as additional physics modules. The SPH solver is updated as described in Beck et al. (2016) to include higher order kernels and their bias correction (see Dehnen and Aly, 2012), and artificial viscosity as well as physical conduction to improve the mixing behavior and shock capturing of SPH (e.g. Price, 2012; Hopkins, 2013; Hu et al., 2014). Magnetohydrodynamics (MHD) has been implemented by Dolag and Stasyszyn (2009), with updates to include non-ideal MHD in the form of constant (physical) diffusion and dissipation presented in Bonafede et al. (2011). Conduction is modelled via a conjugate gradient solver (Petkova and Springel, 2009; Arth et al., 2014; Steinwandel et al., 2020), with the conduction coefficient suppressed to 5 per cent of the Spitzer value. We adopt a Wendland C4 kernel (Wendland, 1995, 2004) with 200 neighbors and bias correction as suggested by Dehnen and Aly (2012).
We employ the on-the-fly spectral CR model Crescendo introduced in Boss et al. (2023) to model the time evolution of CR protons and electrons in every resolution element of our simulation. The time evolution of distributions of CRs in the absence of CR transport, diffusion in momentum space and catastrophic losses can be described by
\[\frac{Df(p,\mathbf{x},t)}{Dt} =\left(\frac{1}{3}\nabla\cdot\mathbf{u}\right)p\frac{\partial f( p,\mathbf{x},t)}{\partial p} \tag{1}\] \[+\frac{1}{p^{2}}\frac{\partial}{\partial p}\left(p^{2}\sum_{l}b_ {l}f(p,\mathbf{x},t)\right)\] (2) \[+j(\mathbf{x},p,t), \tag{3}\]
where we used \(\frac{Df}{Dt}=\frac{\partial f}{\partial t}+\mathbf{u}\cdot\nabla f\) due to OpenGadget3 being a Lagrangian code. The right side of Eq. 1 describes changes due to adiabatic compression or expansion of the gas the CRs are confined in, Eq. 2 describes energy losses and Eq. 3 is the source term. We represent \(f(p,\mathbf{x},t)\) as piece-wise powerlaws in momentum space with 2 bins/dex for protons and 8 bins/dex for electrons in the dimensionless momentum range \(\hat{p}\equiv\frac{p_{i}}{m_{i}c}\in[0.1,10^{5}]\), where \(p_{i}\) and \(m_{i}\) refer to the momentum and mass for protons and electrons, respectively. The distribution function is updated at every timestep following the two-moment approach as introduced in Miniati (2001) by computing CR number and energy changes per bin. Adiabatic changes are accounted for at every timestep via the density change within a SPH particle. We model energy losses of electrons due to synchrotron emission and inverse Compton scattering off CMB photons. As a source term we employ the DSA parametrisation by Kang and Ryu (2013) for the dependency on sonic Mach number (\(\eta(\mathcal{M}_{s})\)), which allows for DSA at shocks beyond a critical Mach number \(\mathcal{M}_{s}>2\) and saturates at a maximum efficiency of \(\eta_{\rm max}\approx 0.2\). In addition to that we use the model by Pais et al. (2018) for the dependency of CR acceleration efficiency on shock obliquity (\(\eta(\theta_{B})\)). Ultimately we divert a fraction
\[\eta_{\rm tot}=\eta(\mathcal{M}_{s})\times\eta(\theta_{B}) \tag{4}\]
of the entropy change over the shock into the CR component. We detect the shock properties on-the-fly in the simulation with the shock finder introduced by Beck et al. (2016) with improvements to compute the shock obliquity as the angle between the pressure gradient within the kernel (which we treat as the shock normal \(\hat{\mathbf{n}}\)) and the magnetic field vector upstream of the shock \(\mathbf{B}_{u}\). The slope of the injected CR spectrum follows linear DSA theory and we use a fixed electron to proton injection ratio of \(K_{e/p}=0.01\). The CR component exerts feedback on the thermal gas by solving the pressure integral
\[P_{\rm CR,c}=\frac{4\pi\,c}{3}\;a^{4}\int\limits_{p_{\rm min}}^{p_{\rm cut}}dp \;p^{3}f(p) \tag{5}\]
between the minimum momentum \(p_{\rm min}\) represented by the CR population and the cutoff of the distribution function \(p_{\rm cut}\). We start the CR injection at \(z=4\) to avoid overly restrictive time-step constraints caused by the very efficient energy losses of high-momentum CR electrons.
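To make the feedback term concrete, the sketch below evaluates Eq. (5) analytically, bin by bin, for a piece-wise power-law spectrum of the kind described above. It is a stand-alone toy example in code units: the bin edges, slopes and normalizations are placeholder values, and the unit system, comoving factors and normalization conventions of the actual Crescendo implementation are not reproduced here.

```python
import numpy as np

def cr_pressure(p_edges, f_norm, slopes, a=1.0, c=1.0):
    """CR pressure of a piece-wise power-law spectrum, cf. Eq. (5).

    p_edges : momentum bin boundaries (length n_bins + 1); the last boundary
              plays the role of the spectral cutoff p_cut.
    f_norm  : amplitude f_i of f(p) at the lower edge p_i of each bin.
    slopes  : slope q_i of each bin, with f(p) = f_i (p / p_i)**(-q_i).
    """
    P = 0.0
    for i, q in enumerate(slopes):
        p_lo, p_hi = p_edges[i], p_edges[i + 1]
        if np.isclose(q, 4.0):                  # the p^{3-q} integral becomes logarithmic
            P += f_norm[i] * p_lo**4 * np.log(p_hi / p_lo)
        else:
            P += f_norm[i] * p_lo**q * (p_hi**(4.0 - q) - p_lo**(4.0 - q)) / (4.0 - q)
    return 4.0 * np.pi * c / 3.0 * a**4 * P

# toy spectrum: 12 bins (2 bins/dex) spanning the dimensionless range 0.1 ... 1e5
p_edges = np.logspace(-1, 5, 13)
slopes = np.full(12, 4.5)                              # single injection-like slope
f_norm = 1e-3 * (p_edges[:-1] / p_edges[0])**(-4.5)    # continuous across bin edges
print(cr_pressure(p_edges, f_norm, slopes))
```

Analogous bin-wise integrals give the CR number and energy per bin that the two-moment update tracks.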
Synchrotron emission is calculated directly from the evolved electron distribution function (see Appendix A for details).
We use a zoomed-in initial condition of a massive galaxy cluster with a virial mass of \(M_{\rm vir}\approx 1.3\times 10^{15}\;M_{\odot}\) from the sample presented in Bonafede et al. (2011). The cluster is up-sampled to 250x base resolution, which corresponds to a mass resolution of \(M_{\rm gas}\approx 8.7\times 10^{5}M_{\odot}\) and \(M_{\rm DM}\approx 4.7\times 10^{6}M_{\odot}\) for gas and dark matter particles, respectively. We reach a maximum resolution for a gas particle of \(h_{\rm sml,min}\approx 1\) kpc with a gravitational softening of \(\epsilon=0.48\,h^{-1}\) ckpc. The cluster was selected from a lower-resolution dark matter-only simulation of a Gpc volume, which is large enough to provide a large sample of systems above a few \(10^{15}M_{\odot}\). The parent simulation used a WMAP7 cosmology with \(\Omega_{0}=0.24\), \(\Omega_{\Lambda}=0.76\), \(\Omega_{\rm baryon}=0.04\), \(h=0.72\) and \(\sigma_{8}=0.8\), which we also adopt for the present simulation. We start the simulation at redshift \(z=310\) and seed a constant magnetic field in the x-direction with \(B_{0}=10^{-14}\) G (see Steinwandel et al., 2021, for a study of the impact of the choice of \(B_{0}\)). The initial conditions of this cluster at this resolution have been used to study the interaction between internal and accretion shocks in Zhang et al. (2020, 2020), and its magnetic field has been studied in Steinwandel et al. (2023).
## 3 Results
### Merger Geometry
The 'Wrong Way' relic in our simulation originates from a triple-merger at \(z\sim 0.35-0.2\). We show the schematic of the merger geometry in Fig. 1. A high-velocity merger with a 1:10 mass ratio between impactor (\(M_{2}\approx 10^{14}M_{\odot}\)) and target (\(M_{1}\approx 10^{15}M_{\odot}\)) with a large impact parameter of \(b\approx 500\) kpc drives two shock waves. These shocks follow the canonical picture (e.g. Fig. 7 in van Weeren et al., 2019) of the lighter merging partner (\(M_{2}\)) driving a strong bow-shock (\(S_{2}\) in our schematic), while the heavier merging partner (\(M_{1}\)) drives a weaker counter shock (\(S_{1}\)) in the in-fall direction of the lighter partner. This counter shock is subsequently impacted by a third merger partner (\(M_{3}\)), a group of galaxies with a total mass of \(M_{3}\approx 2\times 10^{13}M_{\odot}\), which ultimately passes through the shock surface and falls into the larger merger partner (\(M_{1}\)) in a low-impact parameter merger with \(b\approx 35\) kpc. The impact of the group deforms the weaker counter shock (\(S_{1}\)) first from a convex shape at \(z=0.32\) to a concave shape at \(z=0.29\) and subsequently to a v-like shape pointing towards the cluster center at \(z=0.27\), which also leads to a complex superposition of the different parts of the original shock surface with different Mach numbers as well as differently aged cosmic ray electron populations.
Due to our system being a single, isolated cluster we cannot make any predictions for the minimum critical mass of an in-falling sub-structure that is able to deform such a shock front, or the statistical frequency of such an event. We leave this question for future work with cosmological boxes, to allow for a statistical analysis.
### The Simulated 'Wrong Way' Radio Relic
Fig. 2 from top to bottom shows the time evolution of the counter shock \(S_{1}\) in the \(xz\)-plane of the simulation and its evolution through morphologies matching various 'Wrong Way' relics. The bottom row shows the same relic as the row above in the \(yz\)-plane. From left to right we show the X-ray surface brightness, CR electron energy contained in the part of the potentially synchrotron bright population with \(E>1\) GeV, the synchrotron surface brightness at 1.4 GHz and the slope of the synchrotron spectrum obtained by fitting a powerlaw to the surface brightness at 144 MHz and 1.4 GHz. These images are obtained by mapping the SPH data to a 2D grid following the algorithm described in Dolag et al. (2005) with a pixel-size of \(\Delta_{\rm pix}\approx 1\) kpc. This corresponds to a resolution of \(\theta_{\rm pix}\approx 0.24\)" at \(z=0.27\) and is thus significantly below current observational limits. Alongside this, we show the distribution of sonic Mach number \(\mathcal{M}_{s}\) of the different panels of Fig. 2 in Fig. 3 and the synchrotron spectra in Fig. 4. In Fig. 5 we show the histograms of pixels in Fig. 2 as a function of synchrotron intensity and spectral slope.
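The per-pixel slope shown in the right-hand panels of Fig. 2 amounts to a two-point power-law fit between the 144 MHz and 1.4 GHz maps. A minimal sketch of such a pixel-wise fit is given below; it assumes the two surface-brightness maps are already gridded to the same pixel scale, and the variable names are illustrative only.

```python
import numpy as np

def spectral_index_map(I_low, I_high, nu_low=144e6, nu_high=1.4e9):
    """Pixel-wise two-point spectral index alpha, with I_nu proportional to nu^alpha.

    I_low, I_high : surface-brightness maps at nu_low and nu_high (same shape).
    Pixels that are empty in either map are returned as NaN.
    """
    alpha = np.full(I_low.shape, np.nan)
    ok = (I_low > 0) & (I_high > 0)
    alpha[ok] = np.log(I_high[ok] / I_low[ok]) / np.log(nu_high / nu_low)
    return alpha
```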
At \(z=0.32\), in the top row of Fig. 2, we see the acceleration of CR electrons at the counter shock of the main merger event. Fig. 3 shows that only a fraction of the shocked particles are above the critical Mach number \(\mathcal{M}_{s,\rm crit}=2\) and with that can accelerate CRs. We can readily identify the part of the shock surface that accelerates CRs in the center of the images, as it is the most synchrotron bright part and shows a relatively flat synchrotron spectrum. These CRs are accelerated at the contact surface between outwards traveling shock and the atmosphere of the in-falling halo. The steeper parts of the spectrum in the upper right corner of the images indicate that these electrons have been accelerated at earlier times of the shock propagation and have been freely cooling since. This is also evident in the synchrotron spectrum in Fig. 4, which shows a strong break above \(\nu\sim 200\) MHz. The counter shock is initially not very synchrotron bright, akin to the counter-shock in Abell 3667 (e.g. de Gasperin et al., 2022) or CIZA J2242.8+5301 (Di Gennaro et al., 2018).
At \(z=0.29\) the collision between outwards traveling counter shock and the bow-shock of the in-falling group (\(M_{3}\)) increases the sonic Mach number and with that the acceleration efficiency of the shock (see e.g. Kang
Figure 1: A simplified schematic of the merger geometry that produces the ‘Wrong Way’ relic. The initial merger between \(M_{1}\) and \(M_{2}\) drives two shocks, the weaker of which is subsequently impacted by a third substructure \(M_{3}\). This impact deforms parts of the outwards traveling shock \(S_{1}\) and produces the WWR \(S_{3}\).
Figure 2: From left to right: X-ray surface brightness, CR electron energy of electrons with \(E>1\) GeV, synchrotron surface brightness at 1.4 GHz and the slope of the synchrotron spectrum between 144 MHz and 1.4 GHz. The upper three rows show the time evolution of the in-falling group in the \(xz\)-plane of the simulation, the lowest row shows the same relic at \(z=0.27\) in the \(yz\)-plane. To obtain the images the SPH data is mapped to a grid with a resolution of \(\Delta_{\rm pix}\approx 1\) kpc, which corresponds to a resolution of \(\theta_{\rm pix}\approx 0.24"\) at \(z=0.27\)
2021; Inchingolo et al., 2022, for studies of multi-shock scenarios). Fig. 3 shows that while the majority of the shocked particles remain sub-critical, the shock develops a second Mach number peak around Mach 3. This significantly increases the synchrotron surface brightness at the contact surface of the shocks, flattens the synchrotron spectrum to almost the theoretical limit of DSA and erases the spectral break. A spectral slope of \(\alpha_{100\,\mathrm{MHz}}^{1\,\mathrm{GHz}}=-0.66\) indicates \(\mathcal{M}_{s}\approx 3.6\), in good agreement with the underlying Mach number distribution. This injection domination can also be seen in Fig. 5 where the images at \(z=0.29\) show a strong bump in synchrotron slopes between \(|\alpha|\approx 0.7-0.55\) and a small bump in synchrotron intensity around \(I_{\nu}\approx 10^{-16.5}-10^{-16}\) erg s\({}^{-1}\) Hz\({}^{-1}\) cm\({}^{-2}\).
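The quoted correspondence between the injection slope and the sonic Mach number is the standard linear-DSA relation for a \(\gamma_{\rm ad}=5/3\) gas, in which the shock compression ratio fixes the electron spectral index and hence \(\alpha_{\rm inj}=-(\mathcal{M}_{s}^{2}+3)/[2(\mathcal{M}_{s}^{2}-1)]\). The short sketch below evaluates this relation and its inverse; it is a textbook relation used for reference here, not code taken from the simulation itself.

```python
import numpy as np

def injection_index(mach):
    """Linear-DSA injection spectral index alpha (I_nu ~ nu^alpha) for gamma_ad = 5/3."""
    return -(mach**2 + 3.0) / (2.0 * (mach**2 - 1.0))

def mach_from_index(alpha):
    """Inverse relation; valid for alpha < -0.5 (the strong-shock limit)."""
    return np.sqrt((2.0 * alpha - 3.0) / (2.0 * alpha + 1.0))

print(injection_index(3.6), mach_from_index(-0.66))  # ~ -0.67 and ~ 3.7
```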
The in-falling sub-structure deforms the outwards traveling shock towards a relic pointing "the wrong way", similar to the source _D1_ observed by Riseley et al. (2022). In the case of our relic the flat spectrum part is further extended, which we attribute to the shock being further bent inwards, compared to _D1_.
In Fig. 7 we rotate the image into the merger plane and can see how the aged, steep-spectrum population disappears behind the newly injected electrons at the inward bent relic. Comparing to the same rotation at \(z=0.32\) indicates that the best morphological fit to _D1_ would lie between \(z=0.32\) and \(z=0.29\), however there is no output available at this time.
The collision between shock waves is also visible in our X-ray image (left panel, second row in Fig. 2), which matches the detection of a shock in X-ray by Sanders et al. (2022). The in-fall scenario proposed here also produces a radio relic-like structure within \(r_{500}\), which is unlikely in the classical picture of radio relics (e.g., Vazza et al., 2012).
At \(z=0.27\), as the in-falling halo passes through the outwards traveling shock its own bow-shock collides with the older shock, causing the relic to deform further into a v-shaped morphology, such as in the counter shock to the _sausage relic_(e.g. Stroe et al., 2013; Di Gennaro et al., 2018), or the relic in Abell 2256 (Rajpurohit et al., 2022). The Mach number distribution over the shock surface has become smoother at this point, with the bulk of the shock being sub-critical, however the total number of particles with \(\mathcal{M}_{s}>2\) has increased compared to the relic at \(z=0.29\). This leads to efficient acceleration at a part of the shock surface, visible in increased synchrotron surface brightness and flatter synchrotron spectra.
In general the relic is however cooling and adiabatic compression dominated. This becomes visible in Fig. 5 where synchrotron intensity is increased in the
Figure 4: Time evolution of the synchrotron spectrum. Colors correspond to the times in Fig. 2 and 3. Dashed lines and labels show the spectral slope in the indicated frequency ranges.
Figure 3: Histograms of the sonic Mach number \(\mathcal{M}_{s}\) for the three output times shown in Fig. 2. The colors correspond to the different times and the dotted line indicates the critical Mach number beyond which CR (re-)acceleration can occur in our model.
\(10^{-17.5}-10^{-16.5}\) erg s\({}^{-1}\) Hz\({}^{-1}\) cm\({}^{-2}\) range. However, spectra are generally steeper, indicating that the increase in intensity is partly by injection and partly by adiabatic compression of an already cooling electron population.
A morphological best match for the relic in Abell 2256 is expected to lie between \(z=0.29-0.27\) shown here, however the simulation output for this time is not available. For the lower panels of Fig. 2 we rotate the image by \(90^{\circ}\), as this projection more closely resembles the observations of Di Gennaro et al. (2018). The collision of two shocks as shown here leads to a superposition of multiple DSA-like events due to strong variations of the Mach number over the shock surface. This leads to strong variations of synchrotron surface brightness and spectral shape between the regions of the shock surface where efficient (re-) acceleration can take place and the regions that are dominated by cooling and adiabatic compression.
These variations can also be seen in the integrated spectrum in Fig. 4, where the lower frequency end of the spectrum is strongly injection dominated and the high frequency end of the spectrum shows a significant steepening beyond \(\nu\sim 1\) GHz in the cooling dominated part. This result is valid for the two lower panels of Fig. 2, as we are dealing with integrated quantities. We have confirmed this by comparing the integrated spectrum obtained based on the data directly from the SPH particles as well as integrated maps under three different projections. We find no qualitative difference between these approaches.
## 4 Discussion
To discuss our findings we will compare the morphologies in chronological order to similar observed systems. Although the number of observed WWRs is still small, the recent discoveries due to ASKAP and MeerKAT indicate that, with increased sensitivity, a number of new WWRs can be detected over time.
### Abell 520
Before the onset of the WWR morphology, our cluster undergoes an internal merger with an in-falling group in the cluster periphery. This group falls into the cluster on a trajectory similar to that of the cluster driving the current shock waves and is therefore in the path of the weaker counter-shock of the ongoing merger. A similar setup is observed in Abell 520 by Hoang et al. (2019). They detect a shock with Mach number \(\mathcal{M}_{\rm SW}=2.6^{+0.3}_{-0.2}\) propagating in the SW direction, with a weaker counter-shock moving with \(\mathcal{M}_{\rm NE}=2.1\pm 0.2\) in the NE direction. Along the NE diagonal, Chandra observations by Andrade-Santos et al. (2017) indicate in-falling matter along a path similar to that of the ongoing merger. This shows that the geometric setup is possible, albeit rare, and Abell 520 could host a WWR in the future.
### Abell 3266
At \(z=0.32-0.29\) our WWR resembles the one observed by Riseley et al. (2022), at a distance of \(\sim 1\) Mpc from the center of Abell 3266. Their relic is very faint and shows a very steep spectral index of \(\alpha=-2.76\) in the part that is observable in frequencies above 1043
Figure 5: Histograms of the synchrotron surface brightness _(left)_ and spectral slope _(right)_ obtained from the images in Fig. 2. As before colors correspond to the times in Fig. 2 and 3. The dotted line indicates the same relic at \(z=0.27\) in the \(yz\)-plane of the simulation, as in the lowest row of Fig. 2.
MHz. The lower frequency end of the relic spectrum is significantly flatter, with a spectral index of \(\alpha\approx-0.72\). This indicates that there is a re-acceleration process which is superimposed on an older cooling spectrum, even though the very steep spectrum still poses problems under this assumption (see discussion in Riseley et al., 2022). X-ray observations with eROSITA (Sanders et al., 2022) show a number of discrete sources in close proximity to _D1_, but no extended sources that could indicate an in-falling group. The extended sources \(X4\) and \(X6\) that lie in (projected) close proximity to _D1_ have significantly higher photometric redshifts (\(z=0.532\) for \(X4\) and \(z=0.907\) for \(X6\)) than Abell 3266 (\(z=0.0589\), Struble and Rood, 1999), which shows that these are background sources and not in-falling groups. However, as can be seen in the left panel of Fig. 2 at \(z=0.27\), depending on the projection it is not necessarily easy to distinguish the in-falling structure in the X-ray emission.
### PSZ2 G145.92-12.53
Another concave radio relic detected in PSZ2G145.92-12.53 similarly shows an increase in X-ray flux with concave morphology in close proximity to the relic (see Fig. 1 in Botteon et al., 2021). We note that there is also a detected peak in X-ray surface brightness akin to the one observed in the _Rim_ region in PSZ2G145.92-12.53, indicating that similar effects may be at play there, as briefly discussed by the authors.
### Abell 2256
As previously discussed, at \(z=0.27\) in the \(xz\) plane, corresponding to the third row in Fig. 2, our WWR closely resembles the steep radio relic found in Abell 2256. Rajpurohit et al. (2022) note an association between the relic and the source F, without an X-ray counterpart (see also Owen et al., 2014; Ge et al., 2020). This could hint towards the group having passed the shock before in a similar process as discussed here. The superposition of injected and cooling parts of the shock surface can also be seen in the color-color plots in Rajpurohit et al. (2022), which indicate that the relic consists of a number of overlapping substructures. The (re-) acceleration of particles in the turbulent downstream of \(S_{1}\), which becomes the upstream of \(S_{3}\), also produces filamentary structures seen in the relativistic electron component (second panels from left in Fig. 2) as observed in Abell 2256 (see e.g. Dominguez-Fernandez et al., 2021; Wittor et al., 2023, for detailed studies of surface structures in radio relics). The observed relic shows very little spectral steepening, making it difficult to discern if it was bent against its propagation direction. The little steepening that is being detected however points towards the cluster center akin to our simulated relic, which can indicate a similar process to the one we discussed here.
### CIZA J2242.8+5301
In the case of the counter shock to the sausage relic in CIZA J2242.8+5301 the reason for the strong variations of synchrotron spectral index is still under debate (see discussion in Di Gennaro et al., 2018). In the context of our merger scenario these variations can be understood as follows: As the outwards traveling shock (\(S_{1}\)) collides with the bow-shock of the in-falling substructure (\(M_{3}\)) and is deformed, the resulting shock surface (\(S_{3}\)) shows strong variations in sonic Mach number. Wherever the sonic Mach number is \(\mathcal{M}_{s}<2\) our DSA model allows no CR (re-) acceleration and the pre-existing population is simultaneously cooling due to synchrotron and IC losses and being adiabatically compressed by the subcritical shock. This leads to a continuously steepening synchrotron spectrum, while the adiabatic compression leads to an increase in synchrotron surface brightness. In regions of the shock surface where \(\mathcal{M}_{s}>2\) there is ongoing (re-) acceleration of CR electrons, which lead to a flatter spectrum than for the cooled population. This superposition of cooling- and acceleration dominated areas on the shock surface leads to a strong variation of synchrotron spectral index, as can be seen in the bottom row of Fig. 2.
## 5 Conclusion
In this work we showed the first results of a high-resolution simulation of a massive galaxy cluster with an on-the-fly spectral Fokker-Planck solver to study the acceleration, advection and aging of CR electrons in cosmological zoom-in simulations. We applied this simulation to study a rare form of radio relics that show inward-bent, instead of the typical outward-bent morphologies. Our results can be summarized as follows:
* In complex merging systems with multiple ongoing mergers collisions between bow-shocks of in-falling substructures and outwards traveling merger shocks can deform the outwards traveling shocks in a way that is morphologically very similar to the currently reported _'Wrong Way' relics_.
* These collisions between shocks increase the Mach number at the contact surface of the shocks and with that boost the (re-)acceleration efficiency of CR electrons. This makes their detection easier than that of the cooled, outwards moving shock.
* The inclusion of an on-the-fly spectral treatment of CR electrons allows us to reproduce the large variance of synchrotron spectral slope across the relic surface. This variance stems from the co-existence of an aged CR electron population in the outwards traveling shock and newly injected CRs at the high Mach number regions of the shock surface.
Future work will expand our sample size of radio relics by performing further zoom-in simulations of the cluster set presented in Bonafede et al. (2011) at 250x base resolution and will study surface structure and polarisation properties of these relics, as well as \(\gamma\)-ray emission by the accelerated protons.
## Acknowledgments
We thank the anonymous referee for their helpful and constructive referee report, which improved the quality of this manuscript. We thank Nadia Boss for the illustration of Fig. 1. LMB would like to thank Barbel Koribalski, Sebastian Nuza and Harald Lesch for helpful discussion. LMB and KD are acknowledging support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311 and support for the COMPLEX project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program grant agreement ERC-2019-AdG 860744. UPS is supported by the Simons Foundation through a Flatiron Research Fellowship at the Center for Computational Astrophysics of the Flatiron Institute. The Flatiron Institute is supported by the Simons Foundation.
The simulations were carried out at the CCA clusters "rusty" located in New York City as well as the cluster "popeye" located at the San Diego Supercomputer Center (SDSC). Additional computations were carried out at the c2pap cluster at the Leibniz-Rechenzentrum under the project pn36ze.
Simulations are carried out with OpenGadget3(Springel, 2005; Dolag & Stasyszyn, 2009; Beck et al., 2016; Groth et al., 2023). Processing the simulation output was done with GadgetIO.jl(Boss & Valenzuela, 2023) and GadgetUnits.jl(Boss, 2023a). Mapping SPH data to a cartesian grid was performed with SPHKernels.jl(Boss, 2023b) and SPHtoGrid.jl(Boss, 2023c). These packages use the Julia programming language (Bezanson et al., 2014). Figures were generated with matplotlib(Hunter, 2007), using PyPlot.jl. The analysis scripts to reproduce this work are available at Zenodo via Boss(2023d). Data of the relevant simulation domain is available at Zenodo via Boss et al. (2023).
|
2309.07729 | Imitation Learning-based Visual Servoing for Tracking Moving Objects | In everyday life collaboration tasks between human operators and robots, the
former necessitate simple ways for programming new skills, the latter have to
show adaptive capabilities to cope with environmental changes. The joint use of
visual servoing and imitation learning allows us to pursue the objective of
realizing friendly robotic interfaces that (i) are able to adapt to the
environment thanks to the use of visual perception and (ii) avoid explicit
programming thanks to the emulation of previous demonstrations. This work aims
to exploit imitation learning for the visual servoing paradigm to address the
specific problem of tracking moving objects. In particular, we show that it is
possible to infer from data the compensation term required for realizing the
tracking controller, avoiding the explicit implementation of estimators or
observers. The effectiveness of the proposed method has been validated through
simulations with a robotic manipulator. | Rocco Felici, Matteo Saveriano, Loris Roveda, Antonio Paolillo | 2023-09-14T14:07:08Z | http://arxiv.org/abs/2309.07729v1 | # Imitation Learning-based Visual Servoing
###### Abstract
In everyday life collaboration tasks between human operators and robots, the former necessitate simple ways for programming new skills, the latter have to show adaptive capabilities to cope with environmental changes. The joint use of visual servoing and imitation learning allows us to pursue the objective of realizing friendly robotic interfaces that (i) are able to adapt to the environment thanks to the use of visual perception and (ii) avoid explicit programming thanks to the emulation of previous demonstrations. This work aims to exploit imitation learning for the visual servoing paradigm to address the specific problem of tracking moving objects. In particular, we show that it is possible to infer from data the compensation term required for realizing the tracking controller, avoiding the explicit implementation of estimators or observers. The effectiveness of the proposed method has been validated through simulations with a robotic manipulator.
Keywords:Visual Servoing, Imitation Learning, Visual Tracking
## 1 Introduction
Today robots are not merely asked to execute tasks in controlled environments, but they must have friendly interfaces so that everyone can conveniently operate them in everyday life. In fact, given their high level of ubiquity, more and more robots are at the disposal of people with no technical expertise. As a consequence, easy control frameworks that do not require specific engineering or programming skills are urgently needed. Furthermore, modern robots operating "in the wild" need to be highly adaptive, to cope with changes of dynamic environments.
Imitation Learning (IL) [29], also known as programming by demonstrations [4] or learning from demonstrations [3], promises to avoid specific coding duties by imitating the desired behavior as performed by an expert [5]. With respect to classic control paradigms, IL is easier and more convenient for non-expert operators, as they only need to provide demonstrations of the desired robotic tasks. Among the IL approaches, Dynamical System (DS)-based
methods [15, 27, 28] allow realizing the imitation strategy while ensuring stability properties. Adaptive capabilities, instead, can be realized by including exteroceptive sensing, such as vision, into the IL strategy. In particular, recent works [23, 24, 30] have explored the possibility of combining Visual Servoing (VS) [7, 8] with DS-based IL. We name such an integration Imitation Learning for Visual Servoing (ILVS). Such a combination brings benefits to both techniques: on the one side, the visual perception adds adaptability to the IL scheme to cope with environmental changes; on the other, the imitation strategy allows the addition of tasks or constraints to the VS law with no specific implementation.
This work aims at resorting to the ILVS paradigm to tackle the specific problem of tracking moving objects. Traditional tracking techniques need to estimate the motion of the target, e.g., specifically implementing a Kalman filter [6] or predictive controllers [13]. Instead, we provide a framework that leverages ILVS and extrapolates from demonstrations of tracking experiments the required information for adding the tracking skill to the basic VS law. In particular, we propose to use the so-called Reshaped Dynamical System (RDS) approach [28] to imitate the tracking behavior into the basic VS control. The resulting learning-aided control system has been validated with robotic simulations.
## 2 Background
The well-known VS technique [7, 8] employs vision to control the motion of a robot. In particular, in image-based VS, considered in this work, the objective is to zero the difference between desired and measured visual features that are directly defined on the camera image. Such visual features represent the feedback of the controller that computes camera velocities to achieve a desired task; they can be detected with standard image processing [16] or more sophisticated methods, e.g., artificial neural network [22]. Assuming an eye-in-hand configuration, a static target, and constant desired features, the basic VS law computes the camera velocity \(\mathbf{v}\in\mathbb{R}^{6}\) with a simple reactive controller. Its objective is to nullify the visual error \(\mathbf{e}\in\mathbb{R}^{k}\) between the detected and desired visual features:
\[\mathbf{v}=-\lambda\widehat{\mathbf{L}^{+}}\mathbf{e}, \tag{1}\]
where \(\lambda\) is a positive scalar gain and \(\widehat{\mathbf{L}^{+}}\in\mathbb{R}^{6\times k}\) is an approximation of the Moore-Penrose pseudoinverse of the interaction matrix [7]. Such an approximation is normally due to unknown information, such as the depth of the visual features. The simple law (1) can be augmented with other tasks or constraints to enable additional skills, by employing planning techniques [10, 17], predictive controllers [2, 20, 21, 26], and other sorts of optimization-based frameworks [1, 18, 19]. However, such approaches require careful design and implementation of the additional modules, which is desirable to avoid for the sake of ease of use.
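To make the structure of the reactive law (1) concrete, the following minimal NumPy sketch (our illustration, not the code of [7, 8]) builds the classical point-feature interaction matrix and evaluates \(\mathbf{v}=-\lambda\widehat{\mathbf{L}^{+}}\mathbf{e}\) for four image points (\(k=8\)); the feature values, the depth estimate \(Z\), and the gain are placeholders.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Classical 2x6 interaction matrix of a normalized image point (x, y) at depth Z,
    mapping the camera twist [vx, vy, vz, wx, wy, wz] to the feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def basic_vs_law(s, s_star, L_hat_pinv, lam=2.0):
    """Reactive law (1): v = -lambda * L_hat^+ * e, with e = s - s_star."""
    e = s - s_star
    return -lam * L_hat_pinv @ e, e

# Placeholder example: 4 normalized image points (k = 8) and their desired values.
s = np.array([0.12, 0.08, 0.22, 0.09, 0.21, 0.19, 0.11, 0.18])
s_star = np.array([-0.05, -0.05, 0.05, -0.05, 0.05, 0.05, -0.05, 0.05])
Z_hat = 0.5  # assumed constant depth used to build the approximated interaction matrix

L_hat = np.vstack([point_interaction_matrix(s[2*i], s[2*i+1], Z_hat) for i in range(4)])
v, e = basic_vs_law(s, s_star, np.linalg.pinv(L_hat))  # camera twist (6-vector), visual error
```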
To this end, inspired by the DS paradigm, it has been proposed to augment the skills of the basic law with an ILVS strategy [24]. In particular, by using the specific RDS method [28], one could write the augmented VS law as follows:
\[\mathbf{v}=-\lambda\widehat{\mathbf{L}^{+}}\mathbf{e}+h\mathbf{\rho}(\mathbf{e}), \tag{2}\]
where \(\mathbf{\rho}(\mathbf{e})\) is an error-dependent corrective input used to follow complex trajectories and \(h\) is a vanishing term used to suppress \(\mathbf{\rho}\) after a user-defined time and retrieve stability. Such an approach can be used to generate complex visual trajectories, e.g., to avoid collisions, as done in [24]. In this work, instead, we use this formulation to introduce the learned compensation term needed to achieve the tracking of moving objects, as explained in the next section.
## 3 Method
### Problem definition
The aim of our work is to enable visual tracking of moving targets avoiding explicit programming of the required additional components of the basic law (1).
Assuming a moving target, the VS law has to account for such motion [8]:
\[\mathbf{v}=-\lambda\widehat{\mathbf{L}^{+}}\mathbf{e}-\widehat{\mathbf{L}^{+}}\frac{\partial\mathbf{e}}{\partial t}, \tag{3}\]
where the second term on the right of the equation actually acts as a feedforward term to compensate for the error's time variation due to the target motion [8]. Ad hoc techniques can be implemented to estimate the term due to the motion of the target so that it can be inserted in (3) and compensated, e.g., with the introduction of integrators [9], feedforward terms [6, 12] or filters [13, 31].
In this work, instead, our aim is to rely on an imitation strategy to infer the compensation term of the law (3) from previous demonstrations of tracking experiments. In particular, inspired by DS-based approaches as in (2), we treat the reshaping term \(\mathbf{\rho}\), to be learnt from data, as the compensation term in (3):
\[\mathbf{\rho}=-\widehat{\mathbf{L}^{+}}\frac{\partial\mathbf{e}}{\partial t}. \tag{4}\]
Therefore, our problem can be formulated as follows: learn from previous demonstrations an estimate of the compensation term \(\hat{\mathbf{\rho}}\) so that the VS law
\[\mathbf{v}=-\lambda\widehat{\mathbf{L}^{+}}\mathbf{e}+\hat{\mathbf{\rho}}(\mathbf{e}) \tag{5}\]
realizes tracking of moving objects. It is worth mentioning that (5) is formally the same as (2). However, the vanishing term \(h\) is not used in (5), since the estimate \(\hat{\mathbf{\rho}}\) has to remain always active to perform the tracking skill.
### Dataset
We assume that an "oracle" is available to provide a few demonstrations of the full desired tracking behavior. A possible oracle could be a human user, who can kinesthetically teach the robot the tracking motion, or an ideal controller in simulated environments, where all the required information is perfectly known.
During the oracle's executions, data describing how the task is carried out are recorded for each timestamp. In particular, we log the evolution of the visual error, as measured on the camera image, and the corresponding velocities, as shown to the camera in order to achieve the full desired task:
\[\mathcal{D}=\left\{\mathbf{e}_{n}^{d},\mathbf{v}_{n}^{d}\right\}_{n=1,d=1}^{N,D}, \tag{6}\]
where \(N\) is the number of samples and \(D\) the number of demonstrations. This dataset serves as the basis for the actual training set \(\mathcal{T}\) that is built as follows:
\[\mathcal{T}=\left\{\mathbf{\varepsilon}_{n}^{d},\mathbf{\rho}_{n}^{d}\right\}_{n=1,d=1} ^{N,D}, \tag{7}\]
considering that \(\mathbf{\varepsilon}_{n}^{d}=\widehat{\mathbf{L}^{+}}\mathbf{e}_{n}^{d}\) and \(\mathbf{\rho}_{n}^{d}=\mathbf{v}_{n}^{d}+\lambda\widehat{\mathbf{L}^{+}}\mathbf{e}_{n}^{d}\). Note that for all the demonstrations the value of the control gain \(\lambda\) is kept fixed, and the approximated pseudoinverse of the interaction matrix \(\widehat{\mathbf{L}^{+}}\) is assumed to be constant and equal to its value at convergence.
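As an illustration of how the training set (7) can be assembled offline, the short sketch below (our own, with placeholder array names) maps the logged errors and camera velocities to the pairs \((\mathbf{\varepsilon}_{n},\mathbf{\rho}_{n})\), assuming the fixed \(\lambda\) and \(\widehat{\mathbf{L}^{+}}\) mentioned above.

```python
import numpy as np

def build_training_set(e_log, v_log, L_hat_pinv, lam=2.0):
    """Map logged demonstrations {e_n, v_n} to the training pairs {eps_n, rho_n} of eq. (7):
    eps_n = L_hat^+ e_n  and  rho_n = v_n + lambda * L_hat^+ e_n (from eq. (5))."""
    eps = e_log @ L_hat_pinv.T      # (N, 6): transformed visual errors
    rho = v_log + lam * eps         # (N, 6): compensation targets
    return eps, rho

# e_log: (N, k) logged visual errors; v_log: (N, 6) demonstrated camera velocities (placeholders).
```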
### Learning the compensation term
Given the training dataset (7), an estimate of the compensating term can be conveniently retrieved from vision data using any regression function \(\mathbf{r}\). In particular, we train a Gaussian Mixture Model (GMM) on \(\mathcal{T}\) to estimate the velocity term needed to compensate for the motion of the target object. Therefore, Gaussian Mixture Regression (GMR) is used to retrieve a smooth estimate of \(\mathbf{\rho}\), namely \(\hat{\mathbf{\rho}}\). The GMR takes as input the current value of \(\mathbf{\varepsilon}\) and provides \(\hat{\mathbf{\rho}}\) as
\[\hat{\mathbf{\rho}}=\mathbf{r}_{\mathrm{GMR}}(\mathbf{\varepsilon}\ |\ \mathcal{T}). \tag{8}\]
Therefore, the compensation term is estimated online using (8) and inserted into the control law (5) to achieve the tracking of moving objects.
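A possible implementation of the GMM/GMR step is sketched below; it is our own illustration (the paper does not specify a library), using scikit-learn for the mixture fit and the standard Gaussian conditioning formula for the regression. The input dimensionality d_in = 6 follows from \(\mathbf{\varepsilon}=\widehat{\mathbf{L}^{+}}\mathbf{e}\in\mathbb{R}^{6}\).

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(eps, rho, n_components=11, seed=0):
    """Fit a joint GMM on stacked [eps, rho] samples; eps and rho are (N, 6) arrays."""
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=seed).fit(np.hstack([eps, rho]))

def gmr_predict(gmm, eps_query, d_in=6):
    """Conditional mean E[rho | eps] of the joint GMM (standard GMR formula)."""
    x = np.asarray(eps_query, dtype=float)
    w, mu, S = gmm.weights_, gmm.means_, gmm.covariances_
    # Responsibilities of each mixture component given the input dimensions only.
    h = np.array([w[k] * multivariate_normal.pdf(x, mu[k, :d_in], S[k][:d_in, :d_in])
                  for k in range(gmm.n_components)])
    h /= h.sum()
    rho_hat = np.zeros(mu.shape[1] - d_in)
    for k in range(gmm.n_components):
        gain = S[k][d_in:, :d_in] @ np.linalg.inv(S[k][:d_in, :d_in])
        rho_hat += h[k] * (mu[k, d_in:] + gain @ (x - mu[k, :d_in]))
    return rho_hat

# Online use, cf. eqs. (5) and (8):
#   eps = L_hat_pinv @ e
#   v   = -lam * (L_hat_pinv @ e) + gmr_predict(gmm, eps)
```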
## 4 Results
### Validation setup
To validate our framework we consider a robotic experiment with the robot manipulator Franka Emika [14], which has 7 joints and an Intel RealSense D435i sensor (used as a monocular camera) mounted on the end-effector. The sensor has a field of view of 69\({}^{\circ}\times\)42\({}^{\circ}\) and a frame resolution of 1920\(\times\)1080 pixel. The robot and the environment for the experiments are simulated in CoppeliaSim [11], as
shown in Fig. 1. The goal of the experiment is to allow the robot to reach a box that moves at a constant velocity on a conveyor belt. In other terms, we set the desired features so that at convergence the robot centers the box on the image plane. The box is marked with an AprilTag marker, whose corners provide the visual features for the VS law. In particular, we use the 4 corner points of the marker as visual features (i.e., \(k=8\)). As classically done in VS, 4 points are enough to ensure robust visual feedback. At the start of the experiments, the conveyor belt accelerates from zero to 0.1 m/s and keeps the velocity constant for the rest of the experiment. The implementation of the framework has been done in Python 2.7 language within the ROS [25] infrastructure.
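For concreteness, the sketch below (again our illustration, with hypothetical variable names) shows how the \(k=8\) feature vector and the visual error could be built from the four detected marker corners, assuming a detector that returns the corner pixel coordinates and known camera intrinsics.

```python
import numpy as np

def normalize_corners(corners_px, fx, fy, cx, cy):
    """Convert 4 detected corner pixels (4, 2) into a stacked feature vector s (8,)
    of normalized image coordinates, as used by the image-based VS law."""
    x = (corners_px[:, 0] - cx) / fx
    y = (corners_px[:, 1] - cy) / fy
    return np.stack([x, y], axis=1).reshape(-1)

def visual_error(corners_px, desired_corners_px, fx, fy, cx, cy):
    """Visual error e = s - s*, with both terms expressed in normalized coordinates."""
    s = normalize_corners(corners_px, fx, fy, cx, cy)
    s_star = normalize_corners(desired_corners_px, fx, fy, cx, cy)
    return s - s_star
```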
The oracle used to collect the demonstrations consists of an ideal VS controller provided with complete knowledge of the dynamics of the target, available in the simulated environment. In practice, we use the law (3) with \(\lambda=2\), and the compensation term is built from the perfect knowledge of the box velocity. The interaction matrix has been approximated by using the value of the visual features depth at the target, which is \(0.09116\,\mathrm{m}\). In total, we have collected three demonstrations of the task. If not otherwise mentioned, the same value of the gain and the same approximation of the interaction matrix are kept for the online experiments. It is worth mentioning that other teaching methodologies could be used, such as kinesthetic teaching or teleoperation. Our choice was dictated by the need for high precision in tracking the object: a tracking controller with complete knowledge, as available in simulation, provides way better performances for precise movements than human demonstration. Furthermore, human demonstrations usually require preprocessing of the trajectories to grant exact convergence to the target in the feature space. The regression is carried out using GMM with 11 components. The number of components has been set performing a grid search. At each iteration of the controller, the framework detects new visual features and computes the new value of \(\boldsymbol{\varepsilon}\), which is used by the GMR to compute an estimate \(\hat{\boldsymbol{\rho}}\) of the compensation term that is finally inserted
Figure 1: Validation setup: the Franka Emika robot manipulator in the CoppeliaSim environment has to reach a box moving on a conveyor belt.
in the control law as in (5). The camera velocity thus computed is sent to the kinematic control of the manipulator that transforms it into joint velocities to move the robot towards the desired tracking behavior. With this setup, multiple tests are carried out to evaluate firstly the learning and replication capabilities of the demonstrated target tracking tasks, and secondly, the system's ability to adapt to new scenarios and sudden changes in the environment.
In the presented plots of the experiments, the trajectories saved in the demonstrations are shown with black dotted lines, whereas the execution of our ILVS framework is in blue; red dots represent the starts of the demonstrated trajectories, while the red crosses are their ends.
The experiments are shown in the video accompanying the paper, available at the following link: [https://youtu.be/ORdAZDmCQsA](https://youtu.be/ORdAZDmCQsA).
### Comparison with the standard VS controller
The first set of experiments aims at comparing the behavior of standard VS without the compensation term, as in (1), with different values of the gain \(\lambda\), against our proposed ILVS strategy. The results of this comparison are shown in Fig. 2. As expected, even if the standard VS law manages to approach the box, due to its motion, it never manages to center it on the image plane. Indeed, a constant error between the current state of the features (denoted in red and numbered from zero to three in Fig. 2) and their desired position (in green) remains at steady state. This error decreases when increasing the value of \(\lambda\) from 1 to 5, but cannot be nullified. It is indeed noteworthy that extremely high gain values cannot provide a reasonable solution to the tracking problem, since they would introduce instability in the control system [7, 8]. Unlike the standard controllers, our ILVS manages to infer from data the required information to compensate for the box motion. As shown in Fig. 2d, ILVS provides the robot with the capability
Figure 2: Comparison between three versions of the standard VS controller and the proposed ILVS strategy.
to approach the target, reach convergence, and keep the camera above the box at the desired pose for the duration of the experiment. Indeed, in this case, the measured visual features match their desired counterpart at steady-state.
Fig. 3 shows, for the same experiment, a qualitative evaluation of the trajectories of the visual features from the demonstrations (black dotted lines), and the trajectories executed by the ILVS strategy (in blue). One can observe the ability of the system to accurately replicate the demonstrated trajectories when starting from a known location (the same as the demonstrated ones).
The correspondent quantitative results of this experiment are presented in terms of average Root-Mean-Square Error (RMSE)5 and its standard deviation measuring the accuracy of the predicted camera position and velocity, and the predicted feature position w.r.t the corresponding quantities contained in the demonstrations. In particular, the average RMSE regarding the predicted visual features position is \(22\pm 11\) pixel. For the camera positions and the linear camera velocities, the obtained results are \(33\pm 24\) mm and \(69\pm 71\) mm/s, respectively.
Footnote 5: RMSE values rounded up to the nearest whole number.
### Target tracking experiments with unseen initial conditions
The second set of experiments is carried out to test the adaptability of the system w.r.t. unseen initial conditions, i.e., when the starting orientation or the position of the camera is different from those demonstrated in the training dataset.
We tested the framework with incremental levels of difficulty. In the first experiment of this set, the initial conditions are analogous (but not identical, unlike in the experiment shown in Fig. 2d and Fig. 3) to the ones in the training dataset. As illustrated in Fig. 4 (left), the starting points of the experiment in the image plane are in the vicinity of the starting points (red dots) of the demonstrations
Figure 3: ILVS experiment with the same initial condition as in the demonstration: visual features trajectories as in the demonstrations and executed by our method.
(black dotted lines), since the initial position of the camera has been slightly moved away from the one in the demonstrations. The starting orientation of the camera is, instead, the same as the demonstrations. Given similar initial conditions, as expected, the system executes the task (blue lines in the plot) without any particular difficulties. Fig. 4 (right) shows the time evolution of the visual error for each of the four features (blue lines), which is kept to zero after a transient time for the duration of the experiment; it is also depicted the average visual error among all features (black line). Four snapshots of this experiment are presented in Fig. 5 showing the manipulator approaching the object and tracking the target moving on the conveyor belt during all its motion.
The second experiment of this set aims to evaluate the effectiveness of the approach in handling unseen conditions. In particular, at the beginning of the experiment, the camera is oriented as in the demonstrations but has a substantial difference in position. The large initial positional offset is well visible in the plot of Fig. 6, where the initial value of the visual features is far off from the
Figure 4: ILVS experiment with similar initial conditions of the demonstrated ones: visual features trajectories (left) and visual error (right).
Figure 5: Snapshots of the ILVS experiment with similar initial conditions of the demonstrations: robot’s external views (top) and camera images (bottom).
demonstration. Nevertheless, the visual feature trajectories shown in Fig. 6 (left) demonstrate that the robot manages to successfully achieve the VS task, as the current values of the features converge to their desired ones, as also demonstrated in the dataset. Similarly, the target tracking performance can be evaluated from the time evolution of the visual error presented in Fig. 6 (right). From this plot, one can see that the visual error is kept at zero after a transient time, even while the box continues moving on the conveyor belt. Four snapshots of this ILVS experiment are shown in Fig. 7: the manipulator reaches the box and keeps tracking it throughout the experiment. The last two snapshots show how the robot manages to keep the box at the center of the image for the rest of the experiment, accommodating the motion induced by the conveyor belt.
The third experiment is meant to test the handling of unseen initial conditions to the greatest degree. As can be seen in Fig. 8 (left) from the position of the features in the image plane, the end-effector of the manipulator at the beginning of the experiment has a pose that is not present in the training data. Nevertheless, the robot still manages to adjust its movement to successfully approach the moving target, ensuring convergence, and, once the target is reached, it is able to track it along its motion, keeping the box at the center of the image (see also the snapshots of Fig. 9). For this experiment, we also show in Fig. 10 the plots of the camera velocities, as demonstrated (grey lines in the plots) and as executed by our method (in blue).
For these three experiments, we provide a quantitative evaluation of the tracking performances. In particular, we considered the phase of the experiments that starts when the visual error is lower than \(5\) pixels (cfr. Fig. 4 (right), Fig. 6 (right), and Fig. 8 (right)). For this portion of the experiments, the visual error is on average \(1.795\pm 0.984\) pixels, corresponding to \(0.475\pm 0.257\) mm of error in the camera position.
Finally, we perform one last test in which we suddenly move the target object during the execution of the experiment. We observed the system's ability to adjust to such sudden and unexpected movements of the target object (tests were pursued with both low gain \(\lambda=2\) and high gain \(\lambda=10\) yielding satisfactory results in both cases). The results of this experiment can be evaluated from the accompanying video.
## 5 Discussion and conclusion
In this work, we have addressed some of the needs that arise from the introduction of friendly robots in domestic and industrial contexts where users are not necessarily experts. In these situations, adaptability and easiness of use are must-haves for robots. Therefore, we have proposed an imitation learning-based visual servoing framework for target tracking operations that avoids explicit programming, leveraging previous demonstrations of the desired behavior. Our approach relies on the VS paradigm and the DS-based IL rationale. In particular, we take advantage of the imitation strategy to learn the compensation term required to achieve the visual tracking experiment. Our approach permits us to realize the tracking without the specific implementation of an estimator or observer of the
Figure 8: ILVS experiment with unseen initial position and orientation: visual features trajectories (left) and visual error (right).
compensation term. The framework has been evaluated with several simulations, which show the ability to handle unseen initial conditions.
As shown by the experiment in Fig. 6 and Fig. 7, the robot can converge to the visual target even starting relatively far from the initial value of the demonstrations. This out-of-domain generalization capability is a structural property of our approach that effectively combines a stable component (from standard VS) and a learned one in the closed-loop control law (5). Indeed, the standard VS component always drives the robot close to the target, i.e., in the training data domain, where the learning of the compensation term is put in an ideal condition to work. Stronger generalization capabilities (e.g., to handle the doubled velocity of the conveyor belt seen during the demonstrations) would require retraining our compensation term. The stability of the proposed controller has not
Figure 10: Camera velocity during the ILVS experiment with unseen initial position and orientation: linear (top) and angular components (down).
Figure 9: Snapshots of the ILVS experiment with unseen initial position and orientation: robot’s external views (top) and camera images (bottom).
been formally investigated (for instance, using tools from the Lyapunov theory). However, in the conducted experiments, the robot was always able to reach the target with sub-millimeter precision. Moreover, we also tested the robustness to disturbances like changes in the object position on the conveyor belt. The fact that the controller behaved as expected in several practical cases suggests that it should have some (local) stability property. However, a formal stability proof is left as future work. Another interesting line for future development is the test of our framework with velocities of the object that are different from the one seen during the demonstrations. Indeed in our current study, the velocity of the object during the validation experiment is the same as the one used during the collection of the demonstrations. Finally, we plan to test our approach with real experiments; to this end, further development will be required to handle the noise in the input data (typical of real-life applications).
|
2309.04234 | Astrometric VLBI observations of H$_2$O masers in an extreme OH/IR star
candidate NSV17351 | Results of astrometric very long baseline interferometry (VLBI) observations
towards an extreme OH/IR star candidate NSV17351 are presented. We used the
VERA (VLBI Exploration of Radio Astrometry) VLBI array to observe 22\,GHz
H$_2$O masers of NSV17351. We derived an annual parallax of 0.247$\pm$0.035 mas
which corresponds to a distance of 4.05$\pm$0.59 kpc. By averaging the proper
motions of 15 maser spots, we obtained the systemic proper motion of NSV17351
to be ($\mu_{\alpha}\cos{\delta}, \mu_{\delta}$)$^{\mathrm{avg}}$ $=$ ($-$1.19
$\pm$ 0.11, 1.30 $\pm$ 0.19) mas\,yr$^{-1}$. The maser spots spread out over a
region of 20 mas $\times$ 30 mas, which can be converted to a spatial
distribution of $\sim$80 au $\times$ $\sim$120 au at the source distance.
Internal motions of the maser spots suggest an outward moving maser region with
respect to the estimated position of the central star. From single dish
monitoring of the H$_2$O maser emission, we estimate the pulsation period of
NSV17351 to be 1122$\pm$24 days. This is the first report of the periodic
activity of NSV17351, indicating that NSV17351 could have a mass of
$\sim$4\,M$_{\odot}$. We confirmed that the time variation of H$_2$O masers can
be used as a period estimator of variable OH/IR stars. Furthermore, by
inspecting dozens of double-peaked H$_2$O maser spectra from the last 40 years,
we detected a long-term acceleration in the radial velocity of the
circumstellar matter to be $0.17\pm0.03$ km\,s$^{-1}$\,yr$^{-1}$ Finally, we
determined the position and kinematics of NSV17351 in the Milky Way Galaxy and
found that NSV17351 is located in an interarm region between the Outer and
Perseus arms. We note that astrometric VLBI observations towards extreme OH/IR
stars are useful samples for studies of the Galactic dynamics. | Akiharu Nakagawa, Atsushi Morita, Nobuyuki Sakai, Tomoharu Kurayama, Hiroshi Sudou, Gabor Orosz, Akito Yuda, Daichi Kaseda, Masako Matsuno, Shota Hamada, Toshihiro Omodaka, Yuji Ueno, Katsunori M. Shibata, Yoshiaki Tamura, Takaaki Jike, Ken Hirano, Mareki Honma | 2023-09-08T09:40:26Z | http://arxiv.org/abs/2309.04234v1 | # Astrometric VLBI observations of H\({}_{2}\)O masersin an extreme OH/IR star candidate NSV17351
###### Abstract
Results of astrometric very long baseline interferometry (VLBI) observations towards an ex
treme OH/IR star candidate NSV17351 are presented. We used the VERA (VLBI Exploration of Radio Astrometry) VLBI array to observe 22 GHz H\({}_{2}\)O masers of NSV17351. We derived an annual parallax of 0.247\(\pm\)0.035 mas which corresponds to a distance of 4.05\(\pm\)0.59 kpc. By averaging the proper motions of 15 maser spots, we obtained the systemic proper motion of NSV17351 to be \((\mu_{\alpha}\cos\delta,\mu_{\delta})^{\rm avg}=(-1.19\pm 0.11,\,1.30\pm 0.19)\) mas yr\({}^{-1}\). The maser spots spread out over a region of 20 mas \(\times\) 30 mas, which can be converted to a spatial distribution of \(\sim\)80 au \(\times\)\(\sim\)120 au at the source distance. Internal motions of the maser spots suggest an outward moving maser region with respect to the estimated position of the central star. From single dish monitoring of the H\({}_{2}\)O maser emission, we estimate the pulsation period of NSV17351 to be 1122\(\pm\)24 days. This is the first report of the periodic activity of NSV17351, indicating that NSV17351 could have a mass of \(\sim\)4 M\({}_{\odot}\). We confirmed that the time variation of H\({}_{2}\)O masers can be used as a period estimator of variable OH/IR stars. Furthermore, by inspecting dozens of double-peaked H\({}_{2}\)O maser spectra from the last 40 years, we detected a long-term acceleration in the radial velocity of the circumstellar matter to be \(0.17\pm 0.03\) km s\({}^{-1}\) yr\({}^{-1}\). Finally, we determined the position and kinematics of NSV17351 in the Milky Way Galaxy and found that NSV17351 is located in an interarm region between the Outer and Perseus arms. We note that astrometric VLBI observations towards extreme OH/IR stars are useful samples for studies of the Galactic dynamics.
Astrometry: -- masers (H\({}_{2}\)O) -- stars: individual (NSV17351) -- stars: variable: +
Footnote †: journal: Astroparticle Physics
## 1 Introduction
Asymptotic Giant Branch (AGB) stars are known to be at the final stage of evolution of stars with initial masses of 0.8 to 10 \(M_{\odot}\)(e.g. Karakas & Lattanzio, 2014). Among them, the stars identified as bright infrared and OH maser emitters are referred to as OH/IR stars. They represent thick circumstellar envelopes and high mass loss ratio, sometimes up to \(10^{-4}M_{\odot}\) yr\({}^{-1}\)(te Lintel Hekkert et al., 1991). OH/IR stars are thought to be a group of evolved AGB stars at the stage before they evolve to planetary nebulae (te Lintel Hekkert et al., 1991; Etoka & Diamond, 2006; Kamizuka et al., 2020). Same as the other types of AGB stars like Mira variables and semiregular variables, OH/IR stars often represent stellar pulsation in optical and infrared bands with typical pulsation periods of 100 to 1000 days. Engels et al. (1983) determined pulsation periods between 500 to 1800 days for 15 OH/IR stars
from infrared (\(K\)-band) monitoring observation.
A subclass of OH/IR stars undergoing especially intensive mass loss are recognized as extreme OH/IR stars (Justtanont et al., 2013). According to the study by Hofner & Olofsson (2018), we find that sources with such high mass loss ratio have exceedingly long pulsation period, i.e., \(\geq\)800 days. Furthermore, at the late stage of AGB phase, it is known that there is also a fraction of OH/IR stars showing no or little variability, called non-variable OH/IR stars (Engels, 2002). Towards bright OH/IR stars in Baud's catalog (Baud et al., 1981), Herman & Habing (1985) monitored the OH maser emission and found that 25% of the targets were non-variable OH/IR stars. In the evolution from AGB to post-AGB phase, it is thought that optical variability gradually diminishes as ceases the pulsation and heavy mass loss from the central star (e.g. Kamizuka et al., 2020).
Study of the circumstellar matter is important for understanding of the chemical properties of the Galaxy and the evolution of stars. AGB stars play a key role in the formation and transportation of circumstellar matter. OH/IR stars often host OH, H\({}_{2}\)O, and SiO masers in their circumstellar envelopes (Engels, 1979; Engels et al., 1986; Nyman et al., 1986). In previous research, a large amount of OH/IR stars were monitored using 1612, 1665, and 1667 MHz OH masers for determination of the OH maser flux density and its time variation (see e.g., Engels et al., 2012). Very long baseline interferometry (VLBI) observations of these masers revealed detailed structure and dynamics of circumstellar matters of AGB stars. Among them, the study by Diamond & Kemball 2003 is one of the most representative. Movies of SiO masers of TX Cam revealed ring like molecular outflows of masers explained with tangential amplification. The SiO maser shell shows significant asymmetry and can be described as a fragmented or irregular ellipsoid. Individual SiO maser components have radial motions in the range of \(\sim\)5 to 10 km s\({}^{-1}\). Decin et al. (2010) observed a nearby oxygen-rich AGB star IK Tau and presented its expansion velocity profile. The velocity data in their study were obtained from VLBI mapping studies of maser emissions from SiO, H\({}_{2}\)O, and OH. The CO expansion velocity derived from ground-based CO \(\mathbf{J}=1-0\) data was also considered in the study. They clarified the velocity field around an AGB star at a certain evolution phase through a wide range of radial distances, from an order of 10\({}^{13}\) cm to 10\({}^{16}\) cm (\(\sim\)1 au to \(\sim\)1000 au). The revealed velocity profile can be an evidence for radial acceleration in the expansion velocity of the circumstellar matter. Since H\({}_{2}\)O masers occur at a radial distance of 10\({}^{14}\) cm to 10\({}^{15}\) cm (\(\sim\)10 au to \(\sim\)100 au) where we can expect remarkable acceleration of the circumstellar envelopes, we try to explore the long-term acceleration using H\({}_{2}\)O maser data in the literature and our own observations.
NSV17351 (also named as OH224.3\(-\)1.3, IRC\(-\)10151, CRL1074, and IRAS07054\(-\)1039) is an OH/IR star (Le Squeren et al., 1979) with a spectral type of M8 (Hansen & Blanco, 1975). It has OH maser emissions at 1612, 1665, and 1667 MHz (Le Squeren et al., 1979; te Lintel Hekkert et
al. 1989), SiO masers at 86 GHz (Ukita & Goldsmith 1984; Engels & Heske 1989), 43 GHz (Kim et al. 2010), and H\({}_{2}\)O maser at 22 GHz (Blitz & Lada 1979; Crocker & Hagen 1983; Cesaroni et al. 1988; Takaba et al. 1994; Kim et al. 2010). In a study by te Lintel Hekkert et al. (1989), a stellar LSR velocity of the source is reported to be 52 km s\({}^{-1}\) with no indication of its uncertainty. According to a study of SiO masers by Ukita & Goldsmith (1984), a single narrow peak at 50 km s\({}^{-1}\) with a linewidth of \(\thicksim\)4 km s\({}^{-1}\) was detected. Also, in Kim et al. (2010), LSR velocities of 51.8 and 51.1 km s\({}^{-1}\) are presented. Based on these velocities in previous studies, it is reasonable to assume the uncertainty in the stellar LSR velocity is \(\thicksim\)2 km s\({}^{-1}\). Though the pulsation period of NSV17351 is not yet clearly given in the literature, from our observations we found the pulsation period of the source to be longer than 800 days, suggesting that NSV17351 is a candidate extreme OH/IR star.
In order to obtain physical parameters of the celestial object, distance of the source is crucial. The phase-lag method is known as a technique to derive distances of OH/IR stars (van Langevelde et al. 1990; Engels et al. 2015; Etoka et al. 2018). Distances to several OH/IR stars using the phase-lag method are reported in Engels et al. (2015). However, uncertainties of the distances from the phase-lag method are about 20 %.
Recently, the Gaia Data Release 3 (Gaia DR3; Gaia Collaboration et al. 2022)1 provided a trigonometric parallax of 0.088\(\pm\)0.147 mas for NSV17351. Proper motion is also reported to be \(-0.03\pm\)0.16 mas yr\({}^{-1}\) and 1.88\(\pm\)0.19 mas yr\({}^{-1}\) in right ascension (RA) and declination (DEC), respectively. Gaia is very sensitive to extinction, and the angular size of the star can be comparable to the measured parallax itself. Therefore, astrometry of OH/IR stars is considered to be essentially difficult for Gaia.
Footnote 1: Gaia Data Release 3; [https://www.cosmos.esa.int/web/gaia/data-release-3](https://www.cosmos.esa.int/web/gaia/data-release-3)
Trigonometric parallax distance measurements to a couple of long-period variables using astrometric VLBI observations have been reported (see e.g., Nakagawa et al. 2016). However, there have been only a few VLBI astrometric results for OH/IR stars. A study by Orosz et al. (2017) is a notable one conducted with astrometric VLBI observations of the 1612 MHz OH maser. They used the NRAO Very Long Baseline Array (VLBA)2 and determined parallaxes of OH/IR stars. The obtained parallax of OH 138.0+7.2 was 0.52\(\pm\)0.09 mas, making this the first sub-mas OH maser parallax. In contrast to the compactness of H\({}_{2}\)O and SiO masers, angular sizes of OH masers are known to be relatively extended and diffuse. OH maser parallaxes with VLBI struggle with extended maser structure and poorer resolution. Therefore, astrometric VLBI observations at higher frequencies using H\({}_{2}\)O masers can help us to determine smaller parallaxes with better accuracy.
Footnote 2: Very Long Baseline Array, [https://science.nrao.edu/facilities/vtha](https://science.nrao.edu/facilities/vtha)
Sources with pulsation periods of \(\thicksim\)1000 days are thought to have initial masses of \(\thicksim\)4 M\({}_{\sun}\)
(Feast 2009). Based on studies of the AGB star evolution (e.g. Vassiliadis & Wood 1993), ages of OH/IR stars with periods of 1000 days can be estimated to be \(\thicksim\)10\({}^{8}\) yr. Recent studies predict that galactic spiral arms are bifurcating/merging on a time scale of 10\({}^{8}\) yr (Baba et al. 2013). So, OH/IR stars with ages of \(\thicksim\)10\({}^{8}\) yr can be used as a new probe to study the structure and evolution of spiral arms. The astrometric VLBI observation of NSV17351 is the first attempt to use OH/IR stars for studies of the Galactic dynamics.
In section 2, we give details of our VLBI observations and single dish monitoring observations, including the reduction process. In section 3, we present our results: the pulsation period, annual parallax and proper motion of NSV17351. Section 4 explains our interpretation of the H\({}_{2}\)O maser distribution and kinematics. We also discuss the evolutionary phase of NSV17351 based on the radial velocities of the H\({}_{2}\)O maser spectrum. We mention the difference between the VLBI and Gaia astrometric results. The usefulness of extreme OH/IR stars for studies of the Galactic dynamics is also presented. We summarize our study in section 5 with our future prospects.
## 2 Observations and Data Reduction
### Single dish observations
We observed H\({}_{2}\)O maser emission of NSV17351 at a rest frequency of 22.235080 GHz (\(6_{16}\)-\(5_{23}\) transition) once every 3 to 4 weeks from August 2015 to December 2020 using the 20 m aperture telescope at VERA Iriki station in order to obtain its spectra and variability. The total number of single dish observations is 59. Since the pulsation period of NSV17351 is not found in the literature, we estimate the pulsation period from our single dish monitoring. Integration time was 10 to 40 minutes to reduce noise levels in each observation to less than 0.05 K. The conversion factor from antenna temperature to the flux density is 19.6 Jy K\({}^{-1}\). The 32 MHz bandwidth with 1024 spectral channels gives a frequency resolution of 31.25 kHz, which corresponds to a velocity resolution of 0.42 km s\({}^{-1}\). We carried out the data reduction using the Java NEWSTAR software package developed by the Nobeyama Radio Observatory. Amplitude of the raw spectra was calibrated by the chopper-wheel method, then the spectral baseline was corrected using a polynomial function of the seventh order. We excluded a total of 0.63 MHz of signal at both ends of the band. We adopted a signal-to-noise ratio (S/N) of 4 as a detection criterion in our single dish observations.
### VLBI observations
We observed H\({}_{2}\)O maser emission of NSV17351 using the VLBI Exploration of Radio Astrometry (VERA). Eleven epochs of data were taken from April 2018 to June 2019 with an interval of about
one month. VERA is a VLBI array which consists of four 20 m aperture radio telescopes located at Mizusawa, Iriki, Ogasawara, and Ishigaki-jima (Kobayashi et al., 2003). Its maximum baseline length is 2270 km between Mizusawa and Ishigaki-jima stations. Each antenna of VERA is equipped with a dual-beam system (Kawaguchi et al., 2000) which can simultaneously observe a target maser source and an extragalactic continuum source within a separation angle between 0.3\({}^{\circ}\) and 2.2\({}^{\circ}\). Using the dual-beam system, we can calibrate short-term tropospheric fluctuations with the phase-referencing technique (Honma et al., 2008). Table 1 shows the nominal coordinates of the target maser source NSV17351 and extragalactic reference source J0709\(-\)1127. Regarding the revised coordinate of NSV17351 in the table, please see details in section 4.1. Their separation angle is 0.80\({}^{\circ}\) at a position angle of 156\({}^{\circ}\). In our phase-referencing analysis, J0709\(-\)1127 is used as a position reference on the sky plane. Dates of the VLBI observations are presented in table 2 with the Modified Julian Date (MJD). Typical integration times of the two sources were 2 to 3 hours for each VLBI observation. The signals of left-handed circular polarization from the target and position reference source were acquired with a total data recording rate of 1 gigabit per second (Gbps). It can cover a total bandwidth of 256 MHz. The data were recorded onto the hard disk drives of the "OCTADISK" system (Oyama et al., 2016). This entire bandwidth is divided into 16 IF channels. Each IF channel then has a width of 16 MHz. One IF channel (16 MHz) was assigned to the maser source NSV17351 and the remaining 15 IF channels (16 MHz \(\times\) 15 = 240 MHz) were assigned to the reference source J0709\(-\)1127. This process was conducted with a VERA digital filter unit (Iguchi et al., 2005). Correlation processing was done with the Mizusawa software correlator at Mizusawa VLBI observatory, NAOJ. In the final output from the correlator, the 16 MHz bandwidth data of NSV17351 was divided into 512 channels with a frequency resolution of 31.25 kHz. This corresponds to a velocity resolution of 0.42 km s\({}^{-1}\) at 22 GHz. In the correlator output of J0709\(-\)1127, each 16 MHz IF was divided into 32 channels.
### Data reduction of the VLBI data
We reduced the VLBI data using the Astronomical Image Processing System (AIPS1; Greisen, 2003; Fomalont, 1981) developed by the National Radio Astronomy Observatory (NRAO). Amplitude calibration was performed using the gain curves and the system noise temperatures during observations at each station. A bandpass calibration was performed using the bright continuum sources DA193, OJ287, and 3C84. In the phase-referencing process, we used the task "FRING" in AIPS to solve the residual phase, group delays, and delay rates that were included in the correlator output of the reference source J0709\(-\)1127. We adopted an integration time of three minutes ("solint = 3") and
solution interval of 12 seconds ("solsub = 15") in the task "FRING". The self-calibration was done using the tasks "IMAGR" and "CALIB" iteratively to solve residual phases with shorter time scale.
For phase calibration, we need two additional reduction procedures unique to the VERA array. Phase solutions between the dual-beam receivers, which was solved using the correlated data of noise signal injected into the two receivers from artificial noise sources installed on a feedome base of the VERA antenna (Honma et al., 2008), were also applied in the reduction process. Another calibration is related to delay-tracking models used to estimate a priori delays. Since an accuracy of the model in the correlator is not good enough for precise astrometry, we calibrated them based on more accurate delay tracking models and applied better estimates of them. More detailed phase-referencing procedures are shown in Nakagawa et al. (2008).
Image size of the reference source is 12.8 mas \(\times\) 12.8 mas square (256\(\times\)256 pixels with a pixel size of 0.05 mas/pixel). The image of J0709\(-\)1127 was obtained with a peak flux density of \(\thicksim\)280 mJy beam\({}^{-1}\). Typical noise levels of the images were \(\thicksim\)0.9 mJy beam\({}^{-1}\). Then, the solutions from the tasks "FRING" and "CALIB" were transferred to the data of the target maser source NSV17351. Size of the synthesized beam was 1.7 mas \(\times\) 0.9 mas with a major axis position angle of \(-\)32\({}^{\circ}\). After the data calibration given above, we used the task "IMAGR" to make synthesized images of NSV17351 on 102.4 mas \(\times\) 102.4 mas square maps (2048 \(\times\) 2048 pixels with a pixel size of 0.05 mas/pixel). Using the task "IMFIT", we fitted two-dimensional Gaussian functions to bright maser spots to estimate their position and flux density. These positions are used in the parallax and proper motion fitting. Results of the fitting are given in section 3.2. We adopted a signal-to-noise ratio of 7 as a detection criterion in the phase-referenced maps.
## 3 Result
### Determination of the pulsation period from single dish monitoring of H\({}_{2}\)O masers
We conducted 59 single dish observations of the H\({}_{2}\)O maser of NSV17351 from 23 August 2015 (MJD 57257) to 8 December 2020 (MJD 59191). In the 59 observations, we detected H\({}_{2}\)O maser emission in 41 observations. Figure 1 shows examples of total-power spectra of NSV17351 obtained
\begin{table}
\begin{tabular}{l l l c c c} \hline Source & RA (J2000.0) & DEC (J2000.0) & \(l\) & \(b\) & note \\ \hline \hline NSV17351 & \(07^{\rm h}07^{\rm m}49^{\rm s}\).380 & \(-\)10\({}^{\circ}\)44\({}^{\prime}\)5\({}^{\prime\prime}\).90 & 224.34\({}^{\circ}\) & \(-\)1.29\({}^{\circ}\) & nominal \\ & \(07^{\rm h}07^{\rm m}49^{\rm s}\).3876 \(\pm\) 0.0004 & \(-\)10\({}^{\circ}\)44\({}^{\prime}\)6\({}^{\prime\prime}\).005\(\pm\)0.007 & \(-\) & \(-\) & revised \\ J0709\(-\)1127 & \(07^{\rm h}09^{\rm m}10^{\rm s}\).406578 & \(-\)11\({}^{\circ}\)27\({}^{\prime}\)48\({}^{\prime\prime}\).45555 & 225.14\({}^{\circ}\) & \(-\)1.33\({}^{\circ}\) & \\ \hline \end{tabular}
\end{table}
Table 1: Coordinates of the sources
at VERA Iriki station on 3 May 2019 (MJD 58606, top), 28 December 2018 (MJD 58480, middle), and 22 April 2018 (MJD 58230, bottom). We can see prominent maser emissions at LSR velocities (\(V_{\rm LSR}\)) of 39 km s\({}^{-1}\) and 61 km s\({}^{-1}\). A spectrum in 22 April 2018 (MJD 58230) represents the widest emission range in our single dish observations. A center velocity of the spectrum was obtained to be \(50.1\pm 1.9\) km s\({}^{-1}\). The uncertainty was estimated from the full width at half maximum (FWHM) of each peak emission. This velocity is consistent with the source radial velocity of 52 km s\({}^{-1}\) reported by te Lintel Hekkert et al. (1989). In section 4, we use the center velocity of \(50.1\pm 1.9\) km s\({}^{-1}\) as a representative value of a stellar LSR velocity.
In table 3, we summarize results from single dish observations at VERA Iriki station. The \(T_{\rm A}^{\rm blue}\) and \(T_{\rm A}^{\rm red}\) represent antenna temperatures of blue- and red-shifted velocity components in units of K, and \(V^{\rm blue}\) and \(V^{\rm red}\) represent \(V_{\rm LSR}\) of the components. In order to grasp the overall variation of the maser activity, we considered integrated intensities \(I\) in units of K km s\({}^{-1}\), obtained by integrating all maser components over a velocity range from 30 km s\({}^{-1}\) to 75 km s\({}^{-1}\), and presented them in the 7th column of table 3. Scales of the antenna temperatures have relative uncertainties of 5-20% (Shintani et al. 2008). In this study, we uniformly applied uncertainties of 10% to all the integrated intensities. In the last column, we present rms noise levels of each single dish spectrum. Non-detections are labeled with "\(-\)" symbols.
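As a reference for how the tabulated quantity can be obtained, a minimal sketch (our own, with placeholder arrays) of the integrated intensity computed from a calibrated spectrum is given below.

```python
import numpy as np

def integrated_intensity(v_lsr, T_A, v_min=30.0, v_max=75.0):
    """Integrate the antenna temperature over the LSR velocity range [v_min, v_max] in km/s,
    returning I in K km/s; a 10% relative uncertainty is adopted in the text."""
    sel = (v_lsr >= v_min) & (v_lsr <= v_max)
    return np.trapz(T_A[sel], v_lsr[sel])

# v_lsr: (1024,) channel velocities in km/s; T_A: (1024,) calibrated antenna temperatures in K.
```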
Figure 2 shows the time variation of the integrated intensity \(I\) of the H\({}_{2}\)O maser of NSV17351 obtained from 23 August 2015 (MJD 57257) to 8 December 2020 (MJD 59191). Error bars indicate
\begin{table}
\begin{tabular}{c c c c} \hline \hline Obs.ID & Year & Date & MJD \\ \hline \hline
1 & 2018 & 16 April & 58224 \\
2 & & 19 May & 58257 \\
3 & & 02 October & 58393 \\
4 & & 01 November & 58423 \\
5 & & 29 November & 58451 \\ \hline
6 & 2019 & 04 January & 58484 \\
7 & & 03 February & 58517 \\
8 & & 12 March & 58554 \\
9 & & 09 April & 58582 \\
10 & & 04 May & 58607 \\
11 & & 01 June & 58635 \\ \hline \end{tabular}
\end{table}
Table 2: Dates of VLBI observations.
10% uncertainties of integrated intensities. Horizontal axis of figure 2 represents the MJD. In the case that we could not detect any maser emission, we put open circles with downward arrows as detection upper limits of each single dish observation. From figure 2, we found that the integrated intensity \(I\) of the H\({}_{2}\)O maser gradually decreased from MJD 57200 to MJD 57400, then, it further decreased below the detection limit (S/N of 4) of the single dish observations. The maser emission remained below S/N of 4 until the next detection on 08 September 2017 (MJD 58004). In March 2018 (\(\thicksim\)MJD 58200), the maser emission reached its maximum. Then it decreased again and disappeared on 13 May 2019 (MJD 58616). NSV17351 recovered its H\({}_{2}\)O maser flux on 2 March 2020 (MJD 58910) and the integrated intensity increased to 4.29 K km s\({}^{-1}\) on 8 December 2020 (MJD 59191).
Using these monitoring data, we determined the variation period of the H\({}_{2}\)O maser of NSV17351. We introduced a sinusoidal function \(I_{\rm model}\) defined as follows,
\[I_{\rm model}=\Delta I\sin(\frac{2\pi(t+\theta)}{P})+I_{0}, \tag{1}\]
where \(\Delta I\) is the amplitude of the variation, \(t\) is time, \(\theta\) is a zero-phase time lag, \(P\) is the variation period, and \(I_{0}\) is an average. From our least-squares analysis, the variation period \(P\) was solved to be 1122\(\pm\)24 days. In this fitting, we adjusted the value of \(I_{0}\) to be 2.05 K km s\({}^{-1}\). The amplitude \(\Delta I\) was obtained to be 1.86 K km s\({}^{-1}\). The fitting solution is presented with a solid curve in figure 2. The non-detection data were not used in this fitting. Engels et al. (1986) concluded that the H\({}_{2}\)O maser luminosity of OH/IR stars follows the cycle of variation of infrared and OH luminosities, with a possible phase lag of order 0.2 relative to them. Hence, we think that the variation period of 1122\(\pm\)24 days estimated from our H\({}_{2}\)O maser monitoring can be considered as a pulsation period of the central star. Since our monitoring time coverage is shorter than two pulsation cycles, further monitoring will be needed for careful determination of the pulsation period. Nonetheless, we note here that our estimation of the pulsation period is the first report of the periodic activity of NSV17351, and we think this source is a candidate extreme OH/IR star because of its long periodicity.
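For illustration, the least-squares fit of the model (1) to the detected integrated intensities could be carried out as in the following sketch (our own; the array names and the initial guess are placeholders, and non-detections are excluded as in the text).

```python
import numpy as np
from scipy.optimize import curve_fit

def maser_lightcurve(t, dI, theta, P, I0):
    """Sinusoidal model of the integrated H2O maser intensity, following eq. (1)."""
    return dI * np.sin(2.0 * np.pi * (t + theta) / P) + I0

def fit_period(t_mjd, I_obs, I_err, p0=(2.0, 0.0, 1100.0, 2.0)):
    """Fit eq. (1) to the detected epochs; p0 is a rough initial guess
    (amplitude, zero-phase lag, period in days, mean level)."""
    popt, pcov = curve_fit(maser_lightcurve, t_mjd, I_obs, p0=p0,
                           sigma=I_err, absolute_sigma=True)
    return popt[2], np.sqrt(pcov[2, 2])   # period and its formal uncertainty
```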
### Annual parallax and proper motions
In this section, we determine an annual parallax of NSV17351 using positions of maser spots detected in phase-referencing analysis. We trace the angular motion of multiple maser spots on the sky plane. We succeeded in detecting H\({}_{2}\)O maser spots from the 1st to the 10th observations. From 10 out of 11 VLBI observations, we found 27 H\({}_{2}\)O maser spots in 22 velocity channels. As a detection threshold, we adopted a signal-to-noise ratio (S/N) of 7. Noise levels of our phase-referenced images of NSV17351 were 80 to 170 mJy beam\({}^{-1}\). In the 11th observation on 01 June 2019 (MJD 58635), we were not
able to detect any maser spot on the phase-referenced image. In this observation, we found maser emission in neither the single dish observation nor the VLBI observation. From figure 2, we can see that the integrated intensity of the H\({}_{2}\)O maser on 13 May 2019 (MJD 58616) decreased below our detection limit in single dish observation. The 11th VLBI observation on 01 June 2019 (MJD 58635) was carried out after this non-detection.
In table 4, we summarized properties of the detected maser spots. Each column represents the following quantities, maser spot ID in column 1, the LSR velocity (\(V_{\rm LSR}\)) in column 2, offset positions in right ascension (RA) and declination (DEC) relative to the phase tracking center in columns 3 and 4, angular proper motions in RA and DEC in columns 5 and 6, peak flux of the maser spots in column 7, signal-to-noise ratio (S/N) of the peak flux density in column 8, observation IDs where we detected
Figure 1: H\({}_{2}\)O maser spectra of NSV17351 obtained at VERA lriki station on 3 May 2019 (top), 28 December 2018 (middle), and 22 April 2018 (bottom). Noise levels of individual spectra are 0.4 Jy, 0.4 Jy, and 0.8 Jy, from the top to the bottom, respectively. On 22 April 2018, the spectrum shows a double-peaked profile. For convenience, we shifted noise floors to 25 Jy and 40 Jy for spectra in middle and top, respectively.
Figure 2: Time variation of the integrated H\({}_{2}\)O maser intensities \(I\). Left and right ends correspond to 27 June 2015 (MJD 57200) and 27 March 2020 (MJD 59300), respectively. Filled circles represent results of successful detection. In the case of non-detection, we put open circles with downward arrows as representatives of detection upper limits. Solid line is the best-fit model indicating a pulsation period of 1122\(\pm\)24 days.
maser spots in phase-referenced images in column 9. Asterisks in column 9 mean VLBI observation IDs of non-detection. When we found spatially different maser spots in an identical velocity channel, we labeled them with different spot IDs. For example, since there are two different maser spots at the \(V_{\rm LSR}=39.15\) km s\({}^{-1}\), there are IDs of 3 and 4 indicating the same LSR velocity in table 4.
In figure 3, we present examples of maser spot images used in this study. From left to right, the maser spot with \(V_{\rm LSR}\) of 39.15 km s\({}^{-1}\) (identified as ID3 in table 4) detected on (a) 16 April 2018, (b) 1 November 2018 and (c) 12 March 2019 are presented, respectively. For the spot in the map (a), formal fitted values of the peak position uncertainty are 13 \(\mu\)as and 22 \(\mu\)as in RA and DEC, respectively. For the other two maps (b) and (c), we see modest structures of the maser spot. We carefully examined the maser structure, its time variation, and continuity, and concluded that the southern components in the maps (b) and (c) are identical in our analysis. In the model fitting for this case, we limited the fitting area to derive positions of the appropriate maser components. Formal fitted values of the position uncertainty in the maps (b) and (c) are several times larger than that of the map (a). In the least-squares analysis, we regarded post-fit residuals of 0.05 mas in RA and 0.09 mas in DEC as representative errors of the maser positions across all epochs. Consequently, twelve maser spots with IDs 2 to 10, 16, 23, and 24 were selected for determination of a common parallax. They were detected in more than three continuous epochs of observations. Among them, maser spots with IDs of 2 and 10 were detected for longer than \(\thicksim\)1 yr. The least-squares fitting gives a parallax of NSV17351 as 0.247\(\pm\)0.010 mas. There are some factors that contribute to the parallax uncertainty. For example, uncompensated atmospheric delay differences between the reference source and the maser would be common to all spots. Structural variation of the reference source would also be common to all spots. Therefore, in estimating the accuracy of the parallax, we adopt a more conservative estimate. By multiplying the initial parallax error of 0.01 mas by the square-root of the number of maser spots used, we obtained 0.035 (\(=0.010\times\sqrt{12}\)) mas as a true accuracy. As a result, we adopted the parallax of \(0.247\pm 0.035\) mas for NSV17351, which corresponds to a distance of \(4.05\pm 0.59\) kpc. Figure 4 shows the offsets after removing the proper motions, together with the fitted parallax, along the RA axis (top) and DEC axis (bottom), respectively. Observed data are indicated as filled circles with their colors representing the LSR velocities of each maser spot. Error bars are 0.05 mas and 0.09 mas in RA and DEC, respectively. Solid curves are the best fit models of the parallax.
In a very recent study by Reid (2022), an imaging method with "super-resolution" was proposed. As a validation of our parallax measurement, we also performed a parallax fitting using this method. We used a round CLEAN restoring beam with 0.6 mas diameter for a maser spot showing modest structure. Using this method, a parallax was estimated to be 0.248 \(\pm\) 0.035 mas, showing excellent agreement with our measurement of 0.247 \(\pm\) 0.035 mas. In this paper, we adopt the latter
value as the parallax of NSV17351.
We also derived the angular proper motions of the maser spots. In the phase-referenced images, maser spots show the combined motion of the parallax and a linear proper motion. In the fitting above, we solved for the common parallax and the linear proper motions of the maser spots simultaneously. We successfully derived proper motions of 15 maser spots (IDs 1 to 10, 16, 23, 24, 26, 27). The proper motions along the RA and DEC axes are presented in table 4 as \(\mu_{\alpha}\)cos\(\delta\) and \(\mu_{\delta}\) in units of mas yr\({}^{-1}\), respectively. When a maser spot was detected only once, or its identification was difficult, we could not solve for its proper motion, and no value is listed. By averaging the proper motions of all 15 solved maser spots, we obtained (\(\mu_{\alpha}\) cos \(\delta\),\(\mu_{\delta}\))\({}^{\rm avg}\) = (\(-1.19\,\pm\,0.11\), \(1.30\,\pm\,0.19\)) mas yr\({}^{-1}\), and we adopt this as the systemic proper motion of NSV17351. Using this motion, we examine the circumstellar motions of the maser spots in the next section.
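A minimal sketch of the averaging step is given below, assuming a simple unweighted mean with the standard error of the mean as its uncertainty (the paper does not state the exact error estimator, so that choice is ours); the example arrays are made up, and the real spot motions are listed in table 4.

```python
import numpy as np

def systemic_motion(mu_ra_cosdec, mu_dec):
    """Unweighted mean spot motion and its standard error, in mas/yr."""
    mu_ra_cosdec = np.asarray(mu_ra_cosdec)
    mu_dec = np.asarray(mu_dec)
    n = len(mu_ra_cosdec)
    mean = (mu_ra_cosdec.mean(), mu_dec.mean())
    err = (mu_ra_cosdec.std(ddof=1) / np.sqrt(n),
           mu_dec.std(ddof=1) / np.sqrt(n))
    return mean, err

# Illustrative (made-up) motions of five spots, not the values in table 4:
print(systemic_motion([-1.30, -1.10, -1.22, -1.25, -1.08],
                      [1.20, 1.45, 1.28, 1.33, 1.24]))
```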
## 4 Discussion
### Circumstellar distribution and kinematics of the maser spots
We discuss the circumstellar kinematics of the H\({}_{2}\)O maser spots of NSV17351. In figure 5, we present the distribution of all the maser spots detected in our VLBI observations. The horizontal and vertical axes of the map are position offsets from the phase tracking center of NSV17351, which is given as a nominal coordinate in table 1. Position offsets of the maser spots (\(\Delta\alpha\) cos\(\delta\), \(\Delta\delta\)) are given in table 4. The maser spots are distributed over a field of about 20 mas \(\times\) 30 mas, which corresponds to \(\thicksim\)80 au \(\times\) \(\thicksim\)120
Figure 3: Images of maser spots at \(V_{\rm LSR}\) of 39.15 km s\({}^{-1}\) detected on (a) 16 April 2018 (MJD 58224), (b) 1 November 2018 (MJD 58423), and (c) 12 March 2019 (MJD 58554). The synthesized beams are presented at the bottom left of each map. In maps (b) and (c), the southern component was used in the parallax fitting.
au at a source distance of 4.05 kpc. From the maser distribution, we can estimate the stellar position in the map by simply averaging the positions of all the maser spots. We obtained (115.49\(\pm\)6.35, \(-\)105.34\(\pm\)7.18) mas, where the position uncertainties are the standard deviations of all the maser spots with respect to the estimated stellar position. The estimated stellar position is indicated by a cross symbol in figure 5, the lengths of whose arms represent the position errors along the RA and DEC axes, respectively. Based on our astrometric results, this position is presented as a revised coordinate of the source in table 1.
The linear proper motions (\(\mu_{\alpha}\)cos\(\delta\), \(\mu_{\delta}\)) presented in the previous section, derived from the phase-referencing VLBI observations, are a combination of the proper motion of the stellar system and the internal motions of individual maser spots in a rest frame fixed to the stellar system. Therefore, to deduce their internal motions, we have to subtract the systemic motion of NSV17351 from the obtained proper motion of each maser spot. We have already derived the systemic motion of NSV17351 as (\(\mu_{\alpha}\,\cos\,\delta,\mu_{\delta}\))\({}^{\rm avg}\) = (\(-\)1.19 \(\pm\) 0.11, 1.30 \(\pm\) 0.19) mas yr\({}^{-1}\) in the previous section. If the maser positions and motions were distributed uniformly and isotropically about the star, the above systemic motion could be considered reliable. However, it is unlikely that this condition applies in this case, and the above systemic motion can be considered to include systematic errors. A realistic uncertainty in the internal motion would be, say, 0.5 mas yr\({}^{-1}\) (\(\thicksim\)10 km s\({}^{-1}\)). We therefore consider adding a constant vector of about this magnitude, directed toward the southwest, to all measured maser spot motions. In the calculations, \(-\)0.35 mas yr\({}^{-1}\) (\(=-\)0.5/\(\sqrt{2}\) mas yr\({}^{-1}\))
Figure 4: The annual parallax of NSV17351 along RA (top) and DEC (bottom), respectively. Results from ten observations are shown with filled circles. Colors indicate LSR velocity \(V_{\rm LSR}\) of each maser spot. Solid curves are the best fit models obtained from the parallax fitting.
is added in RA and DEC. This consideration makes the distribution more consistent with an expansion about a central star. Taking the internal motions of the maser spots into account, the position of the central star would be slightly north of the position indicated by the cross symbol.
In figure 5, an elongated distribution of the maser spots along the northwest to southeast direction seems to be predominant. Regarding the LSR velocity (\(V_{\rm LSR}\)), we find that blue- and red-shifted maser spots are located at the northwest and southeast of the map, respectively. This indicates that these spots are likely tracing a weak, possibly asymmetric outward motion from the map center. Here we note that the OH masers observed at the position of the central star are thought to come from the small nearside and farside parts of the envelope near the line of sight that intersects the central star (Szymczak, 1988). This suggests that OH masers are pumped by infrared background radiation to which stellar photons are converted by a heavy dust shell (Orosz et al., 2017). Among all the H\({}_{2}\)O maser spots in our observations, the positions of the most blue-shifted maser spots are close to the estimated stellar position. Although the maser types are different, the distribution of the H\({}_{2}\)O masers obtained in our observations is roughly similar to the distribution characteristics of the OH masers.
We also consider the three-dimensional velocities of the maser spots. In section 3.1, we determined the stellar LSR velocity of NSV17351 to be \(50.1\pm 1.9\) km s\({}^{-1}\). The residual of each maser spot's LSR velocity (\(V_{\rm LSR}\)) from the stellar LSR velocity of NSV17351 corresponds to its velocity component relative to the central star along the line of sight. Using the three orthogonal velocity components of each maser spot, we can deduce its three-dimensional expansion velocity. The average of the expansion velocities is 15.7 km s\({}^{-1}\) with a standard deviation of 3.3 km s\({}^{-1}\).
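The conversion from the measured quantities to a three-dimensional speed can be sketched as below; the factor 4.74 km s\({}^{-1}\) per (mas yr\({}^{-1}\) kpc) is the standard proper-motion-to-velocity conversion, and the numbers in the example call are purely illustrative rather than values from table 4.

```python
import numpy as np

def expansion_velocity(mu_ra_int, mu_dec_int, v_lsr,
                       v_star=50.1, distance_kpc=4.05):
    """3-D outflow speed (km/s) of one maser spot.

    mu_ra_int, mu_dec_int : internal proper motion (mas/yr), i.e. the
        measured motion minus the systemic motion of NSV17351.
    v_lsr : LSR velocity of the spot (km/s).
    """
    k = 4.74  # km/s per (mas/yr) at 1 kpc
    v_sky = k * distance_kpc * np.hypot(mu_ra_int, mu_dec_int)
    v_los = v_lsr - v_star
    return np.hypot(v_sky, v_los)

# Illustrative numbers only:
print(expansion_velocity(mu_ra_int=0.30, mu_dec_int=-0.40, v_lsr=39.15))
```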
Figure 5: Spatial distribution and internal motions of H\({}_{2}\)O maser spots of NSV17351. Filled circles indicate maser spots and arrows indicate their internal motions. Colors indicate LSR velocities shown in the color index at the right side of the map. The arrow in the top-left corner shows a proper motion of 0.5 mas yr\({}^{-1}\), corresponding to a velocity of 9.60 km s\({}^{-1}\) at 4.05 kpc. A cross symbol shows the estimated stellar position. Lengths of the cross lines indicate its errors.
### Acceleration of the H\({}_{2}\)O Maser
From the single-dish observations at the VERA Iriki station, we obtained 41 spectra of the H\({}_{2}\)O maser emission of NSV17351. A striking feature of the H\({}_{2}\)O maser spectra of this source is the presence of blue- and red-shifted bright components with a velocity separation of \(\thicksim\)20 km s\({}^{-1}\). We can also find H\({}_{2}\)O maser spectra in three previous studies: Blitz & Lada (1979), Takaba et al. (1994), and Kim et al. (2010) reported H\({}_{2}\)O maser spectra observed on 28 January 1978, 10 May 1991, and 6 June 2009, respectively.
The existence of the blue- and red-shifted components seen in our observations is consistent with those reported in the previous three works, while the peak velocities appear to have been slowly shifting. To quantify this velocity shift, we defined \(\Delta V\) as the velocity separation between the two peaks. To estimate the errors of \(\Delta V\), we considered the full width at half maximum (FWHM) of each component. Blitz & Lada (1979) and Kim et al. (2010) explicitly gave the velocities of the two peaks, so we used these values to derive \(\Delta V\). In the case of Takaba et al. (1994), we deduced \(\Delta V\) from figure 1 of their study.
We summarize the LSR velocities of the two peaks and the velocity separation \(\Delta V\) in table 5. Observation dates are presented with MJD in column 1. In columns 2 and 4, the peak velocities of the blue- and red-shifted components are presented, respectively. In columns 3 and 5, the full width at half maximum (FWHM) values of each component are given. The velocity separation \(\Delta V\) is presented in column 6, with its error obtained as the average of the two FWHMs. Using all the \(\Delta V\) values, we present their time variation in figure 6. From this figure, we infer that \(\Delta V\) has increased over the last 40 years. Fitting a simple linear function to this long-term increase gives \(d\Delta V/dt=0.33\pm 0.06\) km s\({}^{-1}\) yr\({}^{-1}\); the fitted model is shown as the solid line in figure 6. Since the lifetime of an individual H\({}_{2}\)O maser cloud is of the order of \(\thicksim\)3 years (Engels 2002), it is difficult to believe that we have been observing the same H\({}_{2}\)O gas clouds during the last 40 years. A more natural explanation is therefore that the velocity of the region where the H\({}_{2}\)O masers are excited has been increasing during the last 40 years. Dividing \(d\Delta V/dt\) by two, we obtain an acceleration of the outflow velocity of 0.17\(\pm\)0.03 km s\({}^{-1}\) yr\({}^{-1}\).
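The fit itself is a simple least-squares problem; the sketch below shows one way to obtain \(d\Delta V/dt\) with NumPy. The (MJD, \(\Delta V\)) pairs here are rough stand-ins for the entries of table 5, so the fitted slope will only loosely resemble the quoted 0.33 km s\({}^{-1}\) yr\({}^{-1}\).

```python
import numpy as np

# Stand-in (MJD, delta_V) pairs; the actual measurements are in table 5.
mjd     = np.array([43536.0, 48387.0, 55000.0, 58230.0, 58900.0])
delta_v = np.array([7.0,     10.0,    14.0,    19.0,    20.5])   # km/s

years = (mjd - mjd[0]) / 365.25
slope, intercept = np.polyfit(years, delta_v, 1)   # d(delta_V)/dt, km/s/yr
acceleration = slope / 2.0                         # one-sided outflow term

print(f"d(dV)/dt = {slope:.2f} km/s/yr, acceleration = {acceleration:.2f}")
```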
Next, we focus on the comparison of the spectral profiles of the H\({}_{2}\)O and OH masers. In the three H\({}_{2}\)O maser spectral profiles reported in the previous studies by Blitz & Lada (1979), Takaba et al. (1994), and Kim et al. (2010), the two components show relatively gentle decreases (shoulders) at the outer sides of each peak (figure 7). On the other hand, in the spectral profiles obtained from our observations (top of figure 7), the two peaks show sharp cut-offs at their outer sides. In particular, the sharpness of the cut-off is remarkable at the outer side of the blue-shifted
peak at 38 km s\({}^{-1}\) in the spectrum of 22 April 2018 (top of figure 7). Profiles of OH maser spectra are characterized by double-peaked components with sharp cut-offs due to the terminal velocity of the circumstellar envelope containing OH molecules. The profile of the recently obtained H\({}_{2}\)O maser spectrum (top of figure 7) is quite similar to those formerly observed in the 1612 MHz OH maser, which show cut-offs at the terminal velocities. In figure 8, we superposed the H\({}_{2}\)O maser spectrum of 22 April 2018 (solid line) and the 1612 MHz OH maser spectrum of February 1978 (dotted line), and found that the profiles of the two spectra resemble each other. In particular, the cut-off velocities on the blue-shifted side are exactly the same (38 km s\({}^{-1}\) to 40 km s\({}^{-1}\)). On the red-shifted side, the velocity of the OH maser is larger than that of the H\({}_{2}\)O maser. OH molecules are thought to be supplied by photodissociation of H\({}_{2}\)O molecules carried to the outer part of the circumstellar envelope. This comparison indicates asymmetric outflows of the H\({}_{2}\)O and OH masers in the red-shifted components.
In addition to the similarity in the shapes of the H\({}_{2}\)O and OH maser spectra, we also note the similarity in the locations of the maser spots. In figure 5, the most blue-shifted H\({}_{2}\)O maser spots are seen close to the estimated stellar position. In the case of OH masers, it is known that the most blue- and red-shifted maser spots are seen at the same position in the sky plane. For example, Orosz et al. (2017) revealed that the blue- and red-shifted OH masers coincide with the position where the central star is assumed to exist. We also refer to a study by Rosen et al. (1978), who discuss the appearance of maser emission from the gas surrounding the star by classifying it into the limb region and the far/near-side regions (along the line of sight) of the central star. They reported that in a region of rapid acceleration due to light pressure on newly formed dust grains, farside and nearside emission and limb emission are equally likely. Hence, as presumed from figure 5, we can interpret the most blue-shifted maser spots as being superposed on the position of the central star of NSV17351 along the line of sight; they can possibly be explained as emission excited along the line of sight to the central star. We note that the most blue-shifted maser spots also show in-plane motions.
Engels (2002) noted that the H\({}_{2}\)O maser shell maintains favorable conditions for maser emission over a longer time, despite a limited lifetime for individual maser clouds of the order of \(\thicksim\)3 years. He suggested that H\({}_{2}\)O maser clouds do not survive in the outflow but are continuously formed and destroyed. We interpret the result for NSV17351 as indicating that H\({}_{2}\)O molecules were carried to the outermost region and that the H\({}_{2}\)O gas has accelerated to the terminal velocity. Since a vast amount of H\({}_{2}\)O gas has been transported to the outermost regions of the circumstellar envelope, we predict that the H\({}_{2}\)O gas will soon photodissociate into OH and H, and the OH maser will then brighten. The 1612 MHz OH maser line was observed with an intensity of \(\thicksim\)400 mJy in February 1978 (Le Squeren
et al. 1979). If the OH maser emission is detected to be stronger than that observed in 1978 (Le Squeren et al. 1979), this would indicate that NSV17351 has been transporting H\({}_{2}\)O gas to its outer region during the last 40 years. It is therefore important to carefully monitor the OH masers of NSV17351 to study its material flow and confirm this hypothesis.
Figure 6: Time variation of the velocity separation between the red- and blue-shifted maser components (\(\Delta V\)) from 1978 to 2019 (MJD 42000 to MJD 60000). The solid line indicates a fitted model in which \(\Delta V\) increases at a rate of 0.33\(\pm\)0.06 km s\({}^{-1}\) yr\({}^{-1}\). The time variation of \(\Delta V\) between MJD 57200 and 59300 can be seen in the magnified inset at top left.
Figure 7: Four representative spectra of the H\({}_{2}\)O maser obtained in 1978, 1994, 2009, and 2018, from bottom to top. The flux density scale of each spectrum is presented on the right side of the figure. The time variation of the velocity profile can be seen. In the latest spectrum, we can see the largest velocity separation and sharp cut-offs at the outer sides of each peak.
### Astrometric results from VERA and Gaia
In the Gaia Data Release 3 (DR3) catalog, the parallax of NSV17351 is 0.088\(\pm\)0.147 mas (Gaia Collaboration et al. 2022), corresponding to a relative error of 170%, while our VLBI observations give a parallax of 0.247\(\pm\)0.035 mas with a relative error of 14%. The two parallaxes are barely in agreement within the error margins.
We can also compare the proper motions of NSV17351. The proper motions from VERA and DR3 in RA and DEC are \(\langle\mu_{\alpha}\,\cos\delta,\mu_{\delta}\rangle^{\rm avg}=(-1.19\,\pm\,0.11,\,1.30\,\pm\,0.19)\) mas yr\({}^{-1}\) and \(\langle\mu_{\alpha}\,\cos\,\delta,\mu_{\delta}\rangle^{\rm DR3}=(-0.03\pm 0.16,\,1.88\pm 0.19)\) mas yr\({}^{-1}\), respectively. The residuals of the Gaia DR3 proper motions from the VERA proper motions are \((\Delta\mu_{\alpha}\,\cos\delta,\,\Delta\mu_{\delta})=(1.16\,\pm\,0.19,\,0.58\,\pm\,0.27)\) mas yr\({}^{-1}\), which correspond to linear velocities of \((22.3\pm 3.6,\,11.1\pm 5.2)\) km s\({}^{-1}\). In the study by Nakagawa et al. (2016), the difference between the two measurements from VERA and HIPPARCOS was interpreted as internal motion of the maser spots. If we apply the same interpretation to NSV17351, the residual motion can be considered as internal motion of the maser spots with respect to the central star. However, when we assume a general outflow velocity for OH/IR stars, or the three-dimensional outflow velocity of \(15.7\pm 3.3\) km s\({}^{-1}\) obtained in this study (section 4.1), the velocity differences between VERA and Gaia of \((48.6\pm 7.9,\,-32.3\pm 8.5)\) km s\({}^{-1}\) and \((22.3\pm 3.6,\,11.1\pm 5.2)\) km s\({}^{-1}\) are too large to be regarded only as internal motions of the maser spots. It should also be noted that there is a systematic uncertainty of \(\pm\,10\) km s\({}^{-1}\) in attributing the average of the spot motions to that of the stellar system. This effect should also be considered in the comparison of the proper motions.
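For reference, the conversion of the VERA\(-\)Gaia proper-motion residual into a linear velocity at the VERA distance uses the standard \(4.74\,\mu d\) relation; the short sketch below reproduces the (22.3, 11.1) km s\({}^{-1}\) values quoted above.

```python
k = 4.74                       # km/s per (mas/yr) at 1 kpc
d_kpc = 4.05                   # VERA distance to NSV17351
dmu_ra, dmu_dec = 1.16, 0.58   # VERA - Gaia DR3 residuals (mas/yr)

v_ra  = k * d_kpc * dmu_ra     # ~22.3 km/s
v_dec = k * d_kpc * dmu_dec    # ~11.1 km/s
print(v_ra, v_dec)
```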
Figure 8: Superpositions of H\({}_{2}\)O maser (solid line) and OH maser (dotted line) of NSV17351 obtained in 2018 and 1978, respectively. Flux density scales in unit of Jy for H\({}_{2}\)O and OH masers are presented in left and right vertical axes of the figure. Cut off velocity of the blue-shifted side seems to be exactly same in both spectra.
We examine the three-dimensional position and motion of NSV17351 in the Galaxy. We refer to Reid et al. (2019) for the transformation of the results from our VLBI observations into a position and motion in Galactic Cartesian coordinates. We adopt the value of \(50.1\pm 1.9\) km s\({}^{-1}\) determined in section 3.1 as the stellar LSR velocity of the source. In the following discussion, we assume Galactic constants of \(R_{0}=8.15\) kpc and \(\Theta_{0}=236\) km s\({}^{-1}\), a solar motion of (\(U_{\sun}\), \(V_{\sun}\), \(W_{\sun}\)) = (10.6, 10.7, 7.6) km s\({}^{-1}\) (Reid et al., 2019), and a flat Galactic rotation curve (i.e., \(\Theta(R)=\Theta_{0}\)).
We derived the three-dimensional position of NSV17351 to be (\(X\), \(Y\), \(Z\)) = (\(-2.83\pm 0.12\), \(11.05\pm 0.12\), \(-0.09\pm 0.01\)) kpc, where the origin of the coordinate system corresponds to the Galactic Center. From the value of \(Z=-0.09\pm 0.01\) kpc, we see that NSV17351 is embedded in the Galactic thin disk. Since there is an offset between the physical plane and the Galactic latitude \(b=0\) deg plane due to the Galactic warp and the tilted plane (Blaauw et al., 1960; Binney, 1992), we compare the \(Z\) value of NSV17351 with the \(Z\) range of nearby star-forming regions. We confirm that the \(Z\) value of NSV17351 is included in the \(Z\) range of the SFRs (i.e., \(-0.12<Z<0.11\) kpc). Figure 9 shows an enlarged view of the Milky Way Galaxy as viewed from the North Galactic pole, reproduced from Figure 2 of Reid et al. (2019). Three solid lines indicate the centers of the spiral arms, and grey regions indicate the widths of the Galactic arms enclosing 90% of the sources (Reid et al., 2019). From top to bottom, the Outer, Perseus, and Local spiral arms are shown. The filled circle indicates the position of NSV17351 with its error. Open circles indicate maser sources with Galactocentric distances of \(>7\) kpc reported in Reid et al. (2019). We also derived the three-dimensional noncircular motion of the source, i.e., its residual motion from the flat Galactic rotation, to be (\(U,V,W\)) = (\(-4\pm 3\), \(-5\pm 5\), \(-3\pm 3\)) km s\({}^{-1}\), where \(U\), \(V\), and \(W\) are directed toward the Galactic center, in the direction of Galactic rotation, and toward the North Galactic pole, respectively. The errors are estimated by considering the errors of the parallax, the proper motion, and the systemic velocity. Details of the procedure for the error estimation are given in an appendix of Sakai et al. (2020). The obtained (\(U\), \(V\), \(W\)) velocities are comparable to those of thin-disk sources rather than thick-disk sources, which include a large number of evolved stars.
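A hedged sketch of the position part of this transformation using astropy is given below; the RA/Dec values are placeholders (the actual coordinates are listed in table 1), only the position is transformed (the velocity part additionally requires the proper motion and an LSR-to-heliocentric velocity conversion), and astropy's Galactocentric axis orientation differs from the (X, Y, Z) convention used in this paper, so a re-projection would be needed to compare numbers directly.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord, Galactocentric

# Placeholder coordinates -- substitute the RA/Dec of NSV17351 from table 1.
source = SkyCoord(ra=110.0 * u.deg, dec=-15.0 * u.deg, distance=4.05 * u.kpc)

# Galactocentric frame with R0 = 8.15 kpc, as adopted in the text.
frame = Galactocentric(galcen_distance=8.15 * u.kpc)
gc = source.transform_to(frame)

print(gc.x.to(u.kpc), gc.y.to(u.kpc), gc.z.to(u.kpc))
```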
In figure 9, we find that NSV17351 is located slightly outside the Perseus arm. The distance error suggests that NSV17351 may belong to the Perseus arm, but it is more likely to be in the interarm region. Indeed, if we consider the \(l\)-\(V_{\rm LSR}\) plot (i.e., position-velocity diagram) of HI in Figure 3 of Reid et al. (2019), we find that the source is located in the interarm region between the Outer arm and the Perseus arm. The location of NSV17351 in figure 9 can be understood by considering the age of the source. It is understood that the pulsation period \(P\) increases with increasing
initial mass. Mira variable stars with \(\log P\) of \(\thicksim\)3.0 have initial masses of 3 to 4 \(M_{\odot}\) (Feast 2009). Assuming this mass range, we obtained \(\tau_{\rm MS}\) of 1.6\(\times\)10\({}^{8}\) to 3.5\(\times\)10\({}^{8}\) years following a consideration in Sparke & Gallagher (2000), where \(\tau_{\rm MS}\) is the main-sequence lifetime. This indicates that the age of NSV17351 is \(\thicksim\)10\({}^{8}\) years, which is two orders of magnitude larger than the typical age of high-mass star-forming regions associated with spiral arms. In other words, we are observing NSV17351 in a state where it has left the arm in which it was born but is not yet sufficiently dynamically relaxed. Note that the spiral-arm assignment of NSV17351 should be revisited in the future because the Perseus and Outer arms are not accurately located in the Galactic 3rd quadrant due to the limited number of VLBI astrometric results.
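As a rough order-of-magnitude check (not from the paper), the main-sequence lifetime for this mass range can be estimated with a simple power-law scaling; the relation \(\tau_{\rm MS}\approx 10\) Gyr \((M/M_{\odot})^{-3}\) used below is an assumption chosen only because it roughly reproduces the quoted range, and the relation actually adopted from Sparke & Gallagher (2000) may differ in exponent and normalization.

```python
# Assumed scaling: tau_MS ~ 10 Gyr * (M / Msun)**-3  (illustrative only)
for mass_msun in (3.0, 4.0):
    tau_yr = 1.0e10 * mass_msun ** -3
    print(f"M = {mass_msun} Msun -> tau_MS ~ {tau_yr:.1e} yr")
# -> roughly 3.7e8 yr and 1.6e8 yr, i.e. an age of order 1e8 years.
```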
In the last decade, VLBI astrometry has measured more than two hundred parallaxes of star-forming regions (SFRs) (e.g., Burns et al. 2016; Motogi et al. 2016; Reid et al. 2019; VERA Collaboration et al. 2020) and evolved stars (e.g., Sudou et al. 2019; Kamezaki et al. 2016; Nakagawa et al. 2016; Nakagawa et al. 2018; Nakagawa et al. 2019); however, the ages of the sources fall mainly into two groups: \(\thicksim\)10\({}^{6}\) years for SFRs and \(\thicksim\)10\({}^{9}\) years for evolved stars. From this aspect, the extreme OH/IR star candidate NSV17351, with an estimated age of \(\thicksim\)10\({}^{8}\) years, can be regarded as a valuable source that fills the time-scale gap between 10\({}^{6}\) years and 10\({}^{9}\) years. In recent studies of spiral arms in disk galaxies, there has been a long-standing question about how spiral arms are created and maintained. The quasi-stationary density wave theory (e.g., Lin, & Shu 1964) and the dynamic spiral theory (e.g., Sellwood, & Carlberg 1984; Baba 2015) are two major theories under discussion. Spiral arms do not show rigidly rotating patterns but rather differentially rotating dynamic patterns. The amplitudes, pitch angles, and pattern speeds of spiral arms are not constant, but change within a time span of 1-2 rotational periods at each radius (Baba 2015). In the Milky Way Galaxy, the rotational period at the location of the Sun corresponds to a time scale of \(\thicksim\)10\({}^{8}\) years. For a better understanding, it is important to gather samples representing various ages, as suggested by previous papers (e.g., Dobbs & Pringle 2010; Miyachi et al. 2019). In this context, extreme OH/IR stars with ages of \(\thicksim\)10\({}^{8}\) years could be suitable samples, and astrometric VLBI is a powerful and promising method to determine their three-dimensional positions and kinematics.
## 5 Summary
We presented the first astrometric results towards the extreme OH/IR star candidate NSV17351 using the VERA VLBI array at 22 GHz. From the single-dish observations, we found that NSV17351 has an extremely long period of 1122\(\pm\)24 days based on the variation of the H\({}_{2}\)O maser emission. From our VLBI observations, we derived an annual parallax of 0.247\(\pm\)0.035 mas, which corresponds
to a distance of 4.05\(\pm\)0.59 kpc. We revealed the distribution and kinematics of the H\({}_{2}\)O maser spots of NSV17351. An in-plane distribution of 20 mas \(\times\) 30 mas (\(\sim\)80 au \(\times\) \(\sim\)120 au at the source distance) and a weak asymmetric outflow were confirmed. By averaging the proper motions of the maser spots, the systemic proper motion of NSV17351 was obtained to be \((\mu_{\alpha}\cos\delta,\mu_{\delta})^{\rm avg}=(-1.19\pm 0.11,\,1.30\pm 0.19)\) mas yr\({}^{-1}\). NSV17351 shows a characteristic double-peaked H\({}_{2}\)O maser spectrum. We traced the evolution of the spectra over 40 years and estimated the acceleration of circumstellar matter to be \(0.17\pm 0.03\) km s\({}^{-1}\) yr\({}^{-1}\).
We derived the three-dimensional position of NSV17351 in the Milky Way Galaxy. The source is located in the interarm region between the Outer and Perseus arms. The mass of NSV17351, inferred from its pulsation period, is 3 to 4 \(M_{\odot}\), and its age is estimated to be \(\sim\)10\({}^{8}\) years. This is consistent with a situation in which the star is located in the interarm region, away from the spiral arm where it was born.
## Acknowledgement
We acknowledge the members of the VERA project for their kind collaboration and encouragement. Data analysis was in part carried out on the common-use data analysis computer system
Figure 9: Enlarged face-on view of the Milky Way reproduced from a study by Reid et al. (2019). The Galactic center is at (0, 0) kpc and the Sun is indicated with the symbol (\(\sun\)) at (0, 8.15) kpc. The filled circle with an error bar indicates the position of NSV17351. Open circles indicate maser sources which have Galactocentric distances of \(>7\) kpc in (Reid et al., 2019). Three spiral arms are presented. Solid lines indicate centers of the spiral arms. Grey regions indicate widths of the Galactic arms in which 90% of sources are enclosed (Reid et al., 2019).
at the Astronomy Data Center, ADC, of the National Astronomical Observatory of Japan. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This research was supported by the leadership program of NAOJ in FY2020.
|
2309.13748 | Does the "most sinfully decadent cake ever" taste good? Answering Yes/No
Questions from Figurative Contexts | Figurative language is commonplace in natural language, and while making
communication memorable and creative, can be difficult to understand. In this
work, we investigate the robustness of Question Answering (QA) models on
figurative text. Yes/no questions, in particular, are a useful probe of
figurative language understanding capabilities of large language models. We
propose FigurativeQA, a set of 1000 yes/no questions with figurative and
non-figurative contexts, extracted from the domains of restaurant and product
reviews. We show that state-of-the-art BERT-based QA models exhibit an average
performance drop of up to 15\% points when answering questions from figurative
contexts, as compared to non-figurative ones. While models like GPT-3 and
ChatGPT are better at handling figurative texts, we show that further
performance gains can be achieved by automatically simplifying the figurative
contexts into their non-figurative (literal) counterparts. We find that the
best overall model is ChatGPT with chain-of-thought prompting to generate
non-figurative contexts. Our work provides a promising direction for building
more robust QA models with figurative language understanding capabilities. | Geetanjali Rakshit, Jeffrey Flanigan | 2023-09-24T20:38:48Z | http://arxiv.org/abs/2309.13748v1 | Does the "most sinfully decadent cake ever" taste good? Answering Yes/No Questions from Figurative Contexts
###### Abstract
Figurative language is commonplace in natural language, and while making communication memorable and creative, can be difficult to understand. In this work, we investigate the robustness of Question Answering (QA) models on figurative text. Yes/no questions, in particular, are a useful probe of figurative language understanding capabilities of large language models. We propose FigurativeQA, a set of 1000 yes/no questions with figurative and non-figurative contexts, extracted from the domains of restaurant and product reviews. We show that state-of-the-art BERT-based QA models exhibit an average performance drop of up to 15% points when answering questions from figurative contexts, as compared to non-figurative ones. While models like GPT-3 and ChatGPT are better at handling figurative texts, we show that further performance gains can be achieved by automatically simplifying the figurative contexts into their non-figurative (literal) counterparts. We find that the best overall model is ChatGPT with chain-of-thought prompting to generate non-figurative contexts. Our work provides a promising direction for building more robust QA models with figurative language understanding capabilities.
## 1 Introduction
_"Questions are never indiscreet. Answers sometimes are."_
- _Oscar Wilde_
One of the many interesting phenomena occurring in natural language is the presence of figurative language, which, while making communication creative and memorable Danescu-Niculescu-Mizil et al. (2012), may sometimes also prove difficult to understand Zayed et al. (2020). This includes (but is not limited to) linguistic constructs such as idioms, similes, metaphors, rhetorical questions, hyperbole, personification, sarcasm, and irony. It may be particularly difficult for non-native speakers to interpret figurative expressions, and phenomena like sarcasm are often missed altogether Joshi et al. (2016). Given that figurativeness is commonplace in everyday communication Lakoff and Johnson (2008), progress in the field of Natural Language Understanding (NLU) would be incomplete without figurativeness understanding. Consequently, figurative text has been studied in various downstream NLP tasks such as machine translation Dankers et al. (2022), textual entailment Agerri (2008), Chakrabarty et al. (2021), Liu et al. (2022) and dialog models Jhamtani et al. (2021), inter-alia. However, to the best of our knowledge, there has not been a systematic study of figurative language understanding capabilities of question answering models.
We focus on yes/no questions for our question answering (QA) task. Yes/no questions are a good test of figurative language understanding because correctly answering them requires the reader to correctly understand the figurative language. Extractive QA, on the other hand, is not a good test for figurative language understanding because it does not require actually understanding the figurative language.
Figure 1: To answer the question “Did the cake taste good?” based on the context, a Question Answering (QA) model needs to be able to correctly infer the meaning of the figurative text “the most sinfully decadent ever”
For example, if we were to pose the question "How did the cake taste?" from the context "The cake was described as the most sinfully decadent ever.", an answer such as "sinfully decadent" from an extractive QA model doesn't really tell us that the model understands the meaning of the figurative text "sinfully decadent". It simply copies the figurative text and it's up to the reader to infer what the answer means.
However, in order to answer a yes/no question such as "Did the cake taste good?", a QA model needs to correctly infer that "sinfully decadent" means _rich and delicious_, or in other words, _really good_, and therefore the answer would be _yes_.
Despite the lack of attention to figurative language in QA tasks, figurative language is extremely common in some important domains, such as online reviews. We randomly sampled 100 reviews from the train split of the Yelp Challenge Dataset1, and observed that at least 60% of these reviews contain figurative expressions. Users often write strongly-worded reviews to express highly positive or highly negative opinions about products or services (Mohammad et al., 2016), and these tend to contain figurative language.
Footnote 1: We use the version in Huggingface Datasets ([https://huggingface.co/datasets/yelp_review_full](https://huggingface.co/datasets/yelp_review_full)), from the paper (Zhang et al., 2015)
We show that it can be challenging for existing QA models to draw inferences from figurative text. To demonstrate this, we present a new dataset, _FigurativeQA_, consisting of 1000 yes/no questions and accompanying figurative and non-figurative contexts constructed from Amazon product reviews (Niculae and Danescu-Niculescu-Mizil, 2014) and Yelp restaurant reviews (Oraby et al., 2017). In Figure 2, we show examples from FigurativeQA in two domains, Amazon product reviews and Yelp restaurant reviews, for both figurative and non-figurative contexts. Each context is accompanied by a question-answer pair, and in the case of figurative contexts, by manually constructed and automatically obtained non-figurative versions of the context.
We develop a variety of methods for improving QA performance for figurative text. We prompt powerful LLMs like GPT-3 and ChatGPT to convert figurative contexts to literal as an intermediate step to question answering. We then provide these literal contexts as input to state-of-the-art QA models, resulting in considerable gains in performance. The best performance is achieved by the chain-of-thought prompting method from ChatGPT in a few-shot setting, where the model generates a simplified version of the context and then generates the yes/no answer. We also use these LLMs to generate domain-specific training data to fine-tune models specifically for this task.
The outline of the paper is as follows: after reviewing related work (SS2), we introduce our new QA dataset for figurative language, FigurativeQA, in (SS3). We report baseline QA performance on FigurativeQA and introduce a method for simplifying figurative language to non-figurative by prompting GPT-3 and ChatGPT, which we use to improve our baseline QA models (SS4, 5, 6). We report our experiments with chain-of-thought prompting in SS7. We prompt ChatGPT to generate in-domain training data for figurative question answering (SS8). We finally conclude in (SS10). The FigurativeQA dataset can be accessed at [https://github.com/geetanjali-rakshit/figurativeQA](https://github.com/geetanjali-rakshit/figurativeQA).
## 2 Related Work
Figurative language has been a difficult problem for many natural language processing (NLP) applications. A number of computational approaches have been proposed to study their occurrence in text (Veale et al., 2016; Qadir et al., 2016; Kordoni, 2018; Mao et al., 2018; Zhang et al., 2017; Troiano et al., 2018), including generation of figurative language (Chakrabarty et al., 2020; Zhou et al., 2021).
The idea of converting metaphors to their literal counterparts has been previously explored for machine translation by Mao et al. (2018), where metaphors in English text are first identified and then converted to a literal version by using word embeddings and WordNet, before doing machine translation into Chinese. In dialog systems, a similar approach was employed by Jhamtani et al. (2021), where idioms and metaphors in utterances are converted to literal versions using a dictionary lookup-based method. Our work is closest to Jhamtani et al. (2021), except that we explore the robustness of QA systems to figurative language in a machine comprehension setup, instead of dialog models, which, to the best of our knowledge, is a first. Our automatic approach to creating rephrased non-figurative versions of figurative text uses pre-trained language models, rather than rule-based methods, which have been shown to be error-prone (Jhamtani et al., 2021). In a concurrent work,
Chakrabarty et al. (2022) have also done prompting on GPT-3 to create their figurative NLI dataset, FLUTE, as well as obtain an explanation of the NLI labels in this dataset.
To our knowledge, there are no QA datasets specifically designed for figurative language understanding, but some existing QA datasets do contain figurative language. The FriendsQA dataset Yang and Choi (2019) is a dialog-based QA dataset constructed from dialogs from the TV series Friends. While it does contain metaphors and sarcasm, the focus of the dataset is not figurative language, and it is not ideal for testing figurative language understanding as it is unclear how much of the dataset is figurative. The dialog nature of the dataset further contributes to making it challenging and complicates studying the effect of figurativeness. Another dataset that requires figurative language understanding is the RiddleSense dataset Lin et al. (2021), which comprises riddles, but unlike ours, it is modeled as an open-domain QA task rather than a machine comprehension task. Parde and Nielsen (2018) show that questions about novel metaphors from literature are judged by humans to be deeper than non-metaphorical or non-conventional metaphors, but their focus is on generating deep questions rather than testing the robustness of QA models. Dankin et al. (2022) construct yes/no questions using templates to detect the presence of metaphors in a few-shot setting.
## 3 FigurativeQA Dataset
The contexts in FigurativeQA come from two sources: Amazon product reviews Niculae and Danescu-Niculescu-Mizil (2014), and Yelp restaurant reviews Oraby et al. (2017). We extract both figurative and non-figurative contexts from each source, and manually construct yes/no questions and answers on top of these contexts. Figure 2 shows examples from FigurativeQA. The data statistics for each source (Amazon and Yelp) and each split (figurative and non-figurative) are summarized in Table 1.
For the Amazon part of FigurativeQA, we use Niculae and Danescu-Niculescu-Mizil (2014)'s dataset of figurative comparisons. Of the 1260 comparisons in this dataset, we extract instances where all 3 annotators are in agreement about figurativeness (i.e., average figurativeness score of greater than 3). We then randomly pick 150 exam
Figure 2: Examples from the figurative and non-figurative splits of FigurativeQA, from Amazon product reviews and Yelp restaurant reviews. The figurative text fragments within the contexts are shown in bold and italics.
ples to form the set of figurative contexts. From the examples with a low average figurativeness score, we select 150 examples to form the set of non-figurative contexts.
For the Yelp part of the dataset, the contexts are sourced from Oraby et al. (2017)'s NLG dataset for the restaurant domain. Since highly positive or highly negative reviews are more likely to contain figurative language, we extract these first, and then, similar to Niculae and Danescu-Niculescu-Mizil (2014), use comparator expressions to get a set of reviews likely to be rich in figurative content. We then manually examine these reviews to annotate 350 examples each of figurative and non-figurative contexts.
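A minimal sketch of the comparator-based pre-filtering step is shown below; the regular expression is our own simplification, and the actual extraction pipeline used for FigurativeQA may apply additional filtering.

```python
import re

COMPARATOR = re.compile(r"\b(like|as|than)\b", re.IGNORECASE)

def candidate_sentences(sentences):
    """Keep sentences containing a comparator pattern ('like', 'as', 'than')."""
    return [s for s in sentences if COMPARATOR.search(s)]

print(candidate_sentences([
    "The tortilla was as hot as hades.",
    "Service was friendly and quick.",
    "These books are like potato chips.",
]))
```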
The figurative contexts in FigurativeQA tend to contain more _similes_, since comparator patterns (_"like"_, _"as"_, or _"than"_) were used to extract the text. However, we observe that many of these examples also contain other kinds of figurative constructs such as metaphor, idiom, hyperbole, and sarcasm. Table 2 shows the number of occurrences of various kinds of figurative constructs that we observe in a random set of 100 figurative contexts each from Amazon and Yelp in FigurativeQA. Oraby et al. (2017) note that one of the most prominent characteristics of restaurant reviews in the Yelp corpus is the prevalence of hyperbole, which we also observe in this sample. A context may contain multiple figurative elements, coming from different text fragments within the context. In some cases, the same text fragment may also denote multiple kinds of figurative constructs. In Figure 3, we show some examples of various kinds of figurative constructs occurring in FigurativeQA.
For each context in FigurativeQA, we construct a yes/no question. For the figurative contexts, we make sure to pose a question such that answering it requires an understanding of the figurative text present in the context. For the non-figurative contexts, we construct questions similar to the ones for the figurative contexts. Additionally, for the fig
\begin{table}
\begin{tabular}{c|c c|c c} \hline \hline & \multicolumn{2}{c|}{**Amazon**} & \multicolumn{2}{c}{**Yelp**} \\ & **Fig.** & **Non-fig.** & **Fig.** & **Non-fig.** \\ \hline
**Yes** & 77 & 76 & 174 & 175 \\
**No** & 73 & 74 & 176 & 175 \\ \hline
**Total** & 150 & 150 & 350 & 350 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Distribution of yes/no questions from Amazon product reviews and Yelp restaurant reviews for figurative and non-figurative contexts
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Split** & **Context** & **fig.** & **construct** \\ \hline
**Amazon** & _The books are **like potato chips** - you **can’t eat just one**_. & _simile, idiom_ \\ & _So when my laptop battery puffed up **like a balloon**_, _I dreaded paying_ & _simile, hyperbole_ \\ & _the cost of replacement_. & _Really_, this novel feels more **like a footnote** to the series whereas_ & _simile, idiom_ \\ & _The Guntslinger was a novel that **stood extremely well on its own**_. & _simile, sarcasm_ \\ \hline
**Yelp** & _i had the chicken fajitas_, which came with a giant flour tortilla that was **as hot as hades**_. & _simile, hyperbole_ \\ & _the cheese was scarce as was the meat_, and the taste was nothing to_ & _idiom_ \\ & _**write home about**_. & _i ate as much as i could because truly, underneath the **salt mine** on_ & _metaphor, hyperbole_ \\ & _my plate_, _was some damn fine cored beef hash!_ & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Distribution of occurrences of various kinds of figurative constructs in a random sample of 100 contexts from Amazon and Yelp each. It is common for a context to contain multiple figurative expressions, so these do not add up to 100% (refer to Figure 3 for examples).
Figure 3: Examples of figurative constructs observed in the Amazon and Yelp datasets. The figurative text fragments within the contexts are shown in bold and italics. In case of multiple labels occurring in the same context, the first bold fragment corresponds to the first label, and so on. In some cases, the same text fragment may have multiple labels (as in row 2)
urative contexts extracted from Amazon and Yelp, we manually create non-figurative counterparts that preserve the meaning and overall content.
### Inter-annotator Agreement
Annotations for all the data in FigurativeQA (figurativeness scores for the examples from Yelp, construction of question-answer pairs, manual conversion of figurative contexts to non-figurative) were done by an in-house-trained graduate-student annotator. To assess the quality of figurative and non-figurative contexts for the Yelp contexts, we perform a second round of annotations with another trained annotator on a random sample of 50 contexts. This resulted in an inter-annotator agreement of 0.72 on figurativeness, calculated by Cohen's \(\kappa\).
Similarly, to assess the overall quality of FigurativeQA, we randomly sample 50 figurative contexts for double annotation, which gives an additional set of annotations for the answers to the questions. The inter-annotator agreement on the answers was found to be 0.96, calculated by Cohen's \(\kappa\). To validate the effectiveness of the questions for figurativeness comprehension, we also asked the annotators to indicate if answering the question required them to understand figurative text fragments present in the context. In the random sample of 50, in 49 cases the annotators were in agreement that this was indeed the case.
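For reference, the agreement statistic can be computed with scikit-learn as in the toy sketch below; the label lists here are invented and stand in for the 50 doubly-annotated answers.

```python
from sklearn.metrics import cohen_kappa_score

# Toy stand-ins for the two annotators' yes/no answers on the same items.
annotator_1 = ["yes", "no", "yes", "yes", "no", "no", "yes", "no"]
annotator_2 = ["yes", "no", "yes", "no",  "no", "no", "yes", "no"]

print(cohen_kappa_score(annotator_1, annotator_2))
```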
## 4 Do QA models find answering questions from figurative contexts harder?
Using FigurativeQA as a test set, we show that current models struggle to do well on figurative text compared to literal ones. We use a RoBERTa-based Liu et al. (2019) QA model fine-tuned on BoolQ to show this. The BoolQ dataset Clark et al. (2019) consists of yes/no questions from the Natural Questions dataset. We use the training split of BoolQ containing 9,427 examples to fine-tune RoBERTa-base and report its performance on FigurativeQA in Table 3. We find that the RoBERTa QA model performs poorly on the figurative contexts compared to the non-figurative contexts, with a drop in performance of \(\sim\)8.5% points for Amazon, and \(\sim\)23% points for Yelp. We observe that switching the figurative contexts for their manually created non-figurative counterparts shoots these numbers up in both cases, by \(\sim\)10% points and \(\sim\)23% points, for Amazon and Yelp, respectively. More powerful models like ChatGPT (in a few-shot setting) perform significantly better on figurative contexts, but still don't match the results on non-figurative versions of the contexts. This indicates that the conversion of figurative language to non-figurative language may help improve QA performance.
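The evaluation setup can be sketched as follows, assuming the BoolQ-fine-tuned RoBERTa is a two-way (no/yes) sequence classifier over (question, context) pairs; the checkpoint path and the label-index convention below are placeholders rather than the exact artifacts used in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint: a RoBERTa model fine-tuned on BoolQ.
NAME = "path/to/roberta-base-finetuned-boolq"
tokenizer = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForSequenceClassification.from_pretrained(NAME)

question = "Did the cake taste good?"
context = "The cake was described as the most sinfully decadent ever."

inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
# Assumes index 1 corresponds to "yes"; check the model's label mapping.
print("yes" if logits.argmax(dim=-1).item() == 1 else "no")
```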
## 5 Can prompting or finetuning LLMs help simplify figurative contexts?
We posit that answering questions from figurative contexts is harder, and that simplifying the figurative context into its literal/non-figurative version improves QA performance. However, since manually converting figurative text to non-figurative is expensive and time-consuming, we propose to do this automatically by prompting GPT-3 Brown et al. (2020) in two ways. First, we use GPT-3 (da-vinci-003) in a few-shot setting to generate non-figurative/literal versions of all the figurative contexts in FigurativeQA.2 We also use a similar approach to prompt ChatGPT. Please refer to Appendix A for model details and the prompts used. Second, we use a version of GPT-3 (da-vinci-002) fine-tuned specifically for the task of converting figurative to literal text.
Footnote 2: The experiments for this method to convert figurative text to non-figurative were performed by running API calls to the OpenAI da-vinci model. For each context, this took less than 1 second, for a total of less than 18 min and cost less than 8 USD for the entire dataset.
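A simplified sketch of the rewriting step is given below, using the OpenAI completions API that was available for text-davinci-003 at the time; the few-shot prompt shown is a paraphrase for illustration, not the actual prompt reproduced in Appendix A, and an API key must be configured before calling.

```python
import openai  # assumes openai.api_key has been set

FEW_SHOT_PROMPT = (
    "Rewrite the review so that it keeps the same meaning but contains no "
    "figurative language.\n\n"
    "Review: The books are like potato chips - you can't eat just one.\n"
    "Literal: The books are so enjoyable that it is hard to read only one.\n\n"
    "Review: {review}\n"
    "Literal:"
)

def to_literal(review: str) -> str:
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=FEW_SHOT_PROMPT.format(review=review),
        max_tokens=100,
        temperature=0.0,
    )
    return response["choices"][0]["text"].strip()
```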
As an intrinsic evaluation of the effectiveness of our prompting method, we manually evaluate the correctness of the non-figurative/literal contexts generated by prompting GPT-3 on a random sam
\begin{table}
\begin{tabular}{l l l} \hline \hline & **Amazon** & **Yelp** \\ \hline
**RoBERTa-BoolQ** & & \\ Fig (Original) & 83.4 \(\pm\) 0.7 & 66.8 \(\pm\) 1.4 \\ Fig (manual non-fig) & **93.5 \(\pm\) 1.1*** & **90.0 \(\pm\) 1.4*** \\ Non-fig (Original) & 92.0 \(\pm\) 1.4 & 89.8 \(\pm\) 1.7 \\ \hline
**ChatGPT(few-shot)** & & \\ Fig (Original) & 92.6\(\pm\)1.1 & 80.6\(\pm\)0.7 \\ Fig (manual non-fig) & **93.8 \(\pm\)0.3*** & **83.3\(\pm\)1.6*** \\ Non-fig (Original) & \(93.5\pm 0.3\)* & \(88.7\pm 1.8\)* \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracy of RoBERTa-base fine-tuned on BoolQ, and ChatGPT (few-shot), on the figurative split, manually created non-figurative version of the figurative split, and non-figurative split of FigurativeQA. (We reran experiments 1000 times with bootstrap resampling. The numbers reported are the mean and std-dev. \({}^{*}\) denotes statistically significant results, with \(p<0.05\) calculated using the Wilcoxon signed-rank test. The numbers in **bold** are the best results.)
ple of 100 instances each, from Amazon and Yelp in FigurativeQA. We label each generated literal version as either **"correct"**, where none of the figurative expressions are present but the meaning is preserved, or **"incorrect"**, where the generated output is the same as or similar to the original context, or the meaning has changed. Note that this is a rather strict evaluation of correctness, as in some cases some of the figurative text fragments present in the context are converted to literal, while the context may still be left with some amount of figurativeness (possibly arising from multiple figurative text fragments present in the context). Table 4 shows the results of the manual evaluation of the GPT-3 and ChatGPT outputs. We observe that these models are quite good at converting figurative language in FigurativeQA to literal, with nearly 89% and 81% of the outputs from GPT-3 judged to be correct for Amazon and Yelp, respectively, and 92% and 88% for ChatGPT. In Figure 4, we show examples of non-figurative text generated by GPT-3 and ChatGPT.
We next explore using a fine-tuned version of GPT-3 to generate literal versions of figurative texts. Chakrabarty et al. (2022) propose the FLUTE dataset for Natural Language Inference (NLI), which has 9,000 figurative NLI instances, and explanations for the NLI labels. We extract the premise-hypothesis pairs with the label _"entailment"_ from the training split of FLUTE to fine-tune GPT-3 (3,182 examples in total). We used the _davinci_ model from OpenAI as the base model and fine-tuned for 4 epochs, with all default settings. We didn't perform any hyper-parameter tuning.3 Table 4 (row 3) shows the results from manual evaluation of the fine-tuned GPT-3 outputs.
Footnote 3: To fine-tune GPT-3 on the FLUTE dataset, it cost about 15 USD and took 62 minutes.
## 6 Can automatically generated non-figurative text improve QA performance?
We observed that ChatGPT has much stronger performance on FigurativeQA than the baseline model of RoBERTa fine-tuned on BoolQ (section 4), and that both of these models do better on non-figurative texts. We showed that both GPT-3 and ChatGPT can be effectively used to simplify figurative text into its non-figurative counterpart (section 5). We next experiment with simplifying contexts to boost QA performance. As competitive baselines, we also report zero-shot and few-shot QA performance4 of GPT-3 and ChatGPT in Table 5. Besides the RoBERTa-finetuned-on-BoolQ baseline (previously described in section 4), we also fine-tune GPT-3 on the training split of BoolQ. For fine-tuning GPT-3, we used the _davinci_ model from OpenAI as the base model and fine-tuned for 4 epochs, with all default settings. We did not perform any additional hyper-parameter tuning.
Footnote 4: Please refer to Appendix B for details about prompting GPT-3 and ChatGPT as a QA system.
In our experiments, we do not require knowing which contexts are figurative and which are non-figurative. We simply input both figurative and non-figurative contexts to the LLM to simplify any figurative language that is present, regardless of whether the context actually contains figurative language. In Table 5, we show that this method exhibits significant gains over the baseline RoBERTa model. We also report the performance of using the outputs of GPT-3-finetuned-FLUTE as input to the RoBERTa baseline.
## 7 Can we use chain-of-thought prompting for improving QA performance on FigurativeQA?
Wei et al. (2022) have shown chain-of-thought prompting in Large Language Models (LLMs) to be effective for solving tasks requiring complex reasoning. Since understanding figurative language often requires implicit reasoning, we investigate the effect of applying chain-of-thought prompting for FigurativeQA using ChatGPT. (Our few-shot prompt for the chain-of-thought method is described in Appendix C.) This approach gives us the highest overall accuracy on FigurativeQA (Table 5).
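The structure of such a prompt can be illustrated as below; the example and wording are our own paraphrase of the idea (simplify first, then answer) rather than the exact prompt from Appendix C.

```python
# One in-context demonstration: simplify the figurative context, then answer.
COT_DEMO = (
    "Context: i had the chicken fajitas, which came with a giant flour "
    "tortilla that was as hot as hades.\n"
    "Question: Was the tortilla hot?\n"
    "Simplified context: I had the chicken fajitas, which came with a giant "
    "flour tortilla that was extremely hot.\n"
    "Answer: yes\n"
)

def build_cot_prompt(context: str, question: str) -> str:
    return (COT_DEMO + "\n"
            f"Context: {context}\n"
            f"Question: {question}\n"
            "Simplified context:")

print(build_cot_prompt(
    "The cake was the most sinfully decadent ever.",
    "Did the cake taste good?"))
```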
\begin{table}
\begin{tabular}{l c c} \hline \hline & **Amazon** & **Yelp** \\ \hline GPT-3 & 89\% & 81\% \\ \hline ChatGPT & **92\%** & **88\%** \\ \hline Finetuned GPT-3 & 80\% & 77\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Evaluation of non-figurative outputs from GPT-3 and ChatGPT, showing the percentage of generated outputs that do not contain figurative expressions, but preserve the original meaning of the figurative context.
## 8 Can we prompt LLMs to generate training data for FigurativeQA?
Due to the lack of training data for question answering with figurative contexts, our supervised models are all finetuned on BoolQ. We hypothesize that adding synthetically generated QA pairs for this task will improve performance of the fine-tuned models. We prompt ChatGPT to generate synthetic training data (we tried a variety of prompts - refer to Appendix D for the prompt used). We use contexts from both Amazon and Yelp domains to generate question answer pairs from ChatGPT. For the Amazon contexts, we randomly sample reviews from 4 categories (Books, Electronics, Jewelry and Digital Music) from Amazon Product reviews from McAuley and Leskovec (2013). From these reviews, we extract sentences containing comparator patterns ("like", "as", "than") and use them as contexts, as they are more likely to contain figurative expressions. For the Yelp contexts, we extract sentences from Oraby et al. (2017)'s NLG dataset also containing the same comparator patterns, but not already included in FigurativeQA. (Refer to Appendix E for statistics of the data generated for training.)
We find that further finetuning RoBERTa-finetuned-on-BoolQ on synthetic QA data generated from ChatGPT yields the best performance on the figurative split of both Amazon and Yelp (Table 5).
## 9 How much does the prompting method help with handling figurativeness?
Our experiments show that the process of converting figurative text into literal by prompting GPT-3 may effectively be used for improving question answering performance. We also study the effect of our method on the degree of figurativeness present in the text. The Amazon reviews data from Niculae and Danescu-Niculescu-Mizil (2014) comes labeled with figurativeness scores of 1-4, with 3 sets of annotations. Using the average figurativeness scores, we bin the Amazon reviews examples in FigurativeQA into 4 splits, and compute the improvement in QA performance when using our method over the baseline. As evident from Figure 5, the more figurative examples show a higher gain in QA performance.
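A small sketch of this analysis step is shown below; the binning by integer score and the use of simple per-bin accuracy differences reflect our reading of the description, and the example inputs are made up.

```python
import numpy as np

def accuracy_gain_by_bin(fig_scores, baseline_correct, method_correct):
    """Accuracy improvement (method - baseline) per figurativeness bin."""
    fig_scores = np.asarray(fig_scores, dtype=float)
    baseline_correct = np.asarray(baseline_correct, dtype=float)
    method_correct = np.asarray(method_correct, dtype=float)
    gains = {}
    for lo in (1, 2, 3, 4):
        mask = (fig_scores >= lo) & (fig_scores < lo + 1)
        if mask.any():
            gains[lo] = (method_correct[mask].mean()
                         - baseline_correct[mask].mean())
    return gains

# Made-up example: four questions with their figurativeness scores and
# per-question correctness flags for the baseline and the proposed method.
print(accuracy_gain_by_bin([1.2, 3.7, 3.9, 2.5], [1, 0, 0, 1], [1, 1, 1, 1]))
```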
## 10 Conclusion and Future Work
We demonstrate that current QA models have reduced accuracy when answering questions from
Figure 4: Examples of non-figurative contexts generated from GPT-3, for Amazon and Yelp. The figurative text fragments within the contexts are shown in **bold** and _italics_.
figurative contexts compared to literal ones. This indicates the need for QA models that are robust to figurative language. By manually creating non-figurative versions of these contexts, we observe a significant improvement in performance. To automate this approach, we propose a method of prompting GPT-3 to produce simplified, non-figurative contexts, which yields significant performance gains over the baseline. Chain-of-thought prompting using ChatGPT has the best overall performance on FigurativeQA. We hope that our method and dataset will spur more research into question answering with figurative language.
## 11 Acknowledgments
This research was supported in part by a research gift from eBay Inc.
## Limitations
Our dataset covers the specific domains of Amazon and Yelp reviews and is English-only, so results and conclusions may not generalize to other domains or languages. Prompting GPT-3 may sometimes produce text that is not faithful to the original figurative text.
Figure 5: Figurativeness vs. accuracy for the instances from Amazon reviews
\begin{table}
\begin{tabular}{|l|c c|c c|c c|} \hline & \multicolumn{2}{c|}{**Fig.**} & \multicolumn{2}{c|}{**Non-fig.**} & \multicolumn{2}{c|}{**Overall**} \\ & **Amazon** & **Yelp** & **Amazon** & **Yelp** & **Amazon** & **Yelp** \\ \hline \hline \multicolumn{2}{|l|}{**Zero-Shot**} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline GPT-3 (zero) & 71.9\(\pm\)1.2 & 60.2\(\pm\)3.2 & 88.7\(\pm\)0.9 & 86.0\(\pm\)2.2 & 80.3\(\pm\)1.1 & 73.1\(\pm\)2.1 \\ \hline ChatGPT (zero) & 91.0\(\pm\)0.7 & 87.4\(\pm\)2.6 & 93.0\(\pm\)0.3 & 88.6\(\pm\)2.4 & 92.0\(\pm\)0.5 & 88.0\(\pm\)2.3 \\ \hline \hline \multicolumn{2}{|l|}{**Few-Shot**} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline GPT-3 (few) & 85.7\(\pm\)1.8 & 64.1\(\pm\)3.7 & 90.2\(\pm\)0.8 & 88.3\(\pm\)1.9 & 88.0\(\pm\)1.1 & 76.2\(\pm\)2.7 \\ \hline \multicolumn{2}{|l|}{**CatGPT (few)**} & 92.6\(\pm\)1.1 & 80.6\(\pm\)0.7 & 93.5\(\pm\)0.3 & \(88.7\pm 1.8\) & 93.0\(\pm\)0.7 & 84.7\(\pm\)1.1 \\ \hline \hline \multicolumn{2}{|l|}{**Supervised**} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline RoBERTa & 83.2\(\pm\)1.1 & 66.8\(\pm\)2.6 & 92.2\(\pm\)1.4 & 89.6\(\pm\)1.7 & 87.7\(\pm\)0.9 & 78.2\(\pm\)1.6 \\ \hline GPT-3-BoolQ & 86.3\(\pm\)2.1 & 69.2\(\pm\)3.8 & 88.7\(\pm\)0.9 & 86.5\(\pm\)1.2 & 87.5\(\pm\)1.4 & 77.9\(\pm\)2.2 \\ \hline RoBERTa & **95.3\(\pm\)0.5** & **92.3\(\pm\)0.7** & 95.8\(\pm\)1.2 & 90.8\(\pm\)1.6 & 95.5\(\pm\)0.7 & **91.5\(\pm\)0.9** \\ +synthetic & & & & & & \\ \hline \hline \multicolumn{2}{|l|}{**Simplified Contexts**} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline GPT-3+ & \(86.5\pm 1.1\) & \(73.4\pm 1.7\) & \(92.4\pm 1.1\) & \(89.4\pm 1.7\) & \(89.5\pm 3.2\) & \(81.5\pm 1.2\) \\ RoBERTa & & & & & & \\ \hline GPT-3-FLUTE & 88\(\pm\)0.7 & 69.4\(\pm\)2.1 & 92.0\(\pm\)0.4 & 89.5\(\pm\)1.2 & \(90.0\pm 1.4^{*}\) & \(79.4\pm 2.3^{*}\) \\ +RoBERTa & & & & & & \\ \hline ChatGPT+ & 88.7\(\pm\)1.6 & 75.3\(\pm\)3.5 & 92.2\(\pm\)1.1 & 89.5\(\pm\)2.1 & 90.5\(\pm\)1.2 & 82.4\(\pm\)3.2 \\ RoBERTa & & & & & & \\ \hline ChatGPT+ & 89.3\(\pm\)0.8 & 91.0\(\pm\)0.3 & 95.7\(\pm\)0.7 & 91.2\(\pm\)0.2 & 92.5\(\pm\)0.4 & 91.1\(\pm\)0.3 \\ ChatGPT (few) & & & & & & \\ \hline ChatGPT+CoT & 94.7\(\pm\)0.3 & 91.6\(\pm\)1.2 & **96.4\(\pm\)1.1** & **91.4\(\pm\)0.7** & **95.6\(\pm\)0.9** & **91.5\(\pm\)1.1** \\ \hline \end{tabular}
\end{table}
Table 5: QA accuracy on FigurativeQA. (We reran experiments 1000 times with bootstrap resampling. The numbers reported are the mean and std-dev. \({}^{*}\) denotes results that are not statistically significant compared to the best results, with \(p<0.05\) calculated using the Wilcoxon signed-rank test. The numbers in **bold** are the best results.) GPT-3 finetuned models use da-vinci-002 as the base model. |
2310.00131 | Event-Triggered Control of Neuron Growth with Actuation at Soma | We introduce a dynamic event-triggering mechanism for regulating the axonal
growth of a neuron. We apply boundary actuation at the soma (the part of a
neuron that contains the nucleus) and regulate the dynamics of tubulin
concentration and axon length. The control law is formulated by applying a
Zero-Order Hold (ZOH) to a continuous-time controller which guides the axon to
reach the desired length. The proposed dynamic event-triggering mechanism
determines the specific time instants at which control inputs are sampled from
the continuous-time control law. We establish the existence of a minimum
dwell-time between two triggering times that ensures avoidance of Zeno
behavior. Through employing the Lyapunov analysis with PDE backstepping, we
prove the local stability of the closed-loop system in $L_2$-norm, initially
for the target system, and subsequently for the original system. The
effectiveness of the proposed method is showcased through numerical
simulations. | Cenk Demir, Shumon Koga, Miroslav Krstic | 2023-09-29T20:47:52Z | http://arxiv.org/abs/2310.00131v2 | # Event-Triggered Control of Neuron Growth
###### Abstract
We introduce a dynamic event-triggering mechanism for regulating the axonal growth of a neuron. We apply boundary actuation at the soma (the part of a neuron that contains the nucleus) and regulate the dynamics of tubulin concentration and axon length. The control law is formulated by applying a Zero-Order Hold (ZOH) to a continuous-time controller which guides the axon to reach the desired length. The proposed dynamic event-triggering mechanism determines the specific time instants at which control inputs are sampled from the continuous-time control law. We establish the existence of a minimum dwell-time between two triggering times that ensures avoidance of Zeno behavior. Through employing the Lyapunov analysis with PDE backstepping, we prove the local stability of the closed-loop system in \(\mathcal{H}_{1}\)-norm, initially for the target system, and subsequently for the original system. The effectiveness of the proposed method is showcased through numerical simulations.
## I Introduction
Recent advancements in neuroscience draw from various disciplines such as mathematics, physics, and engineering [16, 17, 19]. These fields are crucial for understanding the structure and functioning of neurons in the nervous system and addressing neurological issues. One major challenge in this context is the growth of axons, which are similar to wires and are constructed through the assembly of tubulin proteins. Axons function as connectors between neurons for transmitting electrical signals. Some neurological diseases, such as Alzheimer's disease [27] and spinal cord injuries [26], can damage axons by impeding the assembly process of tubulin proteins, leading to halted growth or degeneration. Researchers are developing new therapies to treat these diseases. One promising therapy is called ChABC which involves injecting a bacterial enzyme that digests the axon growth inhibitors [2]. Following this therapy, axon growth can be sustained [18]. However, it's important to note that ChABC has a limitation: ChABC requires repeated injections of the bacterial enzyme since it rapidly loses its activity at \(37^{\circ}\)C [25]. To enhance the effectiveness of this therapy, the amount of enzymes required to achieve the desired axon length and the intervals for these repeated injections must be identified.
Studying the behavior of tubulin proteins can help achieve the desired axon length. For this reason, numerous mathematical models have employed Ordinary Differential Equations (ODEs) and Partial Differential Equations (PDEs) to clarify tubulin behavior [28, 29, 36]. The authors of [7] model the axon growth process as a coupled PDE-ODE with a moving boundary, akin to the Stefan problem, effectively describing the associated physical phenomena. In this model, the PDE represents the evolution of the tubulin concentration along the axon, and the ODEs describe both the evolution of the axon length and the tubulin concentration at the growth cone. Given that this model captures the critical information about axon growth, it is worthwhile to consider designing a controller to regulate tubulin concentration and axon length. Over the past two decades, researchers have developed a technique, known as boundary control of PDE systems by PDE backstepping, to regulate PDEs by defining control laws at their boundaries [24]. Following the development of this technique, boundary control was expanded to the class of coupled PDE-ODE systems [23, 34, 35]. While the majority of these contributions assume a constant domain size over time, some researchers focused on a domain that moves over time, specifically addressing the Stefan problem, as discussed in [8, 15, 30]. Besides these studies, backstepping-based control techniques were constructed for the Stefan problem in [22] with global stability results. Following this progress, several works have proposed local stability results for nonlinear hyperbolic PDEs, as seen in [3, 37]. We achieved local stability results for the first time for nonlinear parabolic PDEs with a moving boundary for the axon growth problem in our previous works [4, 6], and with input delay in [5].
While the control designs mentioned above operate in continuous time, certain technologies require control actions to be performed only when necessary due to constraints on energy, communication, and computation [13]. To deal with this problem, an event-triggered control strategy is proposed for PID controllers in [1], and for state feedback and output feedback controllers for linear and nonlinear time-invariant systems in [14] and [20]. The authors of [31] ensured asymptotic stability for a closed-loop system with state feedback control laws by employing an event-triggering mechanism, characterizing it as a hybrid system. This characterization relaxed the constraints associated with the event-triggering mechanism, and this relaxation is detailed in [12] as a dynamic triggering approach. In addition to its application in ODE systems, the authors of [9] successfully applied an event-triggered mechanism to boundary control of hyperbolic PDE systems. This innovative approach paved the way for the utilization of event-triggered boundary control in reaction-diffusion PDEs, as demonstrated in [10]. For the Stefan problem, both static and dynamic event-triggered boundary
control laws were developed by the authors of [32] and [33]. Furthermore, an event-triggering mechanism was employed to transition between safety utilizing CBFs and stability for Stefan problem with actuator dynamics as discussed in [21]. In this paper, we introduce a novel dynamic event-triggering mechanism for the axon growth problem which consists of a coupled reaction-diffusion-advection PDE and nonlinear ODEs with a moving boundary. With this dynamic event-triggering mechanism, we aim to address the key question around appropriate time intervals for administering therapy.
The contributions of this paper include (i) designing a control law for neuron growth with Dirichlet boundary actuation, (ii) developing a dynamic event-triggering mechanism for coupled reaction-diffusion-advection PDEs and nonlinear ODEs with a moving boundary, (iii) analyzing Zeno behavior avoidance, (iv) demonstrating local stability for the closed-loop system. Indeed, this work is pioneering in event-triggering boundary control for axon growth and marks the first local stability analysis using event-triggering mechanisms for PDE systems.
## II Modeling of Axon Growth
In this section, we present the mathematical model governing axon growth. This model includes a coupled system of PDEs and ODEs, featuring a dynamic boundary that describes tubulin behavior along the axon and axon growth. We also introduce the steady-state solution for a target axon length and a reference error system.
### _Axon growth model by a moving boundary PDE_
The evolution of tubulin along the axon serves as the primary catalyst for the axon growth process, and to understand this process, we rely on two assumptions to create a mathematical model which are described in our previous work [4]. Thus, the axonal growth can be modeled as
\[c_{t}(x,t)= Dc_{xx}(x,t)-ac_{x}(x,t)-gc(x,t), \tag{1}\] \[c(0,t)= -q_{\text{s}}(t),\] (2) \[c(l(t),t)= c_{\text{c}}(t),\] (3) \[l_{\text{c}}\dot{c}_{\text{c}}(t)= (a-gl_{\text{c}})c_{\text{c}}(t)-Dc_{x}(l(t),t)\] \[-(r_{\text{g}}c_{\text{c}}(t)+\tilde{r}_{\text{g}}l_{\text{c}})( c_{\text{c}}(t)-c_{\infty}),\] (4) \[\dot{l}(t)= r_{\text{g}}(c_{\text{c}}(t)-c_{\infty}), \tag{5}\]
In this model, the PDE state \(c(x,t)\) represents tubulin concentration within the axon. ODE states include \(c_{\text{c}}(t)\) for tubulin concentration in the growth cone, \(l(t)\) for axon length, and \(q_{\text{s}}(t)\) for tubulin concentration in the soma. Tubulin proteins move along the axon at a rate \(a\) and degrade at a constant rate \(g\). The diffusivity constant in (1) is represented by \(D\). Axonal growth stops when the tubulin concentration in the cone reaches equilibrium, denoted as \(c_{\infty}\). The other parameters in this model are explained with details in our previous work [4] and [6].
### _Steady-state solution_
For a desired axon length, \(l_{s}\), we first derive a steady-state solution of the concentration. The steady-state solution of (1)-(5) is obtained as follows
\[c_{\text{eq}}(x)=c_{\infty}\left(K_{+}e^{\lambda_{+}(x-l_{\text{s}})}+K_{-}e^ {\lambda_{-}(x-l_{\text{s}})}\right), \tag{6}\]
where
\[\lambda_{+}= \frac{a}{2D}+\frac{\sqrt{a^{2}+4Dg}}{2D},\ \lambda_{-}=\frac{a}{2D}-\frac{ \sqrt{a^{2}+4Dg}}{2D}, \tag{7}\] \[K_{+}= \frac{1}{2}+\frac{a-2gl_{\text{c}}}{2\sqrt{a^{2}+4Dg}},\ \ K_{-}=\frac{1}{2}-\frac{a-2gl_{\text{c}}}{2\sqrt{a^{2}+4Dg}}. \tag{8}\]
We obtain the steady-state input for the concentration in the soma as
\[q_{\text{s}}^{*}=-c_{\infty}\left(K_{+}e^{-\lambda_{+}l_{\text{s}}}+K_{-}e^{- \lambda_{-}l_{\text{s}}}\right). \tag{9}\]
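For concreteness, the steady-state profile (6)-(8) and the soma input (9) can be evaluated numerically. The short sketch below uses the parameter values of Table I and is purely illustrative.

```python
# Sketch: evaluate the steady-state tubulin profile (6)-(8) and the
# steady-state soma input (9) for the parameter values of Table I.
import numpy as np

D, a, g = 10e-12, 1e-8, 5e-7           # m^2/s, m/s, 1/s
l_c, l_s, c_inf = 4e-6, 12e-6, 0.0119  # m, m, mol/m^3

root = np.sqrt(a**2 + 4 * D * g)
lam_p = (a + root) / (2 * D)           # lambda_+
lam_m = (a - root) / (2 * D)           # lambda_-
K_p = 0.5 + (a - 2 * g * l_c) / (2 * root)
K_m = 0.5 - (a - 2 * g * l_c) / (2 * root)

def c_eq(x):
    """Steady-state concentration profile, Eq. (6)."""
    return c_inf * (K_p * np.exp(lam_p * (x - l_s)) + K_m * np.exp(lam_m * (x - l_s)))

# Steady-state input at the soma, Eq. (9).
q_s_star = -c_inf * (K_p * np.exp(-lam_p * l_s) + K_m * np.exp(-lam_m * l_s))

x_grid = np.linspace(0.0, l_s, 200)
profile = c_eq(x_grid)
print("q_s* =", q_s_star)
print("c_eq(l_s) =", profile[-1])      # equals c_inf, since K_+ + K_- = 1
```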
### _Reference error system_
Let us consider the following reference error states
\[u(x,t)=c(x,t)-c_{\text{eq}}(x), \tag{10}\] \[z_{1}(t)=c_{\text{c}}(t)-c_{\infty},\quad z_{2}(t)=l(t)-l_{\text {s}},\] (11) \[U(t)=-(q_{\text{s}}(t)-q_{\text{s}}^{*}). \tag{12}\]
where \(U(t)\) is the reference error input. Utilizing (10)-(12), (6) and (9) in the governing equations (1)-(5), we derive the reference error system as
\[u_{t}(x,t)= Du_{xx}(x,t)-au_{x}(x,t)-gu(x,t), \tag{13}\] \[u(0,t)= U(t),\] (14) \[u(l(t),t)= c_{\text{c}}(t)-c_{\text{eq}}(l(t)),\] (15) \[\dot{z}_{1}(t)= \tilde{a}_{1}z_{1}(t)-\beta u_{x}(l(t),t)-\kappa z_{1}(t)^{2}+ \beta f_{1}(z_{2}(t))\] \[-\beta\tilde{a}_{2}z_{2}(t),\] (16) \[\dot{z}_{2}(t)= r_{\text{g}}z_{1}(t), \tag{17}\]
where the constants in (13)-(17) are
\[\tilde{a}_{1} =\frac{a-r_{\text{g}}c_{\infty}}{l_{\text{c}}}-g-\tilde{r}_{ \text{g}},\quad\beta=\frac{D}{l_{\text{c}}}, \tag{18}\] \[\tilde{a}_{2} =c_{\infty}\left(\lambda_{+}^{2}K_{+}+\lambda_{-}^{2}K_{-}\right), \quad\kappa=\frac{r_{\text{g}}}{l_{\text{c}}},\] (19) \[f_{1}(z_{2}(t))= -c_{\infty}\left(K_{+}\lambda_{+}e^{\lambda_{+}z_{2}(t)}+K_{-} \lambda_{-}e^{\lambda_{-}z_{2}(t)}\right)\] \[+\tilde{a}_{2}z_{2}(t)+c_{\infty}\frac{a-gl_{\text{c}}}{D}. \tag{20}\]
The ODEs can be written using the state vector \(X(t)\in\mathbb{R}^{2}\) as \(X(t)=[z_{1}(t)\quad z_{2}(t)]^{\top}\). The system (13)-(17) then simplifies to
\[u_{t}(x,t)= Du_{xx}(x,t)-au_{x}(x,t)-gu(x,t), \tag{21}\] \[u(0,t)= U(t),\] (22) \[u(l(t),t)= h(X(t)),\] (23) \[\dot{X}(t)= AX(t)+f(X(t))+Bu_{x}(l(t),t), \tag{24}\]
Fig. 1: Schematic of neuron and state variables
where
\[A= \left[\begin{array}{cc}\tilde{a}_{1}&-\beta\tilde{a}_{2}\\ r_{\rm g}&0\end{array}\right],\quad B=\left[\begin{array}{c}-\beta\\ 0\end{array}\right], \tag{25}\] \[f(X(t))= -\kappa z_{1}(t)^{2}+\beta f_{1}(z_{2}(t)),\] (26) \[h(X(t))= e_{1}X(t)+\tilde{h}(e_{2}X(t)),\] (27) \[\tilde{h}(z_{2}(t))= c_{\infty}\left(1-K_{+}e^{\lambda_{+}z_{2}(t)}-K_{-}e^{ \lambda_{-}z_{2}(t)}\right). \tag{28}\]
## III Continuous-time and Sample-based Control Design
First, we linearize nonlinear ODEs in (24) around zero states as
\[u_{t}(x,t)= Du_{xx}(x,t)-au_{x}(x,t)-gu(x,t), \tag{29}\] \[u(0,t)= U(t),\] (30) \[u(l(t),t)= H^{\top}X(t),\] (31) \[\dot{X}(t)= A_{1}X(t)+Bu_{x}(l(t),t), \tag{32}\]
where the matrix \(A_{1}\) and the vector \(H\in\mathbb{R}^{2}\) are defined as
\[A_{1}=\left[\begin{array}{cc}\tilde{a}_{1}&\tilde{a}_{3}\\ r_{\rm g}&0\end{array}\right],\ H=\left[\begin{array}{cc}1&-\frac{(a-gl_{c})c _{\infty}}{D}\end{array}\right]^{\top}, \tag{33}\]
where \(\tilde{a}_{3}=\frac{a^{2}+Dg-agl_{c}}{D^{2}}\).
In this paper, our continuous-time control design relies on a backstepping transformation, as outlined in [4]. This transformation maps the linear reference error system \((u,X)\) to a corresponding nonlinear target system \((w,X)\) by utilizing the following backstepping transformation.
\[w(x,t)= u(x,t)-\int_{x}^{l(t)}k(x,y)u(y,t)dy\] \[-\phi(x-l(t))^{\top}X(t), \tag{34}\] \[u(x,t)= w(x,t)+\int_{x}^{l(t)}q(x,y)w(y,t)dy\] \[+\varphi(x-l(t))^{\top}X(t), \tag{35}\]
where \(k(x,y)\in\mathbb{R}\) and \(\phi(x-l(t))\in\mathbb{R}^{2}\) are the gain kernel functions, which are explicitly described in [4]. We take the desired target system as
\[w_{t}(x,t)= Dw_{xx}(x,t)-aw_{x}(x,t)-gw(x,t)\] \[-\dot{l}(t)F(x,X(t)), \tag{36}\] \[w(0,t)= 0,\] (37) \[w(l(t),t)= 0,\] (38) \[\dot{X}(t)= (A_{1}+BK^{\top})X(t)+Bw_{x}(l(t),t), \tag{39}\]
and \(K\in\mathbb{R}^{2}\) is chosen to ensure the stability of \(A+BK\) such that it is Hurwitz, satisfying
\[k_{1}>\frac{\tilde{a}_{1}}{\beta},\quad k_{2}>\frac{\tilde{a}_{3}}{\beta}. \tag{40}\]
Furthermore, we describe the redundant nonlinear term \(F(x,X(t))\in\mathbb{R}\) in (36), arising from the moving boundary, as \(F(x,X(t))=\left(\phi^{\prime}(x-l(t))^{T}-k(x,l(t))C^{T}\right)X(t)\).
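As a quick numerical sanity check (not part of the design itself), one can verify that gains satisfying (40) indeed render \(A_{1}+BK^{\top}\) Hurwitz. The sketch below assumes the constants of Table I and the gains \(k_{1}\), \(k_{2}\) reported in Sec. VI.

```python
# Sketch: check that gains satisfying (40) make A_1 + B K^T Hurwitz,
# using the constants of Table I and the gains of Sec. VI (illustrative only).
import numpy as np

D, a, g = 10e-12, 1e-8, 5e-7
l_c, c_inf, r_g, r_g_tilde = 4e-6, 0.0119, 1.783e-5, 0.053

beta = D / l_c
a1_tilde = (a - r_g * c_inf) / l_c - g - r_g_tilde
a3_tilde = (a**2 + D * g - a * g * l_c) / D**2

A1 = np.array([[a1_tilde, a3_tilde],
               [r_g,      0.0]])
B = np.array([[-beta], [0.0]])

k1, k2 = -0.001, 4e13                        # gains reported in Sec. VI
print("condition (40):", k1 > a1_tilde / beta, k2 > a3_tilde / beta)

K = np.array([[k1, k2]])
eigs = np.linalg.eigvals(A1 + B @ K)
print("eigenvalues of A_1 + B K^T:", eigs)   # both real parts negative -> Hurwitz
```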
### _Control law_
The continuous-time control law is obtained based on the boundary condition (37) of the target system at \(x=0\), utilizing the gain kernel solutions as detailed in [4].
\[\phi(x)^{\top}= \left[H^{\top}\quad K^{\top}-\frac{1}{D}H^{\top}BH^{\top}\right] e^{N_{1}x}\begin{bmatrix}I\\ 0\end{bmatrix}, \tag{41}\] \[k(x,y)= -\frac{1}{D}\phi(x-y)^{\top}B, \tag{42}\]
where \(N_{1}\) is defined in equation (37) in [4]. Substituting \(x=0\) into the transformation (34), we have the control law as
\[U(t)=\int_{0}^{l(t)}k(0,y)u(y,t)dy+\phi(-l(t))X(t), \tag{43}\]
It is worth noting that the solutions of the inverse gain kernels \(q(x,y)\) and \(\varphi(x)\) can be found in [4]. This invertibility of the backstepping transformation is essential for demonstrating the stability of the \((u,X)\)-system.
### _Sample-based control law_
We aim to stabilize the closed-loop system (1)-(5) by using sampling for the continuous-time controller defined in (43) with the increasing sequence of time \((t_{j})_{j\in\mathbb{N}}\). Thus, the control input is given by
\[U(t_{j})= \int_{0}^{l(t_{j})}k(0,x)u(x,t_{j})dx+\phi(-l(t_{j}))X(t_{j}). \tag{44}\]
which means that the boundary condition (14) is modified as
\[u(0,t)=U(t_{j}). \tag{45}\]
Now, the reference error system can be written as
\[u_{t}(x,t)= Du_{xx}(x,t)-au_{x}(x,t)-gu(x,t), \tag{46}\] \[u(0,t)= U(t_{j}),\] (47) \[u(l(t),t)= h(X(t)),\] (48) \[\dot{X}(t)= AX(t)+f(X(t))+Bu_{x}(l(t),t). \tag{49}\]
To establish stability results, we transform the reference error system in (46)-(49) to the target system using the transformation in (34). Thus, the target system is
\[w_{t}(x,t)= Dw_{xx}(x,t)-aw_{x}(x,t)-gw(x,t)\] \[-\dot{l}(t)\left(k(x,l(t))u(l(t),t)-\phi^{\prime}(x-l(t))^{T}X(t)\right)\] \[-\phi(x-l(t))^{\top}f(X(t))\] \[-\left(\phi^{\prime}(x-l(t))^{\top}B+\frac{a}{D}\phi(x-l(t))^{ \top}B\right)h^{*}(X), \tag{50}\] \[w(0,t)=d(t),\] (51) \[w(l(t),t)=h^{*}(X(t)),\] (52) \[\dot{X}(t)=(A+BK)X(t)+f(X(t))+Bw_{x}(l(t),t), \tag{53}\]
where
\[h^{*}(X(t))=\left(z_{1}(t)+\tilde{h}(z_{2}(t))\right)-H^{\top}X(t). \tag{54}\]
and the error between continuous-time control law in (43) and sample-based control law in (44) is defined as
\[d(t)=U(t)-U(t_{j}). \tag{55}\]
## IV Event-triggered based boundary control
In this section, we introduce the event-triggered state-feedback control approach, deriving sampling times for our control law to obtain the event-triggering mechanism.
**Definition 1**.: The design parameters are \(\gamma>0\), \(\eta>0\), \(\rho>0\) and \(\beta_{i}>0\) where \(i\in\{1,...5\}\). The event-based controller consists of two trigger mechanisms:
1. The event-trigger: The set of all event times are in increasing sequence and they are denoted as \(I=\{t_{0},t_{1},...\}\) where \(t_{0}=0\) with the following rule * If \(S(t,t_{j})=\emptyset\), then the set of the times of the events is \(\{t_{0},...,t_{j}\}\). * If \(S(t,t_{j})\neq\emptyset\), the next event time is \(t_{j+1}=\inf\left(S(t,t_{j})\right)\) where \[S(t,t_{j})=\{t\in\mathbb{R}_{+}|t>t_{j}\wedge d^{2}(t)>-\gamma m(t)\}\] (56) for all \(t\in[t_{j},t_{j+1})\), \(d(t)\) is given by (55) and \(m(t)\) satisfies the ODE \[\dot{m}(t)= -\eta m(t)+\rho d(t)^{2}-\beta_{1}X(t)^{2}-\beta_{2}X(t)^{4}\] \[-\beta_{3}|w_{x}(0,t)|^{2}-\beta_{4}||w(x,t))||^{2}\] \[-\beta_{5}|w_{x}(l(t),t)|^{2}.\] (57)
2. The control action: The feedback control law that is derived in (44) for all \(t\in[t_{j},t_{j+1})\) where \(j\in\mathbb{N}\).
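A schematic implementation of this triggering rule inside a sampled simulation loop is sketched below. The routines `continuous_control`, `norm_terms`, and `step_plant` are placeholders that must be supplied by a full PDE/ODE solver; only the triggering logic itself follows Definition 1, and the numerical values are those quoted in Table I and Sec. VI.

```python
# Sketch of the triggering rule of Definition 1 inside a sampled simulation loop.
import numpy as np

gamma, eta, rho = 1e4, 100.0, 4e22
beta = np.array([1.634e22, 5.229e12, 6.569e-14, 2.614e13, 2.94e-12])
dt = 1e-3   # integration step for the ODE (57); illustrative

def continuous_control(state):
    """U(t) from (43), built from the backstepping kernels. Placeholder."""
    raise NotImplementedError

def norm_terms(state):
    """Return [X^2, X^4, |w_x(0,t)|^2, ||w||^2, |w_x(l(t),t)|^2]. Placeholder."""
    raise NotImplementedError

def step_plant(state, u_held, dt):
    """Advance the plant (1)-(5) by dt with the held boundary input. Placeholder."""
    raise NotImplementedError

def simulate(state, t_end, m0=-0.5):
    """Run the event-triggered closed loop and return the triggering times."""
    m, t = m0, 0.0
    u_held = continuous_control(state)             # control sampled at t_0 = 0
    events = [0.0]
    while t < t_end:
        state = step_plant(state, u_held, dt)
        t += dt
        d = continuous_control(state) - u_held     # deviation d(t), Eq. (55)
        m += dt * (-eta * m + rho * d**2
                   - beta @ np.asarray(norm_terms(state)))   # Euler step of (57)
        if d**2 > -gamma * m:                      # triggering condition (56)
            u_held = continuous_control(state)     # resample the control law (44)
            events.append(t)
    return events
```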
**Lemma 1**.: _Under the definition of the state feedback event-triggered boundary control, it holds that \(d^{2}(t)\leq-\gamma m(t)\) and \(m(t)<0\) for \(t\in[0,F)\), where \(F=\sup(I)\)._
Proof.: From the definition of the event-trigger approach, it is guaranteed that \(d^{2}(t)\leq-\gamma m(t)\) for \(t\in[0,F)\). It yields
\[\dot{m}(t) \leq-(\eta+\gamma\rho)m(t)-\beta_{1}X^{2}(t)-\beta_{2}X^{4}(t)\] \[-\beta_{3}w_{x}(0,t)^{2}-\beta_{4}||w(x,t)||^{2}-\beta_{5}w_{x}(l (t),t)^{2} \tag{58}\]
for \(t\in(t_{j},t_{j+1})\), \(j\in\mathbb{N}\). Considering time continuity of \(m(t)\), we can obtain
\[m(t)\leq m(t_{j})e^{-(\eta+\gamma\rho)(t-t_{j})}\] \[-\int_{t_{j}}^{t}e^{-(\eta+\gamma\rho)(t-\tau)}\left(\beta_{1}X(\tau)^{2}+\beta_{2}X(\tau)^{4}\right)d\tau\] \[-\int_{t_{j}}^{t}e^{-(\eta+\gamma\rho)(t-\tau)}\left(\beta_{3}|w_{x}(0,\tau)|^{2}+\beta_{5}|w_{x}(l(\tau),\tau)|^{2}\right)d\tau\] \[-\int_{t_{j}}^{t}e^{-(\eta+\gamma\rho)(t-\tau)}\beta_{4}||w(\cdot,\tau)||^{2}d\tau \tag{59}\]
From the event-trigger mechanism definition, we have that \(m(t_{0})=m(0)<0\). Therefore, the estimate of \(m(t)\) in (59) ensures that \(m(t)<0\) for all \(t\in[0,t_{1}]\). This argument can be repeated on every subsequent interval, which shows that \(m(t)<0\) for \(t\in[0,F)\).
**Lemma 2**.: _For \(d(t)\), it holds that_
\[(\dot{d}(t))^{2}\leq \rho_{1}d^{2}(t)+\alpha_{1}X(t)^{2}+\alpha_{2}X(t)^{4}+\alpha_{3} w_{x}(0,t)^{2}\] \[+\alpha_{4}||w(x,t)||^{2}+\alpha_{5}w_{x}(l(t),t)^{2} \tag{60}\]
_for some positive constants \(\rho_{1},\ \alpha_{1},\ \alpha_{2},\ \alpha_{3},\ \alpha_{4}\) and \(\alpha_{5}\) for all \(t\in(t_{j},t_{j+1})\), \(j\in\mathbb{N}\)._
Proof.: By taking the time derivative of (55) and (44), along with the system (46)-(49) we get
\[\dot{d}(t)= \left(\dot{\phi}(0)B+\frac{1}{D}H^{\top}B\right)d(t)+H^{\top}Bu_{ x}(0,t)\] \[+(Dk(0,l(t))+\phi(l(t))B)\,u_{x}(l(t),t)\] \[+\int_{0}^{l(t)}\left(Dk_{yy}(0,y)+ak_{y}(0,y)\right)u(y,t)dy\] \[-\int_{0}^{l(t)}\left(g+\dot{\phi}(0)B+\frac{1}{D}H^{\top}B\right) k(0,y)u(y,t)dy\] \[-\dot{l}(t)\dot{\phi}(-l(t))X(t)+\phi(-l(t))f(X(t))\] \[+\left(\phi(-l(t))^{\top}B-ak(0,l(t))\right)u(l(t),t)\] \[-\left(\dot{\phi}(0)B+\frac{1}{D}H^{\top}B-A\right)\phi(-l(t))X(t)\] \[+\dot{l}(t)h(X(t))k(0,l(t)) \tag{61}\]
By using the inverse backstepping transformation (35) together with Young's and Cauchy-Schwarz's inequalities, one can show
\[||u||^{2}\leq \left(\frac{3}{2}+\frac{3}{2}\left(\int_{0}^{l(t)}\int_{x}^{l(t)} q(x,y)^{2}dydx\right)^{1/2}\right)^{2}||w||^{2}\] \[+\frac{3}{2}\left(\int_{0}^{l(t)}\varphi(x-l(t))^{\top}dx\right)^ {2}X(t)^{2} \tag{62}\]
Applying the same procedure, we can also demonstrate that
\[u(l(t),t)^{2}\leq 2w(l(t),t)^{2}+2(\varphi(0)^{\top})^{2}X(t)^{2}, \tag{63}\] \[u_{x}(0,t)^{2}\leq 4w_{x}(0,t)^{2}+4q(0,0)^{2}w(0,t)^{2}\] \[+4\left(\int_{0}^{l(t)}q_{x}(0,y)^{2}dy\right)||w(y,t)||^{2}\] \[+4(\varphi(-l(t))^{\top})^{2}X(t)^{2},\] (64) \[u_{x}(l(t),t)^{2}\leq 4w_{x}(l(t),t)^{2}+4(\varphi(0)^{\top})^{2}X(t)^{2}\] \[+4q(l(t),l(t))^{2}w(l(t),t)^{2} \tag{65}\]
The nonlinear terms can be shown to satisfy the following inequalities
\[|h^{*}(X)| \leq 2k_{n}X^{\top}X, \tag{66}\] \[f(X(t))\leq\kappa X^{\top}X+2k_{m}|X^{\top}X|^{3/2}, \tag{67}\]
where
\[k_{n} =\max\{c_{\infty}K_{+}\lambda_{+}^{2},c_{\infty}K_{-}\lambda_{-}^{ 2}\} \tag{68}\] \[k_{m} =\max\{c_{\infty}K_{+}\lambda_{+}^{3},c_{\infty}K_{-}\lambda_{-}^{ 3}\} \tag{69}\]
by utilizing \(-e^{x}+x+1\leq x^{2}\) for \(x\leq 1.79\). Then, using Young's and Cauchy-Schwarz's inequalities, one can obtain
\[\dot{d}(t)^{2}\leq \rho_{1}d(t)^{2}+\alpha_{1}X(t)^{2}+\alpha_{2}X(t)^{4}+\alpha_{3} w_{x}(0,t)^{2}\] \[+\alpha_{4}||w(x,t)||^{2}+\alpha_{5}w_{x}(l(t),t)^{2}\]
where
\[\rho_{1}=8\dot{\phi}(0)B+\frac{1}{D}H^{\top}B, \tag{70}\]
\[\alpha_{1} =8\left(\left(\dot{\phi}(0)B+\frac{1}{D}H^{\top}B-A\right)\phi(-l(t) )\right)^{2}\] \[+32(C^{\top}B)^{2}(\varphi(-l(t))^{\top})^{2}\] \[+32\left((Dk(0,l(t))+\phi(l(t))B)\right)^{2}(\varphi(0)^{\top})^{2}\] \[+64\left(\phi(-l(t))^{\top}B-ak(0,l(t))\right)^{2}(\varphi(0)^{ \top})^{2}\] \[+12\left(\int_{0}^{l(t)}\zeta(y)^{2}dy\right)\left(\int_{0}^{l(t) }\varphi(x-l(t))^{\top}dx\right)^{2}, \tag{71}\] \[\alpha_{2} =8\left(\kappa^{2}\phi(-l(t))^{2}+\left(r_{8}e_{1}\dot{\phi}(-l( t))^{\top}\right)^{2}\right)\] \[+16\left(\phi(-l(t))^{\top}B-ak(0,l(t))\right)^{2}\] \[+124k_{n}^{2}\left(Dk(0,l(t))+\phi(l(t))B\right)^{2}q(l(t),l(t)) ^{2}\] (72) \[\alpha_{3} =32(C^{\top}B)^{2},\] (73) \[\alpha_{4} =18\int_{0}^{l(t)}\zeta(y)^{2}dy\] \[\times\left(1+\left(\int_{0}^{l(t)}\int_{x}^{l(t)}q(x,y)^{2}dydx \right)^{1/2}\right)^{2}\] \[+32|C^{\top}B)|^{2}\int_{0}^{l(t)}q_{x}(0,y)^{2}dy,\] (74) \[\alpha_{5} =32\left((Dk(0,l(t))+\phi(l(t))B)\right)^{2} \tag{75}\]
where
\[\zeta(y)= Dk_{yy}(0,y)+ak_{y}(0,y)-\left(g+\dot{\phi}(0)B+\frac{1}{D}H^{\top}B \right)k(0,y) \tag{76}\]
## V Main Results
In this section, we present the analysis for the avoidance of Zeno behavior and closed-loop system stability.
### _Avoidance of Zeno Behavior_
The event-triggering mechanism dictates when to sample the continuous-time control signal, reducing computational and communication complexity. However, defining these sampling times is challenging due to the potential for Zeno behavior, where specific instances may result in infinite triggering within finite time intervals. This limitation restricts the mechanism's applicability. To address this, we prove the existence of a minimum dwell-time in the following theorem.
**Theorem 1**.: _Consider the closed-loop system of (1)-(5) incorporating the control law given by (44) and the triggering mechanism in Definition 1. There exists a minimum dwell-time denoted as \(\tau\) between two consecutive triggering times \(t_{j}\) and \(t_{j+1}\), satisfying \(t_{j+1}-t_{j}\geq\tau\) for all \(j\in\mathbb{N}\) when \(\beta_{i}\) is selected as follows:_
\[\beta_{i}=\frac{\alpha_{i}}{\gamma(1-\sigma)} \tag{77}\]
_where \(\sigma\in(0,1)\), \(i=\{1,...,5\}\) and the values of \(\alpha_{i}\) are provided in equations (71)-(75)._
Proof.: By using Lemma 1, we define the continuous function \(\psi(t)\) in \([t_{j},t_{j+1})\) to derive the lower bound between inter-execution times as follows:
\[\psi(t):=\frac{d^{2}(t)+\gamma(1-\sigma)m(t)}{-\gamma\sigma m(t)} \tag{78}\]
As described in [11], one can show that
\[\dot{m}(t)= -\eta m(t)+\rho d(t)^{2}-\beta_{1}X(t)^{2}-\beta_{2}X(t)^{4}\] \[-\beta_{3}|w_{x}(0,t)|^{2}-\beta_{4}||w||^{2}-\beta_{5}|w_{x}(l( t),t)|^{2} \tag{79}\]
Then, one can obtain the estimate in (59). Now, taking the time derivative of (78) and using (59), we can choose \(\beta_{i}\) as described in (77). Thus, we get \(\dot{\psi}(t)\leq a_{1}\psi(t)^{2}+a_{2}\psi(t)+a_{3}\), where
\[a_{1} =\rho\sigma\gamma>0, \tag{80}\] \[a_{2} =1+2\rho_{1}+(1-\sigma)\rho+\eta>0,\] (81) \[a_{3} =(1+\rho_{1}+\gamma(1-\sigma)\rho+\eta)\frac{1-\sigma}{\sigma}>0. \tag{82}\]
Using the comparison principle and the argument in [11], one can prove that there exists a minimum dwell-time \(\tau\) as follows:
\[\tau=\int_{0}^{1}\frac{1}{a_{1}s^{2}+a_{2}s+a_{3}}ds \tag{83}\]
which completes the proof.
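For illustration, the dwell-time bound (83) is straightforward to evaluate numerically. In the sketch below the constants \(a_{1}\), \(a_{2}\), \(a_{3}\) are computed from (80)-(82) for arbitrary illustrative design-parameter values, not the ones used later in the simulations.

```python
# Sketch: numerical evaluation of the minimum dwell-time bound (83).
# The design-parameter values below are purely illustrative.
from scipy.integrate import quad

gamma, eta, rho, sigma, rho_1 = 1.0, 1.0, 1.0, 0.5, 1.0   # illustrative only

a1 = rho * sigma * gamma                                   # Eq. (80)
a2 = 1 + 2 * rho_1 + (1 - sigma) * rho + eta               # Eq. (81)
a3 = (1 + rho_1 + gamma * (1 - sigma) * rho + eta) * (1 - sigma) / sigma  # Eq. (82)

tau, _ = quad(lambda s: 1.0 / (a1 * s**2 + a2 * s + a3), 0.0, 1.0)
print(f"minimum dwell-time tau = {tau:.4f}")
```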
### _Stability Analysis_
In this section, we initially introduce the main theorem, which establishes stability.
**Theorem 2**.: _Consider the closed-loop system comprising the plant described by (1)-(5) along with the control law specified by (44) and employing an event-triggering mechanism that is defined in Definition 1. Let_
\[\gamma>\frac{16(\alpha_{3}+\alpha_{5})}{D(1-\sigma)},\ \rho=16Dd_{1}^{2}+\frac{ad_{1}}{2}+\frac{16g}{D}+\frac{16}{D}\rho_{1} \tag{84}\]
_and \(\eta>0\) be design parameters, with \(\sigma\in(0,1)\), while \(\beta_{i}\) for \(i\in\{1,\dots,5\}\) are chosen as in (77) with the \(\alpha_{i}\) given in (71)-(75). Then, there exist constants \(M>0\), \(c>0\) and \(\Gamma>0\) such that, if the initial condition satisfies \(Z(0)<M\), then the following norm estimate holds:_
\[Z(t)\leq cZ(0)exp(-\Gamma t), \tag{85}\]
_for all \(t\geq 0\), in \(\mathcal{H}_{1}\)-norm \(Z(t)=||u(.,t)||_{\mathcal{H}_{1}(0,l(t))}^{2}+X^{\top}X\) which establishes the local exponential stability of the origin of the closed-loop system._
To establish local stability on a non-constant spatial interval, we rely on two assumptions derived in [4], which are as follows:
\[0<l(t)\leq\bar{l},\quad|\dot{l}(t)|\leq\bar{v}, \tag{86}\]
for some \(\bar{l}>l_{\mathrm{s}}>0\) and \(\bar{v}>0\). Then, we consider the following Lyapunov functionals
\[V_{1}= \frac{1}{2}||w||^{2}:=\frac{1}{2}\int_{0}^{l(t)}w(x,t)^{2}dx, \tag{87}\] \[V_{2}= \frac{1}{2}||w_{x}||^{2}:=\frac{1}{2}\int_{0}^{l(t)}w_{x}(x,t)^{2}dx.\] (88) \[V_{3}= X(t)^{\top}PX(t), \tag{89}\]
where \(P>0\) is a positive definite matrix that satisfies the Lyapunov equation: \((A+BK^{\top})^{\top}P+P(A+BK^{\top})=-Q\)
for some positive definite matrix \(Q>0\). We define the total Lyapunov function as follows:
\[V(t)=d_{1}V_{1}(t)+V_{2}(t)+d_{2}V_{3}(t)-m(t), \tag{90}\]
where \(d_{1}>0\) and \(d_{2}>0\) are parameters to be determined.
**Lemma 3**.: _Assume that the conditions in (86) are satisfied with \(\bar{v}=\frac{D}{16(D+1)}\), for all \(t\geq 0\). Then, for sufficiently large \(d_{1}>0\) and sufficiently small \(d_{2}>0\), there exist positive constants \(\xi_{i}\) for \(i=\{1,2,3,4,5\}\) such that the following norm estimate holds for \(t\in(t_{j},t_{j+1})\), \(j\in\mathbb{N}\):_
\[\dot{V}\leq-\alpha^{*}V+\left(\sum_{i=1}^{5}\xi_{i}V^{(1+\frac{i}{2})}\right) \tag{91}\]
_where \(\alpha^{*}=\min\left\{g+\frac{d_{1}D}{2},\frac{d_{1}g}{4},\frac{d_{2}\lambda_{ \min}(Q)}{4},\eta\right\}\)._
Proof.: By taking the time derivative of the Lyapunov functionals (87)-(89) along the target system and substituting the boundary conditions for \(t\in(t_{j},t_{j+1})\), \(j\in\mathbb{N}\), and applying Poincaré's, Agmon's, and Young's inequalities, (84), along with (57) and (93)-(94), the expression for (90) can be transformed into:
\[\dot{V}\leq -\alpha^{*}V+\left(16Dd_{1}^{2}+\frac{ad_{1}}{2}+\frac{8g^{2}}{D} \right)h^{*}(X(t))^{2}\] \[+\frac{d_{1}}{2}\dot{l}(t)h^{*}(X(t))^{2}+\Xi_{1}(X^{\top}X)^{2}+ \Xi_{2}(X^{\top}X)^{3}\] \[+d_{1}\dot{l}(t)\int_{0}^{l(t)}F(x,X(t))w(x,t)dx\] \[+\frac{\big{|}\dot{l}(t)\big{|}}{2}F(l(t),X(t))^{2}+\frac{\big{|} \dot{l}(t)\big{|}}{2}F(0,X(t))^{2}\] \[+\dot{l}(t)\int_{0}^{l(t)}F_{x}(x,X(t))w_{x}(x,t)dx\] \[+d_{2}4k_{m}|P||X^{\top}X|^{5/2} \tag{92}\]
where we use the time derivatives of boundary conditions as
\[w_{t}(0,t) =\dot{d}(t), \tag{93}\] \[w_{t}(l(t),t) =\dot{X}(t)\dot{h}^{*}(X(t))-\dot{l}(t)w_{x}(l(t),t). \tag{94}\]
and the positive constants are
\[\int_{0}^{l(t)}\left(\phi(x-l(t))^{\top}\right)^{2}dx\leq L_{n_{ 2}}, \tag{95}\] \[\int_{0}^{l(t)}\left(\phi^{\prime}(x-l(t))^{\top}B+\frac{a}{D} \phi(x-l(t))^{\top}B\right)^{2}dx\leq L_{n_{3}}\] (96) \[\Xi_{1}= L_{n_{2}}\kappa^{2}\left(\frac{2}{D}+\frac{3d_{1}}{2g}\right)+L _{n_{3}}\left(\frac{8k_{n}^{2}}{D}+\frac{2d_{1}}{3g}\right)+\frac{D\alpha_{2} }{16\alpha_{5}}\] \[+\beta_{2}+2d_{2}\kappa|P|+\frac{3Dc_{\infty}^{2}r_{\rm g}^{2}}{1 6}(K_{+}^{2}\lambda_{+}^{4}+K_{-}^{2}\lambda_{-}^{4})\] (97) \[\Xi_{2}= \frac{2}{D}L_{n_{2}}ak_{m}^{2}|P|^{2}+d_{1}\frac{1}{2g}L_{n_{2}} 4k_{m}^{2}|P|^{2}\] \[+\frac{3D}{32}\left(c_{\infty}r_{\rm g}(K_{+}\lambda_{+}^{3}+K_{ -}\lambda_{-}^{3})\right)^{2} \tag{98}\]
We choose the constants \(d_{1}\) and \(d_{2}\) to satisfy
\[d_{1}\geq\frac{4a^{2}}{D^{2}}+\frac{1+\bar{l}}{\bar{l}}+\frac{ 4\beta_{4}}{g}+\frac{D\alpha_{4}}{4g\alpha_{5}},\ d_{2}<\frac{D\lambda_{\min}( Q)}{4|B^{\top}P|^{2}}. \tag{99}\]
The surplus nonlinear terms in (92) can be bounded by quadratic norms of the ODE state as follows. Specifically, there exist positive constants \(L_{1},\ L_{2},\ L_{3}\), and \(L_{4}\) such that \(F(0,X(t))^{2}\leq L_{1}X^{\top}X\), \(F(l(t),X(t))^{2}\leq L_{2}X^{\top}X\), \(\int_{0}^{l(t)}F_{x}(x,X(t))^{2}dx\leq L_{3}X^{\top}X\), and \(\int_{0}^{l(t)}F(x,X(t))^{2}dx\leq L_{4}X^{\top}X\). Furthermore, using inequality (66), taking into account \(\dot{l}(t)=r_{\rm g}e_{1}^{\top}X\), and applying Poincaré's and Young's inequalities, we can derive:
\[\dot{V}\leq-\alpha^{*}V+\xi_{1}V^{3/2}+\xi_{2}V^{2}+\xi_{3}V^{5/2}+ \xi_{4}V^{3}+\xi_{5}V^{7/2} \tag{100}\]
where
\[\xi_{1} =\frac{r_{\rm g}}{d_{2}\lambda_{min}(P)^{1/2}}\max\left\{2,\frac{ L_{1}+L_{2}+L_{3}+d_{1}L_{4}}{d_{2}\lambda_{min}(P)}\right\} \tag{101}\] \[\xi_{2} =\frac{1}{d_{2}\lambda_{\min}(P)^{2}}\left(\Xi_{1}+\kappa^{2} \left(16Dd_{1}^{2}+\frac{ad_{1}}{2}+\frac{8g^{2}}{D}\right)\right)\] (102) \[\xi_{3} =\frac{d_{1}r_{\rm g}\kappa^{2}+8d_{2}k_{m}|P|}{2d_{2}\lambda_{ \min}(P)^{5/2}}\] (103) \[\xi_{4} =\frac{\left(16Dd_{1}^{2}+\frac{ad_{1}}{2}+\frac{8g^{2}}{D} \right)4k_{m}^{2}}{d_{2}\lambda_{\min}(P)^{3}}+\frac{\Xi_{2}}{d_{2}\lambda_{ \min}(P)^{3}}\] (104) \[\xi_{5} =\frac{d_{1}r_{\rm g}4k_{m}^{2}}{2d_{2}\lambda_{\min}(P)^{(7/2)}} \tag{105}\]
which completes the proof of Lemma 3.
In what follows, we establish the local stability of the closed-loop system under the event-triggering mechanism.
**Lemma 4**.: _In the region \(\Omega_{1}:=\{(w,X)\in\mathcal{H}_{1}\times\mathbb{R}^{2}|V(t)<M_{0}\}\) where \(t\in(t_{j},t_{j+1})\)\(j\in\mathbb{N}\), there exists a positive constant \(M_{0}>0\) such that the conditions in (86) are satisfied._
Proof.: See the proof of Lemma 2 in [4].
From the proof of Lemma 4, we have \(M_{0}=\frac{\lambda_{\min}(P)}{d_{2}}r^{2}\) for \(t\in(t_{j},t_{j+1})\), \(j\in\mathbb{N}\). Next, we analyze stability within the time interval \(t\in(t_{j},t_{j+1})\) for \(j\in\mathbb{N}\), and subsequently for all \(t\geq 0\). Within this interval, we establish the following lemma:
**Lemma 5**.: _There exists a positive constant \(M_{j}\) such that if \(V(t_{j})<M_{j}\) then the following norm estimate holds for \(t\in(t_{j},t_{j+1})\), where \(j\in\mathbb{N}\):_
\[V(t_{j+1})\leq V(t_{j})e^{-\frac{\alpha^{*}}{2}(t_{j+1}-t_{j})} \tag{106}\]
Proof.: For \(M_{j}>0\), we easily demonstrate that \(M_{j}<M_{0}\) using Lemma 4, ensuring the norm estimate from Lemma 3 holds. Thus, we set \(M_{j}\leq p^{*}\), where \(p^{*}\) is a non-zero root of the polynomial for \(V>0\).
\[-\alpha^{*}V+\xi_{1}V^{3/2}+\xi_{2}V^{2}+\xi_{3}V^{5/2}+\xi_{4}V^{3}+\xi_{5}V^ {7/2}=0 \tag{107}\]
Since \(\alpha^{*}\), and \(\xi_{i}\) are all positive, at least one positive root exists for the polynomial in (107). Therefore, (100) implies
\[\dot{V}(t)\leq-\frac{\alpha^{*}}{2}V(t) \tag{108}\]
for \(t\in(t_{j},t_{j+1})\), \(j\in\mathbb{N}\), where \(M_{j}=\min\left\{M_{0},p^{*}\right\}\). The Lyapunov function \(V(t)\) is continuous in time, so that \(V(t_{j}^{-})=V(t_{j})\)
and \(V(t_{j}^{+})=V(t_{j})\), where \(t_{j}^{+}\) and \(t_{j}^{-}\) are the right and left limits at \(t=t_{j}\), respectively. Thus, we have
\[V(t_{j+1})\leq\exp(-\alpha^{*}(t_{j+1}-t_{j}))V(t_{j}) \tag{109}\]
which completes the proof of Lemma 5.
For any \(t\geq 0\) in \(t\in[t_{j},t_{j+1})\), \(j\in\mathbb{N}\), we obtain
\[V(t)\leq e^{-\alpha^{*}(t-t_{j})}V(t_{j})\leq e^{-\alpha^{*}t}V(0) \tag{110}\]
Recalling \(m(t)<0\) and the definition of \(V\) in (90), we have
\[d_{1}V_{1}(t)+V_{2}(t)+d_{2}V_{3}(t)\leq e^{-\alpha^{*}t}V(0) \tag{111}\]
which means that
\[d_{1}\frac{1}{2}\|w\|^{2}+\frac{1}{2}\|w_{x}\|^{2}+d_{2}X(t)^{ \top}PX(t)\] \[\leq e^{-\alpha^{*}t}\left(\frac{d_{1}}{2}\|w(0)\|^{2}+\frac{1}{2}\|w_{ x}(0)\|^{2}+d_{2}X(0)^{\top}PX(0)-m(0)\right)\]
Therefore, we can derive the following norm estimate:
\[\|w\|^{2}+\|w_{x}\|^{2}+X(t)^{\top}X(t)\] \[\leq e^{-\alpha^{*}t}\sqrt{\frac{\frac{d_{1}}{2}\|w(0)\|^{2}+\frac{1}{2 }\|w_{0}(0)\|^{2}+d_{2}X(0)^{\top}PX(0)-m(0)}{\min\left\{\frac{d_{1}}{2},\frac{ 1}{2},\frac{d_{2}}{\lambda_{\max}(P)}\right\}}}.\]
This confirms the local exponential stability of the system (50)-(53) in the \(\mathcal{H}_{1}\)-norm. By exploiting the backstepping transformation's invertibility and norm equivalence, we also establish the local exponential stability of the original system (1)-(5) in the \(\mathcal{H}_{1}\) norm, concluding the proof of Theorem 2.
## VI Numerical Simulations
In this section, we conduct a numerical analysis of the plant dynamics (equations (1) to (5)), utilizing the control law defined by (44) and incorporating the event-triggering mechanism defined in Definition 1. The model employs the biological constants and control parameters from Table I, with initial conditions set to \(c_{0}(x)=2c_{\infty}\) for the tubulin concentration along the axon and \(l_{0}=1\mu m\) for the initial axon length. The control gain parameters are chosen as \(k_{1}=-0.001\) and \(k_{2}=4\times 10^{13}\). The event-triggering mechanism parameters are set as follows: \(m(0)=-0.5\), \(\beta_{1}=1.634\times 10^{22}\), \(\beta_{2}=5.229\times 10^{12}\), \(\beta_{3}=6.569\times 10^{-14}\), \(\beta_{4}=2.614\times 10^{13}\), \(\beta_{5}=2.94\times 10^{-12}\), \(\rho=4\times 10^{22}\), \(\eta=100\) and \(\sigma=0.5\). In panels (a) and (b), we present the evolution of tubulin concentration along the axon for both the continuous-time control law and the event-triggered control law.
Fig. 2 shows axon growth convergence under the continuous-time and event-triggered control laws. Both methods achieve the desired \(12\mu m\) length from an initial \(1\mu m\) in about 3.5 minutes. In panel (a), we compare ETC control inputs for \(\eta=1\) and \(\eta=1000\) with \(\sigma=0.5\), showing similar convergence rates. Notably, the ETC controller with \(\eta=1000\) samples faster. In panel (b), fixing \(\eta\) at \(100\) and varying \(\sigma\) to \(0.8\) results in faster and more frequent sampling, leading to quicker convergence compared to \(\sigma=0.1\).
## VII Conclusion
This paper explores a dynamic event-triggering boundary control approach for axonal growth modeling. It addresses the avoidance of Zeno behavior and offers a local stability analysis of the closed-loop system. Future research will focus on investigating periodic event-triggering and self-triggering boundary control methods, which are more suitable for digital implementations.
\begin{table}
\begin{tabular}{c|c|c|c} \hline Parameter & Value & Parameter & Value \\ \hline \(D\) & \(10\times 10^{-12}\,m^{2}/s\) & \(\tilde{r}_{\rm g}\) & \(0.053\) \\ \(a\) & \(1\times 10^{-8}\,m/s\) & \(\gamma\) & \(10^{4}\) \\ \(g\) & \(5\times 10^{-7}\,s^{-1}\) & \(l_{c}\) & \(4\mu m\) \\ \(r_{\rm g}\) & \(1.783\times 10^{-5}\,m^{4}/(mol\,s)\) & \(l_{s}\) & \(12\mu m\) \\ \(c_{\infty}\) & \(0.0119\,mol/m^{3}\) & \(l_{0}\) & \(1\mu m\) \\ \hline \end{tabular}
\end{table} TABLE I: Biological constants and control parameters
Fig. 3: The closed-loop response of the continuous-time and event-triggered control law for \(l_{s}=12\mu m\)
Fig. 2: The closed-loop response of the designed full-state feedback control system for continuous-time and event-triggered control law. |
2309.15339 | Detecting quantum phase transitions in a frustrated spin chain via
transfer learning of a quantum classifier algorithm | The classification of phases and the detection of phase transitions are
central and challenging tasks in diverse fields. Within physics, it relies on
the identification of order parameters and the analysis of singularities in the
free energy and its derivatives. Here, we propose an alternative framework to
identify quantum phase transitions. Using the axial next-nearest neighbor Ising
(ANNNI) model as a benchmark, we show how machine learning can detect three
phases (ferromagnetic, paramagnetic, and a cluster of the antiphase with the
floating phase). Employing supervised learning, we demonstrate the feasibility
of transfer learning. Specifically, a machine trained only with
nearest-neighbor interactions can learn to identify a new type of phase
occurring when next-nearest-neighbor interactions are introduced. We also
compare the performance of common classical machine learning methods with a
version of the quantum nearest neighbors (QNN) algorithm. | André J. Ferreira-Martins, Leandro Silva, Alberto Palhares, Rodrigo Pereira, Diogo O. Soares-Pinto, Rafael Chaves, Askery Canabarro | 2023-09-27T01:11:11Z | http://arxiv.org/abs/2309.15339v1 | Detecting quantum phase transitions in a frustrated spin chain via transfer learning of a quantum classifier algorithm
###### Abstract
The classification of phases and the detection of phase transitions are central and challenging tasks in diverse fields. Within physics, it relies on the identification of order parameters and the analysis of singularities in the free energy and its derivatives. Here, we propose an alternative framework to identify quantum phase transitions. Using the axial next-nearest neighbor Ising (ANNNI) model as a benchmark, we show how machine learning can detect three phases (ferromagnetic, paramagnetic, and a cluster of the antiphase with the floating phase). Employing supervised learning, we demonstrate the feasibility of transfer learning. Specifically, a machine trained only with nearest-neighbor interactions can learn to identify a new type of phase occurring when next-nearest-neighbor interactions are introduced. We also compare the performance of common classical machine learning methods with a version of the quantum nearest neighbors (QNN) algorithm.
## I Introduction
Machine Learning (ML) has proven its efficiency and success in many scientific as well as business sectors [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. In essence, we can teach computers to see patterns by progressively exposing them to quality inputs, which is crucial for data-driven solutions given the gigantic and ever-increasing amount of raw data. Within any branch of ML, substantial improvements in state-of-the-art solutions are strongly related to algorithmic and hardware advances. And, although we still need to be careful about setting long-term expectations, recent breakthroughs in the current noisy intermediate-scale quantum (NISQ) era [20; 21; 22; 23] put quantum computing among the most promising directions towards significant progress in machine learning. Within this context, there has been a number of different approaches seeking for quantum advantages in machine learning ranging from the quantum analog of neural networks [24], routines for supervised or unsupervised learning [25; 26], quantum reinforcement learning [27] and quantum pattern recognition [28] (see references [29; 30; 31] for a detailed review).
A field where machine learning has been particularly successful is that of quantum matter and quantum information. Classical machine learning techniques were used to perform quantum state tomography [12], to approximate the ground state of many Hamiltonians of interest [11], for the description of causal networks [32; 33; 34] and finding violations of Bell inequalities [17; 35], among many other applications. Importantly, such classical methods have also been proven capable of tackling a central topic in many-body physics, that of classifying phase transitions, a thorny goal especially due to the exponential increase of Hilbert space describing quantum systems. Apart from simple transitions, witnessed by non-analyticities in order parameters, more general quantum phase transitions require large lattice sizes, a costly computational task for which a variety of classical machine learning techniques provide alternative and reliable approaches [14; 15; 16; 36]. It seems thus natural to consider whether quantum machine learning can also identify phase transitions. Indeed, machine learning based on hybrid quantum-classical variational circuits has been shown to detect phase transitions in the simple Hamiltonians, such as the transverse field Ising and XXZ models [37; 38; 39; 40]. Our approach distinguishes itself significantly from others, primarily through the implementation of transfer learning using a quantum classifier algorithm. This algorithm is exclusively trained within a specific segment of the phase diagram while testing on the rest of the phase diagram. We also explore optimized data preprocessing for compatibility with real quantum hardware. This demonstrates the effectiveness of our technique, as discussed in detail herein.
Our aim in this paper is to show that the Quantum Nearest Neighbours (QNN) algorithm [41] also provides a well-founded tool for classifying quantum phase transitions. Moving beyond the
models previously considered, we benchmark the QNN performance by employing the axial next-nearest neighbor Ising (ANNNI) model [42; 43] used, for instance, to investigate the magnetic order in quasi-one-dimensional spin ladder materials [44], quantum melting of crystalline order in Rydberg atom systems [45], interactions between Majorana edge modes in arrays of Kitaev chains [46; 47], and quench dynamics and dynamical phase transitions [48; 49; 50]. The ANNNI is the simplest model combining the effects of quantum fluctuations and frustrated exchange interactions, a combination from which a rich ground state phase diagram arises [51; 52; 53; 54; 55; 56]. It thus provides an interesting challenge to the QNN algorithm capabilities.
Importantly, even though the raw input data is relatively modest in size (the full set of 198 pairwise correlation functions between the spins of a lattice with 12 sites, i.e., an input array of 198 real features), mapping each of these variables to just one bit would still require a large number of qubits. As better detailed ahead, to be implemented, the QNN algorithm that we use requires \(2n+2\) qubits for an \(n\)-sized input vector. In order to make the computational simulation as well as its implementation in a real quantum computer feasible, we first proceed with a pre-processing of the data, consisting of a feature selection followed by a discretization and a final one-hot encoding step. With that, we reduce the number of features to 4, which in turn requires 10 qubits to be analyzed by the quantum algorithm.
As we show, even after significantly reducing the input data, to make it compatible with quantum computational requirements, the QNN algorithm still allows for a successful learning of phase transitions. More precisely, we demonstrate the transfer learning in the ANNNI model, as by training the machine with nearest-neighbour interactions only, it also accurately predicts the phase transitions happening at regions including next-nearest-neighbor interactions. Interestingly, the QNN performs better than its classical counterpart, called K-Nearest Neighbors (KNN), when exposed to the same input data, thus providing a proof-of-principle example of a possible quantum advantage in accuracy.
The paper is organized as follows. In Sec. II we describe the ANNNI model. In Sec. III we provide a succinct but comprehensive overview of classification problems in machine learning, also describing the KNN and QNN algorithms. In Sec. IV we detail the data pre-processing required to make the problem amenable to be implemented in near-term quantum devices. In Sec. V we present our results regarding the learning of phase transitions in the ANNNI model. In Sec. VI we discuss our results and point out interesting directions for future research. Finally, in the Appendix we provide technical details about some of the classical machine learning techniques we have employed in the pre-processing of the data.
## II The ANNNI model
With the goal of analyzing the use of a quantum classifier algorithm to witness phase transitions in a quantum many-body system, we chose the axial next nearest-neighbor Ising (ANNNI) model. The reason stems from the fact that this model displays a non-trivial and rich phase diagram. As it happens, ANNNI is the simplest model combining quantum fluctuations and competing frustrated exchange interactions. The first is induced by the presence of a transverse field while the latter is due to the fact that even though the interaction is ferromagnetic for nearest neighbors, it becomes antiferromagnetic for next-nearest neighbors.
The Hamiltonian for the ANNNI model is given by [42; 43]
\[H=-J\sum_{j=1}^{N}\left(\sigma_{j}^{z}\sigma_{j+1}^{z}-\kappa\sigma_{j}^{z} \sigma_{j+2}^{z}+g\sigma_{j}^{x}\right), \tag{1}\]
where \(\sigma_{j}^{\kappa}\) (\(\alpha=x,y,z\)), are Pauli matrices acting on spin-1/2 degrees of freedom at site \(j\) of a one-dimensional lattice with \(N\) sites and periodic boundary conditions. The parameter \(J>0\) is a coupling constant that sets the energy scale of the problem (we set \(J=1\)) and is associated with the nearest-neighbor ferromagnetic exchange interaction. The dimensionless coupling constants \(\kappa\) and \(g\) are related to the next-nearest-neighbor interaction and the transverse magnetic field, respectively.
The groundstate phase diagram of the ANNNI model is well understood and known to contain four phases separated by three quantum phase transitions: ferromagnetic, antiphase, paramagnetic, and floating phase. In a nutshell, the ferromagnetic phase is characterized by a uniform spontaneous magnetization, with one of the ground states given by \(\uparrow\uparrow\uparrow\uparrow\uparrow\uparrow\uparrow\). In turn, the antiphase breaks the lattice translational symmetry and has long-range order with a four-site periodicity of the form \(\uparrow\uparrow\downarrow\downarrow\uparrow\uparrow\downarrow\downarrow\). Distinctively, the paramagnetic phase is disordered and has a unique ground state with spins pointing predominantly along the field direction. Finally, the floating phase is gapless with correlation functions decaying as a power-law for large distances, in contrast with the other phases that have a finite energy gap and exponentially decaying correlations.
For \(\kappa=0\), the transverse field Ising model is reproduced, exactly solvable via the mapping to non-interacting spinless fermions. Along the \(\kappa=0\) line,
a second-order phase transition occurs at \(g=1\), separating the ferromagnetic phase at \(g<1\) from the paramagnetic phase at \(g>1\). In particular, exactly at the critical point \(g=1\), the energy gap vanishes.
For \(g=0\), there is a transition between the ferromagnetic phase at small \(\kappa\) and the antiphase at large \(\kappa\) occurring at \(\kappa=1/2\). Notice that with \(g=0\) the model becomes classical, since all operators in the Hamiltonian commute with each other. At this classical transition point, any configuration that does not have three neighboring spins pointing in the same direction is a ground state, showing that the degeneracy of the ground state increases exponentially with the system size.
For \(g\neq 0\) and \(\kappa\neq 0\), the critical lines have to be determined numerically since the model is not integrable any longer. For \(0\leq\kappa\leq 1/2\), the Ising transition between paramagnetic and the ferromagnetic phases extends from the \(g=1\), \(\kappa=0\) until the degenerate point \(g=0\), \(\kappa=1/2\), a multicritical point at which several transition lines coincide. There are two other transition lines that start at the multicritical point and extend to the high-frustration regime \(\kappa>1/2\). For fixed \(g>0\) and increasing \(\kappa>1/2\), we first encounter a Berezinsky-Kosterlitz-Thouless (BKT) transition from the paramagnetic phase to the floating phase. Detecting the BKT transition is challenging because the correlation length diverges exponentially at the critical point. As we increase \(\kappa\) further, there is a commensurate-incommensurate (CIC) transition from the floating phase to the antiphase. Numerical density matrix renormalization group results for long spin chains [55] show that the floating phase occupies a rather narrow region in the phase diagram, which makes it hard to discern the BKT from the CIC transition for small system sizes.
Using perturbation theory in the regime \(\kappa<1/2\) [43] or by fitting numerical results [55] in the regime \(\kappa>1/2\), one can obtain approximate expressions for the transition lines. For instance, the critical value of \(g\) for the Ising transition for \(0\leq\kappa\leq 1/2\) is approximately given by [43]
\[g_{\rm I}(\kappa)\approx\frac{1-\kappa}{\kappa}\left(1-\sqrt{\frac{1-3\kappa+4 \kappa^{2}}{1-\kappa}}\right). \tag{2}\]
In turn, the critical value of \(g\) for the BKT transitions for \(1/2<\kappa\lesssim 3/2\) is approximated by [55]
\[g_{\rm BKT}(\kappa)\ \approx\ 1.05\sqrt{(\kappa-0.5)(\kappa-0.1)}. \tag{3}\]
We use these approximations to make benchmark comparisons to our heuristic results.
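Since (2) and (3) are simple closed-form expressions, they can be evaluated directly, for instance to overlay the approximate critical lines on a numerically obtained phase diagram. The sketch below is purely illustrative.

```python
# Sketch: evaluate the approximate transition lines (2) and (3) on a grid of
# kappa values (e.g. for plotting them over a phase diagram).
import numpy as np

def g_ising(kappa):
    """Ising line, Eq. (2), for 0 < kappa <= 1/2 (g -> 1 in the limit kappa -> 0)."""
    kappa = np.asarray(kappa, dtype=float)
    return (1 - kappa) / kappa * (1 - np.sqrt((1 - 3 * kappa + 4 * kappa**2) / (1 - kappa)))

def g_bkt(kappa):
    """BKT line, Eq. (3), fitted for 1/2 < kappa <~ 3/2."""
    kappa = np.asarray(kappa, dtype=float)
    return 1.05 * np.sqrt((kappa - 0.5) * (kappa - 0.1))

print(g_ising(np.array([0.1, 0.3, 0.5])))   # reaches 0 at the multicritical point
print(g_bkt(np.array([0.6, 1.0, 1.5])))
```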
### Our dataset
As will be discussed in more detail throughout the paper, we use the pairwise correlations among all spins in the lattice as the training data for the machine learning algorithms. Given \(N\) spins, we have a total of \(3\times\binom{N}{2}\) observables, i.e., three correlators (one per spin component) for each of the \(\binom{N}{2}\) pairs of sites. Thus, the features are given by \(\left\{\langle\sigma_{i}^{x}\sigma_{j}^{x}\rangle,\langle\sigma_{i}^{y} \sigma_{j}^{y}\rangle,\langle\sigma_{i}^{z}\sigma_{j}^{z}\rangle\right\}\) with \(j>i\) and \(i=1,\dots,N-1\), where \(\langle\sigma_{i}^{x}\sigma_{j}^{x}\rangle=\langle\lambda_{0}|\sigma_{i}^{x }\sigma_{j}^{x}|\lambda_{0}\rangle\) is the expectation value of the spin correlation for the Hamiltonian ground state \(|\lambda_{0}\rangle\) (and similarly for the other expectation values). In our case, we take \(N=12\), a manageable size for both computational and analytical evaluation of the ground state of the ANNNI Hamiltonian. This allows us to efficiently compute a set of \(3\times\binom{12}{2}=198\) pairwise expectation values, which will serve as the (raw) input features for the machine learning algorithm.
It is worth pointing out that, even if one only has access to short chains, the Ising transition can still be captured correctly [54]. However, detecting the BKT transitions using standard approaches requires computing observables for significantly longer chains [55]. Notwithstanding, as we will see below, even though our data regards a quite short chain \(N=12\), the machine learning algorithms, both classical and quantum, will be able to identify not only the Ising but also the antiphase and the paramagnetic phases, lumping the BKT and CIC transitions together.
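A minimal sketch of how such a feature set can be generated by exact diagonalization is given below. It is not the code used in this work, and a smaller chain (\(N=8\)) is used so that dense diagonalization remains cheap; for \(N=12\) one would typically switch to sparse methods.

```python
# Sketch: exact diagonalization of the ANNNI chain (1) with periodic boundary
# conditions and computation of the pairwise correlators used as features.
import numpy as np
from itertools import combinations

N = 8                                    # small chain for illustration; the paper uses N = 12
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def op(single, site):
    """Embed a single-site operator at `site` in the N-spin Hilbert space."""
    mats = [single if k == site else I2 for k in range(N)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def annni_hamiltonian(kappa, g, J=1.0):
    """Dense matrix of Eq. (1)."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for j in range(N):
        H -= J * (op(sz, j) @ op(sz, (j + 1) % N)
                  - kappa * op(sz, j) @ op(sz, (j + 2) % N)
                  + g * op(sx, j))
    return H

def correlation_features(kappa, g):
    """Return the 3 * C(N,2) pairwise correlators of the ground state."""
    _, vecs = np.linalg.eigh(annni_hamiltonian(kappa, g))
    gs = vecs[:, 0]                      # ground state (lowest eigenvalue)
    feats = []
    for i, j in combinations(range(N), 2):
        for s in (sx, sy, sz):
            feats.append(np.real(gs.conj() @ (op(s, i) @ op(s, j)) @ gs))
    return np.array(feats)

print(correlation_features(kappa=0.0, g=0.5).shape)  # (3 * N*(N-1)/2,) = (84,)
```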
## III The quantum nearest neighbors algorithm
The quantum nearest neighbors (QNN) [41] is a quantum classification algorithm that employs the Hamming distance as a distance criterion to compare the training dataset and unclassified observations. Schematically, it consists of three major steps:
* First, create a superposition of the training dataset and the input observation;
* Encode the Hamming distance between the input observation and each example in the training set into the amplitude of each state in the superposition;
* Measure the class-qudit retrieving the predicted class with the highest probability.
Before the actual quantum algorithm starts, an important classical pre-processing step (whose reason will become clear in what follows) must be performed:
the features in the training dataset are represented as bit vectors, so that the feature space becomes \(\mathcal{X}=\{0,1\}^{\otimes n}\). This is achieved via the procedure known as one-hot encoding, which produces the so-called dummy variables [57]. Naturally, such a representation will be discrete (binary, in fact), so that if any original feature is continuous (or even categorical with more than 2 levels), a prior step of discretization is necessary. Notice that the number of binary features after the encoding may be different from the number of original features, although here we represented both by the same number of bits \(n\). There are several ways to perform this binarization process. However, whatever method is chosen, it is important that the essence of the data topology is maintained -- that is, points that are close on the original feature space must remain close on the binarized feature space. In Sec. IV we detail the specifics of the particular procedure we applied to our problem.
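One possible realization of this discretization plus one-hot encoding step, here using scikit-learn on toy data, is sketched below. The actual feature selection and binning choices employed in this work are those described in Sec. IV and may differ from this illustration.

```python
# Sketch of a possible binarization step: discretize each (already selected)
# continuous feature into two bins and one-hot encode the result.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))    # 4 selected continuous features (toy data)

disc = KBinsDiscretizer(n_bins=2, encode="onehot-dense", strategy="uniform")
X_bits = disc.fit_transform(X_train).astype(int)   # shape (100, 8): two dummies per feature

print(X_bits[:3])
```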
Once the training dataset features are binarized, their representation as quantum states is immediate via the basis encoding [58], which accounts for a direct mapping of binary features to the quantum computational-basis states: \(0\mapsto|0\rangle\) and \(1\mapsto|1\rangle\). After these two steps, each training set data point is mapped to the quantum state \(|x_{1}^{p}\cdots x_{n}^{p}\rangle\equiv|\mathbf{x}^{p}\rangle\), \(x_{k}^{p}\in\{0,1\}\), \(p=1,\cdots,N\), where \(N\) is the number of points in the training set. In parallel, in a separate quantum register, we encode the class \(y^{p}\in\{0,\cdots,d-1\}\), and construct, for each observation \(p\), the state
\[|x_{1}^{p}\cdots x_{n}^{p},y^{p}\rangle\equiv|\mathbf{x}^{p},y^{p}\rangle\enspace. \tag{4}\]
If we are dealing with binary classification (which is the case in this work), the respective class is also straightforwardly encoded in a single qubit, as \(0\mapsto|0\rangle\) and \(1\mapsto|1\rangle\). If we have a multiclass problem, qudits are necessary, or one could use more than one qubit to encode integers corresponding to the class (for instance, \(|5\rangle=|101\rangle\)). In this case, \(\lceil\log_{2}d\rceil\) qubits are necessary to encode \(d\) classes.
Once we have the state corresponding to each one of the training states \(|\mathbf{x}^{p},y^{p}\rangle\), we construct a training set superposition of all datapoints, given by
\[|T\rangle=\frac{1}{\sqrt{N}}\sum_{p=1}^{N}|\mathbf{x}^{p},y^{p}\rangle\enspace. \tag{5}\]
Naturally, with \(n\) qubits, one can construct a superposition of \(2^{n}\) states, representing all possible binarized feature vectors of \(n\) features. However, it is possible (and most likely) that in a given training dataset not all possible binary feature vectors will be present. Indeed, in the binarization process, it is likely that multiple observations that are different in the original input space are mapped to the same binary vector so that the transformed training dataset actually has a number of observations quite smaller than the original number of observations (although here we represent both as \(N\)). This leads to important details in the implementation of the algorithm in a practical problem, as it will be detailed in Sec. IV. Further, notice that in the case in which \(N<2^{n}\), the superposition in Eq. (5) will have to be prepared with an arbitrary state preparation routine, which is known to be costly [59]. However, in quantum computing software development kits (SDK) (such as Qiskit [60], which is the one we employ in this work), such a procedure is already implemented and ready to use as a self-contained routine.
The next step is to perform the same classical binarization process with the unclassified input vector \(\mathbf{x}_{\text{in}}\) (the one we wish to classify) and map it to the state \(|x_{\text{in},1}\cdots x_{\text{in},n}\rangle\equiv|\mathbf{x}_{\text{in}}\rangle\), \(x_{\text{in},k}\in\{0,1\}\). Keep this as the first register of the quantum state. Finally, add an ancilla register \(|0\rangle\) as the last register. Such a construction yields an initial state given by
\[|\psi_{0}\rangle=\frac{1}{\sqrt{N}}\sum_{p=1}^{N}|\mathbf{x}_{\text{in}};\mathbf{x}^{ p},y^{p};0\rangle\enspace, \tag{6}\]
which is made up of three registers (or, in fact, blocks of registers): the first containing the input state \(|\mathbf{x}_{\text{in}}\rangle\), which consists of \(n\) qubits; the second containing the superposition \(|T\rangle\) (which is the tensor product of the feature vectors \(|\mathbf{x}^{p}\rangle\) and the class vectors \(|y^{p}\rangle\)), thus consisting of \(n+1\) qubits, and given that we have a binary classification problem, the third contains an ancilla qubit initialized as \(|0\rangle\). Therefore, the number of qubits necessary for the algorithm is precisely \(2n+2\).
Once the initial state is prepared, we put the ancilla into a superposition, by applying a Hadamard gate to the last register, i.e., \(H=1\otimes 1\otimes 1\otimes H\), such that
\[|\psi_{1}\rangle=H\,|\psi_{0}\rangle=\frac{1}{\sqrt{N}}\sum_{p=1}^{N}|\mathbf{x}_ {\text{in}};\mathbf{x}^{p},y^{p}\rangle\otimes\frac{1}{\sqrt{2}}(|0\rangle+|1 \rangle)\enspace. \tag{7}\]
In the next step, we want the Hamming distance components \(d_{k}^{p}\) between each qubit of the first (input) and second (training) register to replace the qubits in the second register, such that
\[d_{k}^{p}=\begin{cases}0,&\text{if }\,|x_{k}^{p}\rangle=|x_{\text{in},k}\rangle\\ 1,&\text{else}.\end{cases}\enspace. \tag{8}\]
This is achieved by simply applying a \(\text{cNOT}(x_{\text{in},k},x_{k}^{p})\)-gate, which overwrites the entry \(x_{k}^{p}\) in the second register with \(0\) if \(x_{k}^{p}=x_{\text{in},k}\), otherwise with \(1\):
\[\begin{cases}\text{cNOT}\,|00\rangle=|00\rangle\enspace;&\text{cNOT}\,|01 \rangle=|01\rangle\\ \text{cNOT}\,|11\rangle=|10\rangle\enspace;&\text{cNOT}\,|10\rangle=|11\rangle \end{cases}\enspace. \tag{9}\]
Thus, after this step, the state is then
\[\begin{split}\ket{\psi_{2}}&=\bigotimes_{k=1}^{n}\text{cNOT}(x_{\text{in},k},x_{k}^{p})\;\ket{\psi_{1}}\\ &=\frac{1}{\sqrt{N}}\sum_{p=1}^{N}\ket{\mathbf{x}_{\text{in}};\mathbf{d}^{p},y^{p}}\otimes\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})\;,\end{split} \tag{10}\]
where the Hamming distance components \(\ket{d_{1}^{p}\cdots d_{n}^{p}}\equiv\ket{\mathbf{d}^{p}}\), \(d_{k}^{p}\in\{0,1\}\), \(p=1,\cdots,N\) are now in the second register.
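A minimal sketch of this step in Qiskit could read as below; the register sizes and names are purely illustrative.

```python
from qiskit import QuantumCircuit, QuantumRegister

n = 4                                 # illustrative number of binary features
inp = QuantumRegister(n, "x_in")      # input (test) features
trn = QuantumRegister(n, "x_p")       # training features (in superposition)
cls = QuantumRegister(1, "y")         # class qubit
anc = QuantumRegister(1, "a")         # ancilla
qc = QuantumCircuit(inp, trn, cls, anc)

# Overwrite each training qubit with the Hamming-distance component of Eq. (8):
# with the input qubit as control, the CNOT leaves |x_in,k XOR x_k^p> = |d_k^p>
# in the training register (0 if the qubits agree, 1 otherwise).
for k in range(n):
    qc.cx(inp[k], trn[k])
```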
In the third step, we apply the unitary operator
\[U=e^{-i\frac{\pi}{2n}\mathcal{O}}\;;\mathcal{O}=1\otimes\sum_{k=1}^{n}\left( \frac{1-\sigma_{z}}{2}\right)_{d_{k}}\otimes 1\otimes\sigma_{z}\;\;. \tag{11}\]
This sums the Hamming distance components \(\{d_{k}^{p}\}\) (thus yielding the actual Hamming distance) between \(\ket{\mathbf{x}^{p}}\) and \(\ket{\mathbf{x}_{\text{in}}}\), \(d_{H}(\mathbf{x}_{\text{in}},\mathbf{x}^{p})\equiv d_{H}\), into the phase of the \(p^{\text{th}}\) state of the superposition. Notice that a relative phase is added, conditioned on the ancilla state. After this step, the state becomes
\[\ket{\psi_{3}}=U\ket{\psi_{2}}=\frac{1}{\sqrt{2N}}\sum_{p=1}^{N}\left(e^{-i \frac{\pi}{2n}d_{H}}\ket{\mathbf{x}_{\text{in}};\mathbf{d}^{p},y^{p};0}+e^{i\frac{\pi} {2n}d_{H}}\ket{\mathbf{x}_{\text{in}};\mathbf{d}^{p},y^{p};1}\right)\;. \tag{12}\]
Now we apply another Hadamard to the ancilla. This will generate alternating-sign exponentials associated with each ancilla state, which are easily aggregated into a sine and cosine. The resulting state can be expressed as
\[\ket{\psi_{4}}=H\ket{\psi_{3}}=\frac{1}{\sqrt{N}}\sum_{p=1}^{N}\left(\cos\left( \frac{\pi d_{H}}{2n}\right)\ket{\mathbf{x}_{\text{in}};\mathbf{d}^{p},y^{p};0}+\sin \left(\frac{\pi d_{H}}{2n}\right)\ket{\mathbf{x}_{\text{in}};\mathbf{d}^{p},y^{p};1} \right)\;. \tag{13}\]
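One possible way of realizing this sequence of operations in Qiskit is sketched below: since \(\mathcal{O}\) is a sum of commuting two-qubit terms, \(U\) factorizes into controlled-\(R_{z}(\pi/n)\) rotations from each distance qubit onto the ancilla. This decomposition is our own illustration and not necessarily the one adopted in the reference implementation.

```python
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister

n = 4                                   # illustrative number of binary features
dreg = QuantumRegister(n, "d")          # Hamming-distance qubits from the previous step
anc = QuantumRegister(1, "a")           # ancilla
qc = QuantumCircuit(dreg, anc)

qc.h(anc[0])                            # Hadamard of Eq. (7)
# U = exp(-i (pi/2n) O): each term |1><1|_{d_k} (x) sigma_z on the ancilla becomes
# a controlled-RZ(pi/n), applied only when d_k = 1, reproducing the phases of Eq. (12).
for k in range(n):
    qc.crz(np.pi / n, dreg[k], anc[0])
qc.h(anc[0])                            # second Hadamard, yielding Eq. (13)
```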
Notice that \(0\leq d_{H}\leq n\Rightarrow 0\leq\frac{\pi d_{H}}{2n}\leq\frac{\pi}{2}\). Therefore,
* For large \(d_{H}\), \(\cos\left(\frac{\pi d_{H}}{2n}\right)\to 0\) and \(\sin\left(\frac{\pi d_{H}}{2n}\right)\to 1\), so that we have higher probability of measuring \(\ket{1}\) in the ancilla qubit;
* For small \(d_{H}\), \(\cos\left(\frac{\pi d_{H}}{2n}\right)\to 1\) and \(\sin\left(\frac{\pi d_{H}}{2n}\right)\to 0\), so that we have higher probability of measuring \(\ket{0}\).
That is, if the input is far away from most training observations, we have a higher probability of measuring the ancilla in the state \(|1\rangle\); and if the input is close to many observations, the ancilla is more likely to be measured in \(|0\rangle\). Thus, intuitively, since our criterion for classification is to consider the closest observations, measuring the ancilla in \(|0\rangle\) leaves large amplitudes on the close observations, whilst the opposite is true for distant ones. The importance of this fact becomes clear if we rewrite \(|\psi_{4}\rangle\) to show that the different classes appear weighted by their members' distances to the input, such that
\[\ket{\psi_{4}}=\frac{1}{\sqrt{N}}\sum_{y=0}^{d-1}\ket{y}\otimes\sum_{l\in y} \left(\cos\left(\frac{\pi d_{H}}{2n}\right)\ket{\mathbf{x}_{\text{in}};\mathbf{d}^{l}; 0}+\sin\left(\frac{\pi d_{H}}{2n}\right)\ket{\mathbf{x}_{\text{in}};\mathbf{d}^{l};1} \right)\;, \tag{14}\]
where \(l\) runs over all training vectors classified with the label \(y\). Written like this, it becomes clear that, if the ancilla is measured to be in \(\ket{0}\), the amplitudes of close observations will be large, which implies that the probability of measuring the respective class qubit of these observations will also be large. And, as we discuss next, this is how the final classification is performed.
As the final step, the ancilla of the state \(\ket{\psi_{4}}\) is measured on the computational basis. According to Eq. (13), it is easy to see that the probability of measuring
\(|0\rangle\) is
\[P(|0\rangle_{a})=|\langle 0|\psi_{4}\rangle|^{2}=\frac{1}{N}\sum_{p=1}^{N}\cos^{ 2}\left(\frac{\pi d_{H}}{2n}\right). \tag{15}\]
The probability of measuring a certain class \(y\in\{0,\cdots,d-1\}\) jointly with the ancilla in \(|0\rangle\) (i.e., the squared amplitude of the corresponding branch of \(|\psi_{4}\rangle\), obtained from the projection \(|\tilde{\psi}_{4}\rangle=\langle 0|\psi_{4}\rangle\,|0\rangle\)) is
\[P(y,|0\rangle_{a})=|\langle y|\tilde{\psi}_{4}\rangle|^{2}=\frac{1}{N}\sum_{l\in y}\cos^{2}\left(\frac{\pi d_{H}}{2n}\right)\;, \tag{16}\]
which is easily verifiable using Eq. (14). The conditional probability of measuring the class \(y\), given that the ancilla was measured in \(|0\rangle\), then follows as
\[P(y\mid|0\rangle_{a})=\frac{P(y,|0\rangle_{a})}{P(|0\rangle_{a})}=\frac{1}{P(|0\rangle_{a})}\frac{1}{N}\sum_{l\in y}\cos^{2}\left(\frac{\pi d_{H}}{2n}\right). \tag{17}\]
Thus, the class measured with the highest probability is that whose members are the closest to the input vector, provided that \(P(y\mid|0\rangle_{a})\) is only computed after the ancilla is measured in \(|0\rangle\), which is precisely why the amplitudes associated with the closest neighbors are the ones considered. Notice that if the measurement returns \(|1\rangle\), this run of the algorithm is not taken into account.
In Fig. 6 the full quantum circuit is illustrated for a particular dataset, as detailed in Appendix VI.4. The algorithm uses \(\mathcal{O}(Nn)\) gates [41, 61], a cost which is entirely due to the construction of the training data superposition (described by Eq. 5) and which therefore depends on the number of training samples. This is close to the complexity of the classical KNN algorithm, in which we have to compute the distance between the test observation and all \(N\) training points. However, if one can find a procedure to prepare the training data superposition in a manner independent of the number of samples (perhaps by reading quantum data [62, 63, 64, 65]), the QNN algorithm would run in \(\mathcal{O}(n)\), offering a potentially large advantage over the classical KNN, for which an algorithm independent of the number of training samples seems unlikely to exist -- a quite remarkable advantage achieved by exploiting superposition in a quantum algorithm.
We highlight that, in contrast to the classical KNN, the QNN algorithm does not depend on the hyperparameter \(k\). In fact, a superposition of all members of each class is taken into consideration for the classification. This is equivalent to considering all neighbors (that is, \(k=N\)), which in the classical algorithm is associated with a high bias, since, if the dataset is imbalanced with respect to the target, the majority class will always be predicted. In the quantum algorithm, however, this is not the case: even if the dataset is imbalanced, the input observation will be assigned to the class which is the closest to it, since, as it is clear in Eq. (17), the distance of the input to the members of the class explicitly affects the probability.
As a final remark, notice that the probability distribution in Eq. (17) is precisely what is recovered from multiple runs of the algorithm on actual hardware (or a simulator thereof). The final prediction, as a class, is therefore recovered by choosing the class with the largest observed probability. However, as explained in Sec. IV and illustrated in Fig. 3, in this work we directly use the class probability, i.e., Eq. (17) itself. Fortunately, in contrast to many classical learning models, outputting class probabilities is the most natural choice for the QNN algorithm.
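For illustration, the post-selection and the estimate of Eq. (17) from measurement counts can be sketched as follows; the counts dictionary and the bit ordering (ancilla bit first, class bit second) are hypothetical.

```python
# Hypothetical counts over the (ancilla, class) bits from repeated runs/shots
counts = {"00": 430, "01": 240, "10": 180, "11": 150}

# Keep only the runs in which the ancilla was measured in |0>
postselected = {b: c for b, c in counts.items() if b[0] == "0"}
total = sum(postselected.values())

# Conditional class probabilities P(y | ancilla = 0), Eq. (17)
class_probs = {}
for bitstring, c in postselected.items():
    y = bitstring[1]
    class_probs[y] = class_probs.get(y, 0.0) + c / total

print(class_probs)   # e.g. {'0': 0.64, '1': 0.36}: predict class 0, or use the probabilities directly
```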
We remark that the Python/Qiskit implementation of the algorithm described above, as well as all the data used in this paper and the corresponding results, are available in an online repository [66].
## IV Data Pre-Processing
As described in Sec. III, the classical data loaded into the quantum registers, via the basis encoding strategy, must be in a binary representation. On the other hand, as discussed in Sec. II.1, the dataset under study consists of 198 continuous features: the pairwise correlations among all spins in a lattice with 12 sites, taking into account the boundary conditions and the symmetries they imply. Thus, in order to represent each observation as a binary vector, we must first discretize the continuous features, so that the discrete levels may then be binarized.
Before proceeding with these procedures, however, an important remark is due. As discussed above, the QNN algorithm uses \(2n+2\) qubits, where \(n\) is the number of binarized features. Indeed, this implies that if one wants to simulate the circuit, or even execute it on NISQ hardware, \(n\) must be chosen accordingly, to make the execution/simulation feasible.
As will be detailed below, we employed an efficient discretization and binarization procedure that maps each original continuous feature to only two bits. However, given that we start with 198 features, this would imply \(n=396\), thus requiring a circuit consisting of 794 qubits, which is far beyond the limit for a classical simulation (recall that the algorithm includes entanglement) as well as current quantum hardware capabilities, both in terms of the number of qubits and of circuit depth. Moreover, this is one of the simplest discretizations one can think of: if one produces more bins per feature,
which would be desirable, the resulting number of binary features (and qubits, consequently) would further increase, making the simulation/execution yet more intractable.
Therefore, in order to fit the dataset into current capabilities, we apply a series of pre-processing steps to the original raw features, starting with a dimensionality reduction procedure implemented via a feature selection routine, in order to pick, from the 198 original features, the ones that contribute the most to the classification, with the hope that they are enough to produce a good classifier. The procedure we use for picking the most important features is based on the Random Forest algorithm [67] -- in particular, a modification thereof known as Extremely Randomized Trees [68]. It consists of the calculation of a "feature importance coefficient", also known as "mean decrease impurity" or "Gini importance" [69]. This coefficient is calculated as the total decrease in node impurity, averaged over all trees in the forest. The mean is weighted by the probability that the respective node is reached, which is often estimated by the proportion of observations reaching the node. A detailed account of this algorithm can be found in Appendix VI.3.
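A minimal sketch of this feature-selection step, using scikit-learn's Extremely Randomized Trees with synthetic data standing in for the actual correlation features, is given below.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

# Synthetic stand-in for the training data: 1000 ground states at kappa = 0,
# 198 two-point correlation features, binary phase label.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 198))
y = rng.integers(0, 2, size=1000)

forest = ExtraTreesClassifier(n_estimators=200, random_state=0)
forest.fit(X, y)

# Gini importances averaged over all trees; keep the 4 highest-ranked features.
importances = forest.feature_importances_
top4 = np.argsort(importances)[::-1][:4]
print(top4, importances[top4])
```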
Having in mind the discussion about the current capabilities of simulation and hardware, we have selected only the 4 most important features, which correspond to two-body correlation terms. Physically, we expect that the most important features are the correlation functions \(\langle\sigma_{i}^{z}\sigma_{j}^{z}\rangle\) at the largest available distances. The reason is that this correlation detects long-range order associated with the spontaneous breaking of the \(\mathbb{Z}_{2}\) symmetry \(\sigma_{j}^{z}\mapsto-\sigma_{j}^{z}\) of the Hamiltonian. In the paramagnetic phase, \(\langle\sigma_{i}^{z}\sigma_{j}^{z}\rangle\) decays exponentially to zero for \(\left|i-j\right|\) larger than the correlation length, while it approaches a nonzero value in the ferromagnetic phase and oscillates with a four-site periodicity in the antiphase. By contrast, the correlation function \(\langle\sigma_{i}^{x}\sigma_{j}^{x}\rangle\) is nonzero for \(g\neq 0\) in all phases because the transverse magnetic field induces a finite expectation value for \(\sigma_{j}^{x}\).
We then proceed to the discretization of the features, using a procedure based on the \(k\)-means clustering algorithm [70]. More specifically, we use a k-bins discretizer, implemented in scikit-learn [71], which divides continuous data into \(k\) intervals or "bins". Essentially, we first cluster similar values of the feature being discretized and use the cluster centroids as the centers of the bins; that is, values in each bin will have the same nearest centroid, as determined by the 1-dimensional \(k\)-means algorithm. See Fig. 1 for an illustration. For each feature, we created 3 bins. At this point, our dataset is characterized by 12 discrete values, 3 for each one of the 4 features selected by the feature importance procedure.
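In scikit-learn this step amounts to the following sketch (with synthetic data, and showing only the parameters we actually rely on).

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 4))      # the 4 selected features (synthetic here)
X_test = rng.normal(size=(1000, 4))

# 3 bins per feature, with bin edges set by 1D k-means on the training data only.
disc = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="kmeans")
X_train_disc = disc.fit_transform(X_train)   # discrete levels in {0, 1, 2}
X_test_disc = disc.transform(X_test)         # test data uses the training-fitted bins
```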
After discretization, the features are binarized using the one-hot encoding procedure, which consists of creating \(l-1\) independent binary features for each column with \(l\) categorical levels, as illustrated in Fig. 2. In our case, since \(l=3\) for each discretized feature, we create \(l-1=2\) new binary features each, which then results in \(n=8\) independent binary features. This is the final dimensionality we work with.
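The binarization can be sketched as below for a single discretized column; dropping the first level yields the \(l-1=2\) binary columns of Fig. 2.

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# One discretized feature with l = 3 levels (illustrative values).
X_disc = np.array([[0], [1], [2], [1], [0]])

# drop="first" keeps l - 1 = 2 binary columns per feature:
# level 0 -> (0, 0), level 1 -> (1, 0), level 2 -> (0, 1).
enc = OneHotEncoder(drop="first")
X_bin = enc.fit_transform(X_disc).toarray()
print(X_bin)
```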
Notice that, with 8 binary features, we will need 18 qubits for the circuit execution, which is a reasonable number for the circuit simulation or execution -- and, most importantly, it is enough to yield good results for the problem we analyze, as will be shown. We could have chosen more than 4 original features or discretized the features into more bins, which could possibly have increased the performance of the quantum classifier. In Sec. VI we further elaborate on this point.
Notice that after the feature selection, discretization and binarization pre-processing described above, some observations which were different in the original space of continuous features may be mapped to the same binary representation. This makes perfect sense, given that the different values of a given feature may fall in the same bin, which is given by a range of values. If this happens with all 4 features of two different observations, they will naturally be represented by the same 8-dimensional feature vector. This is why using a \(k\)-means binning strategy is a good idea (instead of the competing strategy "uniform", for example, in
Figure 1: **k-bins discretizer using uniform and k-means (\(k=3\)) binning strategies.****a)** k-bins discretizer with bins uniformly defined. The vertical red lines represent the bins’ limits. Notice that bin widths are uniform in the feature space, but the clustering structure is not respected. **b)** The green points represent the cluster centroids, and the vertical green lines, the bins’ limits. Notice how non-uniform bins are created, but the clustering structure is respected.
which all bins have identical widths, as depicted in Fig. 1a): given that bins are clusters, this strategy groups similar observations together in the same bin, so that it makes sense if they are represented by the same binary feature vector.
After the pre-processing routine, our original training dataset, which had 1000 rows and 198 continuous-valued columns, was reduced to a dataset with 8 binary features and only 10 rows. We can view this as a way of reducing the dataset to only the most representative samples and the most explanatory features, which, as we will show below, was enough to yield good results with the quantum classification algorithm.
It is important to remark that the aforementioned feature importance and discretization processes were such that their respective parameters were determined only using the training data. That is, the exact same procedure was merely reproduced for all test datasets, but with parameters already defined in the training data. Now, although this is the correct thing to be done to avoid data leakage, there is a potential problem, especially with the discretization process: given that the feature ranges vary considerably between training and testing data, it is possible that the resulting bins for testing data features will be concentrated, that is, all observations will fall in the same bin of a given feature. Effectively, this implies that such test observations will be characterized by fewer than 8 features, which is a problem because the QNN algorithm assumes that the test (input) observation has the same number of features as all training observations. In order to fix this, we pad such input observations with zeros, to guarantee that all binarized testing observations will be 8-dimensional. In practice, different observations will be identified only by the actual features present, and the padding will have no effect in terms of the classification, given that it will be the same for all observations in a given test dataset, as we observed. Indeed, as the results show, such a procedure does not jeopardize the classifier's performance.
As already remarked, QNN is a lazy algorithm, so each test (input) observation is classified one at a time. This means that, in principle, we would have to simulate/execute a number of circuits equal to the number of test observations in order to obtain their predictions. Given that we have 10 testing sets, one for each \(\kappa\) value (we consider \(\kappa=0\) the training point), each one with 1000 observations, the number of circuit simulations/executions would be quite large. However, the aforementioned fact that different training observations in the original feature space may be mapped to the same binary representation is of course also true for the testing data observations (although the exact number of unique binarized testing observations varies among the different test datasets). Given that, we implement a cache: whenever we see a new testing observation (in terms of its 8 features), we pass it through the quantum circuit, simulating/executing it, and store its prediction. If this observation is repeated (which, again, can happen given the nature of the pre-processing routine), we do not run the circuit again, but instead merely replicate the prediction from the cache. This allows us to have a prediction for each one of the observations in the testing datasets, without having to simulate/execute the quantum algorithm that many times. Indeed, this is very important for resource optimization, in terms of simulation time or hardware calls for execution.
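The caching logic itself is straightforward; a minimal sketch is shown below, where `run_qnn_circuit` stands for an assumed routine that builds, executes and post-processes the circuit for a given binarized input.

```python
prediction_cache = {}

def predict_with_cache(binary_features, run_qnn_circuit):
    """Simulate/execute the QNN circuit only for unseen binarized inputs."""
    key = tuple(binary_features)
    if key not in prediction_cache:
        # run_qnn_circuit is assumed to return the class probabilities of Eq. (17)
        prediction_cache[key] = run_qnn_circuit(binary_features)
    return prediction_cache[key]
```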
## V Machine learning the phase diagram of the ANNNI model with QNN
Our aim is to understand whether transfer learning is possible using QNN. More specifically, all our training data corresponds to \(\kappa=0\) and we use it to predict phases in regions where \(\kappa\geq 0\). This is particularly relevant since for \(\kappa=0\) the model is analytically solvable, pointing out that a transition occurs at \(g\approx 1\). We highlight that for \(\kappa=0\) we have only two phases: either the ferromagnetic (phase '0') or the paramagnetic (phase '1'). However, when \(\kappa>0\) the ANNNI Hamiltonian also leads to a third phase, the antiphase (phase '2'), not contained in the training data. In particular, for \(\kappa\geq 0.5\), we are in a region where only phases '0' and '2' are present. So, the best the classifier algorithm can do is to output '0' if the phase is indeed '0' and '1' otherwise.
Actually, as discussed above, for an observation point, both the classical and quantum classifier algorithms will return a normalized probability vector \((p_{0},p_{1})\) where the assigned phase will correspond to the component with the largest value. As typical
Figure 2: **One-hot encoding procedure.** From a column with \(l=3\) categorical (discrete) levels, we construct \(l-1=2\) independent binary columns, which are the binary features. Notice that only \(l-1\) binary columns are necessary because it is possible to represent one of the levels as “00” (in this example, “L3”).
with such algorithms, to determine when we are facing a transition, we plot both the probability components and check when they cross, as shown in Fig. 3. As can be seen in Fig. 4, using this approach, the QNN algorithm recovers the left part (\(\kappa<0.5\)) of the phase diagram, corresponding to the ferromagnetic/paramagnetic transition, very successfully. The precise prediction also holds as we approach the critical point at \(\kappa=0.5\) and \(g=0\) at which a new antiphase appears. However, as we start to increase \(g\) the approximated solutions in (2) and (3) and the QNN predictions start to differ more significantly, even though they remain qualitatively similar.
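The transition-detection criterion can be sketched as follows, with synthetic probabilities standing in for the classifier output at a fixed \(\kappa\).

```python
import numpy as np

# Hypothetical sweep over g at fixed kappa: p0 = P(ferromagnetic), p1 = P(paramagnetic)
g_values = np.linspace(0.0, 2.0, 41)
p0 = 1.0 / (1.0 + np.exp(5.0 * (g_values - 1.0)))   # synthetic stand-in for the classifier
p1 = 1.0 - p0

# Estimated transition: the g at which the classifier is most uncertain (p0 ~ p1 ~ 1/2)
g_c = g_values[np.argmin(np.abs(p0 - p1))]
print(g_c)
```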
To benchmark our results we have compared the QNN solution with that obtained by the classical KNN algorithm. As can be seen in Fig. 4, the solution is significantly worse when the classical algorithm is fed with the same pre-processed data as the one given to the quantum algorithm. However, if the classical algorithm uses the complete data (not pre-processed) it reaches a similar success in the prediction, even though it remains slightly worse, as quantified in Table 1. Importantly, the quantum classifier performs significantly better at the critical point (\(g=0,\kappa=1/2\)).
## VI Discussion
The detection of phases and phase transitions of the ANNNI model with both classical as well as quantum heuristic approaches has already been addressed. In Ref. [36], Canabarro et al. showed the possibility of applying both unsupervised and supervised classical techniques to achieve good results. In fact, the problem was satisfactorily solved with unsupervised learning, reducing the use of supervised learning to a validation step. They also employed transfer learning, training diverse supervised learning algorithms solely on nearest-neighbor interactions, which exhibited the capacity to discern a novel phase emerging upon the introduction of next-nearest-neighbor interactions. They showed that some of the learners could unveil the Ising as well as the antiphase and the paramagnetic phases. This amalgamation effectively groups the BKT and CIC transitions together,
\begin{table}
\begin{tabular}{l c} \hline Technique & average \(\ell_{2}\)-norm \\ \hline QNN (pre-processed) & **0.0036(6)** \\ KNN (pre-processed) & 0.0164(1) \\ KNN (raw data) & 0.0043(1) \\ \end{tabular}
\end{table}
Table 1: Performance (average \(\ell_{2}\)-norm with relation to the analytical approximations given by Eqs. 2 and 3) computed for the three main phases and comparing QNN with KNN (using both the pre-processed and complete training data). For the classical KNN we used \(k=7\) and the Euclidean distance. The best result is in boldface.
Figure 3: Detecting the critical transverse magnetic field coupling parameter \(g\) at which a phase transition occurs. The machine was trained at \(\kappa=0\) and asked to predict where the transition happens at \(\kappa=0.1\), by considering where the machine is most uncertain, that is, where the probabilities are closest to \(p_{0}=p_{1}=1/2\). Here the ferromagnetic (paramagnetic) phase is labeled as 0 (1).
Figure 4: Phase diagrams produced with diverse (Q)ML algorithms when trained only with \(\kappa=0\): KNN trained with raw data (blue circles); KNN trained with the same pre-processed data as the QNN - fair comparison (black squares); QNN (red triangles), and two different analytical solutions: Ising (solid blue line) and BKT (dashed orange line). All different methods recover the ferro/paramagnetic and paramagnetic/BKT transitions qualitatively well, although, as quantitatively expressed in Table 1, the QNN solution yields the smallest error with respect to the analytical approximations, thus being an overall better solution (see main text for more details).
showcasing the robustness of the method. On the other hand, in Ref. [40], Monaco et al. used quantum convolutional neural networks by training only on marginal points of the phase diagram that are described by integrable models. Our approach in this paper is both innovative and complementary: we use a new and simpler quantum machine learning algorithm combined with transfer learning, we design a pre-processing of the data that fits within the capabilities of a real quantum computer, and we train only on \(\kappa=0\), testing on the remaining phase diagram to show the viability of the technique.
We show that with the right pre-processing of the data, the quantum nearest neighbor (QNN) algorithm proposed in [41] allows for the classification of phases in a non-trivial Hamiltonian model. Using two-point correlation functions obtained by exact diagonalization on a small \(N=12\) spin lattice, we could reproduce the main phases of the axial next-nearest-neighbor Ising (ANNNI) model. More precisely, using as training data only the ferromagnetic and paramagnetic phases, we could detect the transition to the antiphase as the next-nearest-neighbor interaction strength is increased. This is a relevant instance of transfer learning, since, using analytical data extracted from the exactly solvable transverse field Ising model, we could explore a non-integrable region of the Hamiltonian model. This makes the approach computationally cheaper, as access to training labels is one of the major bottlenecks for any supervised method.
To benchmark the quality of our quantum machine model, we compared it with approximated expressions obtained by various methods. The solution provided by QNN works very well in the ferromagnetic and paramagnetic regions, offering a less precise but still qualitatively reasonable solution as we enter the antiphase. Arguably, however, to assess the quality of a quantum learning method, it is reasonable to compare its performance with that of classical learning algorithms. We performed this comparison, and the results are quite favorable to the quantum approach. Even when we feed the original data (without any pre-processing, a necessary step to reduce the number of qubits in the quantum realization) to classical classifiers, the quantum classifier remains superior, as can be seen in Fig. 4 and Table 1. And performing the fairest comparison, obtained when the quantum and classical algorithms see the same pre-processed data, the accuracy of the quantum classifier is significantly higher. Importantly, these performance comparisons were done on the testing data, that is, we were really evaluating the generalization capability of the different models, which is, after all, what matters the most when one builds a data-driven model.
This proof-of-principle (since it was obtained in a simulated/perfect quantum circuit) quantum advantage does not come in terms of algorithmic complexity, but rather in generalization and accuracy, which is of major interest in the context of machine learning. Still, one may interpret the advantage from a different point of view, namely that of sample complexity [72, 73, 74]: the quantum algorithm could find the underlying pattern reflected on the dataset with much less information than its classical counterpart, and with better generalization performance. As mentioned before, we can see that as a way of reducing the dataset only to the most representative samples and better explanatory features. Although we focus on a particular Hamiltonian, we believe it leads to relevant general questions: in a statistical learning theoretical sense, how and why such a sample complexity reduction and consequent quantum advantage is achieved? Similar questions have been addressed in recent research [75, 76, 77], and further research along this direction, in the particular context of QNN might lead to new insights. Another clear path is to understand how well the QNN classifier works in real NISQ devices, also considering different Hamiltonian models and increasing the number of qubits and features. In this regard, it would be interesting to consider other data formats, such as the classical shadows [78], efficiently encoding classical data about quantum systems for machine learning purposes [79].
In conclusion, to the best of our knowledge, this paper is the first work in which the QNN algorithm [41] was applied to a concrete classification problem in the context of condensed matter physics. By applying this method, we could achieve a quantum model whose generalization performance was superior to its classical counterparts, whilst using much less information, which represents a quantum advantage in both contexts of generalization and sample complexity. This is the main result of this paper, which opens the way to several discussions concerning the statistical learning theoretical properties of the QNN model.
## Acknowledgements
This work was supported by the Serrapilheira Institute (Grant No. Serra-1708-15763), the Simons Foundation (Grant Number 1023171, RC), the Brazilian National Council for Scientific and Technological Development (CNPq) via the National Institute for Science and Technology on Quantum Information (INCT-IQ) and Grants No. 307172/2017-1, the Brazilian agencies MCTIC, CAPES and MEC. AC acknowledges a paid license by the Federal University of Alagoas for a sabbatical at the University of Sao Paulo, and partial
financial support by CNPq (Grant No. 311375/2020 - 0), Alagoas State Research Agency (FAPEAL) (Grant No. APQ2022021000153) and Sao Paulo Research Foundation (FAPESP) (Grant No. 2023/03562 - 1).
|
2309.09710 | About optimization of methods for mixed derivatives of bivariate
functions | The problem of optimal recovering high-order mixed derivatives of bivariate
functions with finite smoothness is studied. On the basis of the truncation
method, an algorithm for numerical differentiation is constructed, which is
order-optimal both in the sense of accuracy and in terms of the amount of
involved Galerkin information. | Y. V. Semenova, S. G. Solodky | 2023-09-18T12:27:32Z | http://arxiv.org/abs/2309.09710v1 | # About optimization of methods for mixed derivatives of bivariate functions
###### Abstract.
The problem of optimal recovering high-order mixed derivatives of bivariate functions with finite smoothness is studied. On the basis of the truncation method, an algorithm for numerical differentiation is constructed, which is order-optimal both in the sense of accuracy and in terms of the amount of involved Galerkin information.
\({}^{\dagger}\)_Key words_. Numerical differentiation, Legendre polynomials, truncation method, minimal radius of Galerkin information.
## 1. Description of the problem
Currently, many research activities on the problem of stable numerical differentiation are taking place due to the importance of this tool in such areas of science and technology as finance, mathematical physics, image processing, analytical chemistry, viscous elastic mechanics, reliability analysis, pattern recognition, and many others. Among these investigations we highlight [4], which is the first publication on numerical differentiation in terms of the theory of ill-posed problems. The research of [4] has been continued in numerous publications on numerical differentiation (see, for example, [21], [30], [7], [8], [1], [9], [12], [31], [32], [10], [24]), covering different classes of functions and different types of methods. Despite the abundance of works on this topic, the problem of recovering high-order derivatives has been considered only in a few publications, among which we note [5], [22], [11], [19], [12] and [24]. In particular, the results of [24] have opened a perspective for further investigation of numerical methods for the recovery of high-order derivatives. Namely, the main criteria of a method's efficiency are taken to be its ability to achieve the optimal order of accuracy while using a minimal amount of discrete information. Note that these particular aspects of numerical differentiation remain insufficiently studied. The present paper continues the research of [24], [23] and proposes a numerical method for recovering high-order mixed derivatives of smooth bivariate functions. The method is not only stable to small perturbations of the input data, but is also optimal in terms of accuracy and in the number of Fourier-Legendre coefficients involved, and it has a simple numerical implementation.
Let \(\{\varphi_{k}(t)\}_{k=0}^{\infty}\) be the system of Legendre polynomials orthonormal on \([-1,1]\) as
\[\varphi_{k}(t)=\sqrt{k+1/2}(2^{k}k!)^{-1}\frac{d^{k}}{dt^{k}}[(t^{2}-1)^{k}], \quad k=0,1,2,\ldots.\]
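For a quick numerical check (not part of the method itself), the orthonormality of this system can be verified with Gauss-Legendre quadrature, for instance as follows.

```python
import numpy as np
from numpy.polynomial import legendre

nodes, weights = legendre.leggauss(50)   # exact for polynomial integrands of degree <= 99

def phi(k, t):
    """Orthonormal Legendre polynomial phi_k(t) = sqrt(k + 1/2) * P_k(t)."""
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0                      # selects the classical P_k
    return np.sqrt(k + 0.5) * legendre.legval(t, coeffs)

gram = np.array([[np.sum(weights * phi(j, nodes) * phi(k, nodes))
                  for k in range(5)] for j in range(5)])
print(np.round(gram, 10))                # approximately the 5x5 identity matrix
```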
By \(L_{2}=L_{2}(Q)\) we mean the space of functions \(f(t,\tau)\) square-integrable on \(Q=[-1,1]^{2}\), equipped with the inner product
\[\langle f,g\rangle=\int_{-1}^{1}\int_{-1}^{1}f(t,\tau)g(t,\tau)d\tau dt\]
and standard norm
\[\|f\|_{L_{2}}^{2}=\sum_{k,j=0}^{\infty}|\langle f,\varphi_{k,j}\rangle|^{2}<\infty,\] where \(\varphi_{k,j}(t,\tau)=\varphi_{k}(t)\varphi_{j}(\tau)\). |
2309.06423 | Accelerating Defect Predictions in Semiconductors Using Graph Neural
Networks | Here, we develop a framework for the prediction and screening of native
defects and functional impurities in a chemical space of Group IV, III-V, and
II-VI zinc blende (ZB) semiconductors, powered by crystal Graph-based Neural
Networks (GNNs) trained on high-throughput density functional theory (DFT)
data. Using an innovative approach of sampling partially optimized defect
configurations from DFT calculations, we generate one of the largest
computational defect datasets to date, containing many types of vacancies,
self-interstitials, anti-site substitutions, impurity interstitials and
substitutions, as well as some defect complexes. We applied three types of
established GNN techniques, namely Crystal Graph Convolutional Neural Network
(CGCNN), Materials Graph Network (MEGNET), and Atomistic Line Graph Neural
Network (ALIGNN), to rigorously train models for predicting defect formation
energy (DFE) in multiple charge states and chemical potential conditions. We
find that ALIGNN yields the best DFE predictions with root mean square errors
around 0.3 eV, which represents a prediction accuracy of 98 % given the range
of values within the dataset, improving significantly on the state-of-the-art.
Models are tested for different defect types as well as for defect charge
transition levels. We further show that GNN-based defective structure
optimization can take us close to DFT-optimized geometries at a fraction of the
cost of full DFT. DFT-GNN models enable prediction and screening across
thousands of hypothetical defects based on both unoptimized and
partially-optimized defective structures, helping identify electronically
active defects in technologically-important semiconductors. | Md Habibur Rahman, Prince Gollapalli, Panayotis Manganaris, Satyesh Kumar Yadav, Ghanshyam Pilania, Brian DeCost, Kamal Choudhary, Arun Mannodi-Kanakkithodi | 2023-09-12T17:40:23Z | http://arxiv.org/abs/2309.06423v2 | # Accelerating Defect Predictions in Semiconductors Using Graph Neural Networks
###### Abstract
First principles computations reliably predict the energetics of point defects in semiconductors, but are constrained by the expense of using large supercells and advanced levels of theory. Machine learning models trained on computational data, especially ones that sufficiently encode defect coordination environments, can be used to accelerate defect predictions. Here, we develop a framework for the prediction and screening of native defects and functional impurities in a chemical space of Group IV, III-V, and II-VI zinc blende (ZB) semiconductors, powered by crystal Graph-based Neural Networks (GNNs) trained on high-throughput density functional theory (DFT) data. Using an innovative approach of sampling partially optimized defect configurations from DFT calculations, we generate one of the largest computational defect datasets to date, containing many types of vacancies, self-interstitials, anti-site substitutions, impurity interstitials and substitutions, as well as some defect complexes. We applied three types of established GNN techniques, namely Crystal Graph Convolutional Neural Network (CGCNN), Materials Graph Network (MEGNET), and Atomistic Line Graph Neural Network (ALIGNN), to rigorously train models for predicting defect formation energy (DFE) in multiple charge states and chemical potential conditions. We find that ALIGNN yields the best DFE predictions with root mean square errors around 0.3 eV, which represents a prediction accuracy of 98 % given the range of values within the dataset, improving significantly on the state-of-the-art. Models are tested for different defect types as well as for defect charge transition levels. We further show that GNN-based defective structure optimization can take us close to DFT-optimized geometries at a fraction of the cost of full DFT. DFT-GNN models enable prediction and screening across thousands of hypothetical defects based on both unoptimized and partially-optimized defective structures, helping identify electronically active defects in technologically-important semiconductors.
## 1 Introduction
Semiconductors are critical for a variety of technologies such as consumer electronics, healthcare and biotechnology, information technology, communication and connectivity, automotive manufacturing, renewable energy, and industrial automation [1]. With the signing of the CHIPS Act [2], there has been a massive influx of funding into semiconductor R&D, resulting in the establishment of several manufacturing facilities and research centers across the United States as well as many global partnerships between companies and universities. Developing next-generation semiconductor materials is crucial for addressing global energy needs and the demands of the electronics industry, and this process begins at the atomistic scale with enhanced structure-property relationships that can scale up to device performance and aid data-driven materials design and improvement [3].
The electronic structure of a semiconductor is heavily dependent on the presence of point defects in its crystal lattice, which range from intrinsic vacancies, self-interstitials, and anti-site substitutions, to impurities at different lattice sites [4]. Defects can introduce energy levels within the band gap, affecting carrier concentration and mobility, and often acting as traps that lead to non-radiative recombination of holes and electrons in optoelectronic devices [5, 6, 7, 8, 9, 10]. Defects also play a crucial role in dopant activation, diffusion, and segregation, which are vital factors in device fabrication processes. Even at low concentrations, unwanted point defects or impurities can have a significant impact on the electronic, optical, and transport properties of semiconductors, making it important to be able to accurately predict their stability and electronic signatures [11].
One of the applications where the effect of defects is felt most is solar cells, where semiconductors such as Si and CdTe are commonly used as absorbers [7, 12]. Undesirable defects and functional dopants in semiconductor absorbers will respectively decrease and increase the optical absorption and thus the power conversion
efficiency of single-junction, tandem, and bifacial solar cells [13]. Similar effects are felt in applications such as transistors, photodiodes, lasers, sensors, and quantum information sciences [14, 15, 16, 17, 18]. Canonical group IV, III-V, and II-VI semiconductors are some of the most important materials used in these applications, either as binary compounds or alloys, typically in the zincblende (ZB) or wurtzite (WZ) phases. In addition to Si and CdTe, compounds such as GaAs, SiC, CdSe, and CdS have been used in photovoltaics (PVs). GaAs, GaN, ZnO, and InP are employed in optoelectronic devices such as light-emitting diodes (LEDs), laser diodes, quantum dots, and quantum wells. GaN, AlN, and related compounds are desirable wide band gap (WBG) semiconductors for power electronics [19]. Point defects negatively or positively impact the performance of each of these materials; in general, semiconductors with intrinsic defect tolerance and possible n-type or p-type dopability are desired for optoelectronic applications. Furthermore, defect levels in semiconductors (e.g., NV-centers in diamond) have also been suggested as qubits for quantum computing [20].
Experimentally, defect levels are measured using techniques such as cathodoluminescence and deep-level transient spectroscopy [21]. However, these methods face difficulties in sample preparation and assigning measured levels to particular defects; e.g., it is not trivial to determine whether a captured peak is from a specific vacancy or self-interstitial, or from an unwanted substitutional or interstitial impurity [12]. First principles-based density functional theory (DFT) computations have thus been widely used to calculate the formation energy (E\({}^{f}\)) of point defects as a function of Fermi level (E\({}_{F}\)), net charge in the system (\(q\)), and chemical potential (\(\mu\)) [22, 23]. Such computations help reliably identify the lowest energy donor and acceptor defects, all possible shallow and deep defect levels, the equilibrium conductivity as pinned by the lowest energy defects (p-type, intrinsic, or n-type), defect concentrations, electron/hole capture rates, and other related properties. When using an appropriate level of theory, DFT-computed charge transition levels have been shown to match remarkably well with measured levels [12]. Despite the successes of DFT, large-supercell charged calculations are rather computationally expensive, which makes it prohibitive to perform defect calculations across a very large chemical space.
Predicting defect properties can be significantly accelerated by combining DFT data with state-of-the-art machine learning (ML) models [12, 24]. Some recent efforts, including our own past work, have shown that regression models trained on a DFT dataset, using vectors encoding the identity and elemental properties of the coordinating atoms involved in creating a defect, can yield accurate prediction and screening over tens of thousands of defects and impurities [25, 26, 27]. In published work from 2022 [12], we trained ML models to predict the formation energies and charge transition levels of point defects in 34 ZB semiconductors with a roughly 90% prediction accuracy, which enabled qualitatively reliable screening of consequential impurities from across a list of \(>\) 12,000. However, these models suffer from the following limitations: (a) for a wide chemical space, composition-based models [28] require a significant amount of training data to accurately capture the complex relationships between the material and target properties, (b) all predictions are made for a supposed ground state defect structure which means no competing defective structures could be sampled and no lower energy defect structures could theoretically be found, (c) the errors are larger than desired, presumably due to lack of information in the model inputs about the defective geometry and how local coordination changes in the presence of different defects, and (d) the predictive models cannot trivially be applied to related but "out-of-sample" defects, such as complexes and alloys within the same material classes. A potential approach to tackle these issues arises in the form of a "crystal graph", which is the most general representation of a crystalline structure, automatically accounting for different supercell sizes, types of ionic species, mixing or added atoms, and metastable configurations.
Graph-based Neural Networks (GNNs) have been rising in popularity over the last few years, and are widely applied today for adequately representing and predicting properties of molecules, polymers, and inorganic crystals [29, 30, 31, 32, 33, 34, 35]. GNNs can work directly with graph-structured data, converting crystal structures into crystal graphs where the nodes are atomic positions and edges are chemical bonds. They are capable of learning internal representations of crystals useful for predicting properties ranging from formation or decomposition energy to band gap to defect formation energy. ML models based only on vectors encoding composition and/or elemental properties are typically not suited to deal with crystalline polymorphs of a given material, often requiring hand-crafted features that are not generalizable. GNNs are known to be much more flexible than composition-based models, as they can be normalized with respect to system size or number of atoms, and have the ability to capture important structure/chemistry information that contribute to properties of interest. By learning the global representation of crystals, GNNs can incorporate physical laws and phenomena on
larger scales and be used to predict properties that are affected by long-range interactions. Predictive models trained using GNNs show much better accuracy than models that lack structural/geometry information.
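To make the graph picture concrete, the sketch below builds a simple neighbor-list graph for a zinc blende cell with pymatgen; this is an illustration of the node/edge idea only, not the exact graph construction used internally by CGCNN, MEGNET, or ALIGNN.

```python
from pymatgen.core import Lattice, Structure

# Illustrative bulk cell (conventional zinc blende GaAs); in practice the
# defect-containing supercells would be loaded from their POSCAR files instead.
structure = Structure.from_spacegroup(
    "F-43m", Lattice.cubic(5.65), ["Ga", "As"], [[0, 0, 0], [0.25, 0.25, 0.25]]
)

# Nodes = atomic sites; edges = pairs of sites within a cutoff radius (here 3.0 Å).
cutoff = 3.0
nodes = [site.specie.symbol for site in structure]
edges = []
for i, neighbors in enumerate(structure.get_all_neighbors(cutoff)):
    for nb in neighbors:
        edges.append((i, nb.index, nb.nn_distance))

print(len(nodes), "nodes,", len(edges), "directed edges")
```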
In this article, we present one of the most comprehensive efforts undertaken to date for predicting defect properties in semiconductors, by combining a massive high-throughput (HT) DFT dataset of defect formation energies (DFEs) with state-of-the-art GNNs. We utilize our previously published dataset [12, 24], bolstered by the inclusion of thousands of partially-optimized and unoptimized structures in addition to optimized structures, along with several new computations, and train predictive models using three types of established GNN schemes: Crystal Graph Convolutional Neural Network (CGCNN) [36], Materials Graph Network (MEGNET) [37], and Atomistic Line Graph Neural Network (ALIGNN) [38]. We train GNN models on datasets ranging from a few thousand to nearly 15,000 data points for point defects in different charge states, across 40 or so unique ZB compounds. We present a description of model optimization and visualization of the prediction results for different defect types and show how (a) ALIGNN predictions significantly improve upon previous DFE estimates, with a root mean square error (RMSE) of \(\sim\) 0.3 eV, (b) predictions can be made for defect complexes and alloyed compositions by including a subset of them in the training data, and (c) GNN predictions for new systems can be used both for screening based on upper-bound energies as well as for stabilizing any defect by changing the defect coordination environment until the GNN-predicted formation energy minimizes. We believe this provides a novel and promising approach towards predicting defect energetics and screening important defects from large chemical spaces. Considerations of the level of theory and future prospects of this work are additionally discussed. The DFT+GNN workflow applied for predicting defect properties is presented in **Fig. 1**.
## 2 Computational dataset
The semiconductor+defect chemical space considered in this work is pictured in **Fig. S1** in terms of the group IV, III-V, and II-VI binary ZB compounds (referred to henceforth as AB, with A being the cation and B the anion) that serve as hosts to defects, elements selected from across the periodic table as possible defects (henceforth referred to as M for any substitutional or interstitial defect, whereas vacancies will use V), and 5 possible symmetry inequivalent defect sites, namely A-site (M\({}_{A}\)), B-site (M\({}_{B}\)), tetrahedral interstitial site with 4 neighboring A atoms (M\({}_{i,A}\)), tetrahedral interstitial site with 4 neighboring B atoms (M\({}_{i,B}\)), and octahedral interstitial site with 3 A and 3 B atoms in the neighborhood (M\({}_{i,oct}\)). Here, we utilize 4 types of datasets: dataset 1 (all possible single native defects in 34 binary ZB semiconductors), dataset 2 (hundreds
Figure 1: DFT+GNN workflow to accelerate the prediction of defect formation energies and charge transition levels in semiconductors.
of substitutional and interstitial impurities across all ZB compounds), dataset 3 (native defects and impurities in a variety of CdSe\({}_{x}\)Te\({}_{1-x}\) alloys), and dataset 4 (some defect complexes simulated in CdTe). Datasets 3 and 4 arise from a parallel study on dopant-activation in CdTe-based solar cells [39] and are used here to evaluate the effectiveness of GNN-based defect predictions for alloys and complexes. All datasets contain DFEs calculated for at least 5 distinct charged states (q = 0, +2, +1, -2, -1) at two extreme chemical potential conditions, namely A-rich and B-rich. **Fig. 2** shows violin plots capturing the distribution of DFEs (only for neutral defects at A-rich chemical potential conditions) for all 4 datasets, with inset box plots showing the median, lower quartile, and upper quartile for each.
The DFT details, including specific VASP input parameters, level of theory information, reciprocal space sampling, references for chemical potentials, and equations used for DFE calculation, are present in our past publication [12]. All data is from the semi-local GGA-PBE functional, which generally reproduces lattice parameters and relative stabilities quite well, but is not reliable for electronic band edges and band gaps, which is likely to affect computed defect levels as well. The non-local hybrid HSE06 functional [40] is preferred for electronic and defect properties, but is much more expensive and often requires tuning of the mixing parameter (which determines the fraction in which exact exchange from Hartree-Fock is combined with the exchange-correlation energy from PBE), which is very system-specific and not trivially applied across large datasets [41]. Beyond-DFT methods such as the GW approximation, which expands the self-energy in terms of the single-particle Green's function G and the screened Coulomb interaction W [42], are more reliable but too prohibitively expensive to be applied high-throughput. In past work [12], we showed that PBE computed defect charge transition levels, when plotted to span the experimentally-known band gap of the semiconductor, match rather well with measured defect levels for a dataset of \(\sim\) 80 defects in binary ZB compounds collected from the literature, showing a PBE vs experiment RMSE of 0.21 eV.
Thus, PBE-level DFEs and transition levels may be sufficient for a first-level screening of low-energy defects. Inaccuracies will still persist from incorrectly locating VBM and CBM, but appropriate corrections can be applied afterwards using different higher-accuracy bulk calculations once PBE-level DFEs are predicted for multiple \(q\) and \(\mu\) conditions. Two such possible correction methods include using the modified band alignment approach based on PBE and HSE06 band gap values [43], and shifting both band edge positions using GW quasiparticle energies [44]. The focus of present work is to demonstrate the accelerated prediction of PBE-level defect energetics, and the aforementioned corrections will be considered in future work. In the next few subsections, a detailed description is provided for the four datasets generated at the PBE level.
### Dataset 1
Dataset 1 contains DFEs for all possible native defects in each AB compound, namely V\({}_{A}\) (vacancy at A-site), V\({}_{B}\), A\({}_{iA}\) (A self-interstitial at A-coordinated tetrahedral site), A\({}_{iB}\), A\({}_{iA,oct}\) (A self-interstitial at octahedral site), B\({}_{iA}\), B\({}_{iB,B}\), B\({}_{i,oct}\), A\({}_{B}\) (A on B anti-site substitution), and B\({}_{A}\). All AB compounds, simulated in the cubic ZB structure with A atoms occupying an FCC lattice and B atoms occupying tetrahedral sites, are listed as follows: 8 II-VI compounds (ZnO, ZnS, ZnSe, ZnTe, CdO, CdS, CdSe, and CdTe), 16 III-V compounds (AlN, AlP, AlAs, AlSb, BN, BP, BAs, BSb, GaN, GaP, GaAs, GaSb, InN, InP, InAs, and InSb), and 10 group IV compounds (SiC, GeC, SnC, SiGe, SiSn, GeSn, C, Si, Ge, and Sn). There are a total of 312 possible native defects across the 34 compounds, and DFEs were computed for all under both A-rich and B-rich conditions. From each of the 312 (\(\times\) 5 \(q\) states) PBE geometry optimization calculations, we collected all possible "intermediate structures", that is, geometries generated during
Figure 2: Defect formation energy distribution across the four datasets, for neutral defects under A-rich chemical potential conditions.
the course of the optimization, all the way from pristine structure (which is simply the ground state semiconductor bulk supercell structure with a defect introduced) to the fully optimized structure; also collected were the total DFT energies corresponding to each structure. The shell script used to collect intermediate structures (_IS_) for every defect from XDATCAR and corresponding energies from OUTCAR (typical output files in VASP [45]) is added to the SI and available on GitHub. The DFE corresponding to every _IS_ is estimated as E\({}^{f}\)(_IS_) = E\({}_{DFT}\)(_IS_) - E\({}_{DFT}\)(optimized structure) + E\({}^{f}\)(optimized structure).
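A minimal Python sketch of this bookkeeping is given below; for simplicity it reads the per-ionic-step energies from OSZICAR (equivalent information to what the shell script extracts from OUTCAR/XDATCAR), and the file path and optimized-DFE value in the usage line are hypothetical.

```python
def ionic_step_energies(oszicar_path):
    """Free energies F (eV) for each ionic step of a VASP relaxation run."""
    energies = []
    with open(oszicar_path) as f:
        for line in f:
            if "F=" in line:                       # one summary line per ionic step
                energies.append(float(line.split("F=")[1].split()[0]))
    return energies

def intermediate_dfes(oszicar_path, dfe_optimized):
    """E^f(IS) = E_DFT(IS) - E_DFT(optimized) + E^f(optimized) for every intermediate structure."""
    e_steps = ionic_step_energies(oszicar_path)
    e_opt = e_steps[-1]                            # last ionic step = fully optimized structure
    return [e - e_opt + dfe_optimized for e in e_steps]

# Hypothetical usage for one neutral defect calculation:
# dfes = intermediate_dfes("V_Cd_q0/OSZICAR", dfe_optimized=2.35)
```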
This approach helps swell the DFT native defect dataset to 3071 data points for q = 0, and between \(\sim\) 1500 and \(\sim\) 2000 points for the other q values, as reported in **Table 1**. The lower number of structures for the charged systems as compared to the neutral defects comes from the fact that most of the geometry optimization takes place during the neutral calculation, whereas the charged calculations typically use the optimized neutral defect geometry as their starting point. All the _IS_ serve as energetically accessible but not ground state defective structures, which can play an important role in understanding the dynamics and behavior of the crystal, but also provide an energetically and structurally diverse dataset for a GNN model to be trained on. This also ensures that the GNN "knows" what an unoptimized, partially optimized, and fully optimized defect structure looks like, meaning it will yield the correct energy corresponding to any hypothetical new structure and potentially help reduce the energy by subtly modifying the defect coordination environment.
### Dataset 2
Dataset 2 contains hundreds of impurities or extrinsic defects (M) at various sites, namely M\({}_{A}\), M\({}_{B}\), M\({}_{I,A}\), M\({}_{I,B}\), and M\({}_{i,oct}\), across each of the 34 unique AB compounds. The five distinct defect sites are considered in 30 binary compounds and three defect sites (A-site, one tetrahedral interstitial site, and one octahedral interstitial site) are considered in the elemental systems (C, Si, Ge, and Sn). This dataset encompasses a wide range of singular impurity atoms, including groups I to VII, all transition metals, and all lanthanides, leading to a total of 77 species, as shown in **Fig. S1**. The total number of possible impurities resulting from this can be calculated as 77 \(\times\) 5 \(\times\) 30 + 77 \(\times\) 3 \(\times\) 4 = 12,474 (many of these would coincide with the native defects described earlier). Out of this dataset of 12,474 defects, 1566 were chosen for complete neutral-state geometry optimization, and \(\sim\) 1000 were subjected to charged calculations as well; points in the DFT dataset exhibit sufficient chemical diversity in terms of semiconductor type, element type, and defect site type, to adequately represent the entire chemical space. Once again, we collected all IS from the DFT optimization runs for each impurity in 5 different q states, leading to nearly 14,000 data points for q=0 and between 3500 and 5300 data points for the other q values, as reported in **Table 1**.
### Dataset 3
This dataset includes several possible native defects in a series of CdSe\({}_{x}\)Te\({}_{1-x}\) alloys (x = 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, and 0.875), namely V\({}_{Cd}\), V\({}_{Te}\), V\({}_{Se}\), Cd\({}_{i}\), Te\({}_{i}\), Se\({}_{i}\), Cd\({}_{Te}\), Cd\({}_{Se}\), Te\({}_{Cd}\), and Se\({}_{Cd}\). This results in a total of 82 unique defects across the 7 mixed compositions, which are of interest in CdTe solar cells where Se is often mixed at the anion site [46, 47, 48, 49, 50]. DFEs are computed for each defect in 5 q states at Cd-rich and Te/Se-rich conditions to obtain the two extreme ranges of energies, and all the _IS_ are collected from the optimization runs. Total datasets of a few hundred points are thus compiled, as shown in **Table 1**. This dataset will help examine whether GNN models trained on defects in pure unmixed compositions (CdTe and CdSe) are applicable to alloyed compositions in the same chemical space, and how many new alloy data points might need to be added to the training set to achieve satisfactory accuracy.
### Dataset 4
Finally, we posit that crystal graphs should be capable of representing any type of defect complex in addition to the single-point defects described above. For exhaustively investigating the defect tolerance and dopability of a semiconductor, it is vital to consider likely defect
\begin{table}
\begin{tabular}{|c|c|} \hline
**Dataset** & **Data Points** \\ \hline Native defects in 34 compounds (Dataset 1) & 2053 (q=+2), 1840 (q=+1), 3071 (q=0), 1966 (q=-1), 1498 (q=-2) \\ \hline Impurities in 34 compounds (Dataset 2) & 5326 (q=+2), 3990 (q=+1), 13966 (q=0), 3568 (q=-1), 4628 (q=-2) \\ \hline Native defects in CdSe\({}_{x}\)Te\({}_{1-x}\) (Dataset 3) & 291 (q=+2), 322 (q=+1), 734 (q=0), 305 (q=-1), 329 (q=-2) \\ \hline Defect complexes in CdTe (Dataset 4) & 47 (q=0) \\ \hline \end{tabular}
\end{table}
Table 1: Number of data points for each charge state q across Datasets 1, 2, 3, and 4.
complexes, which are typically multiple point defects or impurities that form simultaneously in the lattice and interact with each other. Examples include Schottky and Frenkel defects, and compensating vacancies or interstitials that form along with dopants. The V\({}_{Ga}\)-O\({}_{N}\)-2H triple complex was found to have a very low energy in GaN [51], and it has recently been suggested that As-O based complexes may form in CdTe [39]. Thus, we simulated a series of complexes such as V\({}_{Cd}\)+As\({}_{Te}\) and V\({}_{Te}\)+Cu\({}_{Cd}\) in CdTe, resulting in a small dataset of 47 neutral-state defect points, for both Cd-rich and Te/Se-rich conditions, including all the _IS_.
## 3 DFT optimized vs unoptimized formation energy
Before training GNN models, we analyzed the DFT datasets to determine the scale of the differences between DFEs from full DFT-optimization and from pristine, unoptimized defect structures. For any hypothetical defect, an unoptimized pristine structure could be trivially generated simply by inserting the defect of interest in an optimized bulk supercell, but obtaining the ground state DFE is a matter of optimizing this structure, which would involve a PBE calculation that runs for minutes, hours, or days, depending on the nature of the defect. The purpose of GNN-based on-demand predictions is to reduce this time drastically. Since any new GNN predictions would likely be made using pristine defect structures, it is informative to examine, for a given type of defect, how low the energy could go starting from the unoptimized DFE if full optimization were to be performed.
**Fig. 3** shows unoptimized DFE plotted against the fully optimized DFE, for the dataset of 312 native defects across 34 AB compounds (Dataset 1), at both A-rich and B-rich conditions. The unoptimized DFEs are obtained by performing fixed-volume single ionic step calculations on the pristine defect-introduced structures. The dataset is divided into vacancies, anti-site substitutions, and self-interstitials. It can be seen that the amount of geometry relaxation happening in vacancy structures is minimal, with the two types of DFEs almost always being very similar to each other. On the other hand, interstitials and anti-sites often undergo major atomic rearrangement, such that the optimized defect coordination environment may look starkly different from the starting structure, thus leading to DFE differences ranging from 1 eV to nearly 8 eV. The large differences for interstitials could be understood in terms of the unfavorability of introducing an extra cation or anion in a tetrahedral or octahedral void; the larger the ionic radii, the greater this unfavorability. Substitutions depend on the size differences between A and B, and may thus show either a low or high energy variability. These trends roughly inform the threshold that must be applied upon unoptimized DFEs (say, from GNN predictions) to determine their likelihood of stability upon full optimization. It should be noted that the intermediate structures collected from each "optimization run" essentially span the range of the unoptimized to optimized DFE, yielding hundreds of structures for some defects and only a handful for others.
Figure 3: DFT unoptimized vs optimized neutral defect formation energy in Dataset 1, under (a) A-rich, and (b) B-rich chemical potential conditions.
## 4 Graph Neural Network Architecture
In this section, we briefly describe the technical details behind the three GNN schemes chosen in this study, namely CGCNN, MEGNET, and ALIGNN.
### Crystal Graph Convolutional Neural Network (CGCNN)
CGCNN, developed by Xie et al. [36], is a deep learning architecture that takes a crystal graph as input and applies a series of convolution and pooling operations to extract features that capture the underlying properties of the crystal. These features are subsequently fed into a fully connected neural network (FCNN) to make predictions of the properties of interest [52]. The CGCNN framework is pictured in **Fig. S2(a)** and its operation is described below:
1. Structure representation: The crystal structure is represented as a graph where the atoms are nodes and the bonds are edges. The nodes and edges are labeled with features such as atomic coordinates, and bond length.
2. Graph convolutional layers: CGCNN applies multiple graph convolutional layers to the input graph, wherein each layer aggregates information from neighboring nodes and edges and learns features that capture the local environment. Generally, the convolution operation computes a weighted sum of the features of neighboring nodes and edges, followed by a non-linear activation function [53, 54]: \[h_{i}^{(l+1)}=\sigma\left(\sum_{j\in\mathcal{N}(i)}W^{(l)}h_{j}^{(l)}+W^{(l)}h_{i}^{(l)}\right)\] (1) Here, \(h_{i}^{(l+1)}\) is the updated feature vector of node \(i\) in the \((l+1)\)-th layer, \(W^{(l)}\) is the weight matrix of the \(l\)-th layer, \(\sigma\) is a non-linear activation function, \(h_{i}^{(l)}\) is the feature vector of node \(i\) in layer \(l\), and \(\mathcal{N}(i)\) is the set of neighboring nodes of node \(i\) in the graph.
3. Pooling layer: The output of the last convolutional layer is passed through a global pooling layer (e.g., mean or max pooling), which aggregates the features of all nodes in the graph into a single vector [53, 54]. This vector summarizes the entire crystal structure, encoding information derived from the atomic coordinates, bond distances, and well-known elemental properties of each atom such as ionization energy and electronegativity. \[h_{\text{pool}}=\frac{1}{N}\sum_{i=1}^{N}h_{i}^{(L)}\] (2) Here, \(h_{\text{pool}}\) is the output feature vector of the (mean) pooling layer, \(N\) is the total number of nodes in the graph, and \(h_{i}^{(L)}\) is the feature vector of node \(i\) in the last layer \(L\).
4. Fully connected neural network (FCNN): Finally, the output of the pooling layer is fed into an FCNN, which is trained like a regular NN-regression model to make predictions. \[y=f\left(W_{fc}h_{pool}+b_{fc}\right)\] (3) Here, \(y\) is the predicted property, \(W_{fc}\) is the weight matrix of the FCNN, \(b_{fc}\) is the bias vector, \(h_{pool}\) is the output feature vector of the pooling layer, and \(f\) is a non-linear activation function such as ReLU or sigmoid.
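As a concrete illustration of the three steps above, the following minimal NumPy sketch implements a CGCNN-style convolution, mean pooling, and a linear readout. It is a toy rendering of Eqs. (1)-(3), not the released CGCNN code; the array shapes, ReLU activation, and random toy inputs are illustrative assumptions.

```python
import numpy as np

def cgcnn_conv(h, edges, W, b):
    """One CGCNN-style convolution (cf. Eq. 1): each node aggregates a weighted
    sum of its neighbors' features plus its own self term, followed by a
    non-linear activation (ReLU here)."""
    out = h @ W.T + b                 # self term W h_i plus bias
    for i, j in edges:                # neighbor terms W h_j summed into node i
        out[i] += h[j] @ W.T
    return np.maximum(out, 0.0)

def mean_pool(h):
    """Global mean pooling (cf. Eq. 2): average node features into one crystal-level vector."""
    return h.mean(axis=0)

def predict_property(h, edges, conv_params, w_fc, b_fc):
    """Stack convolutions, pool, and apply a linear readout (cf. Eq. 3, identity activation)."""
    for W, b in conv_params:
        h = cgcnn_conv(h, edges, W, b)
    return float(w_fc @ mean_pool(h) + b_fc)

# toy usage: a 3-atom "crystal" with random 8-dimensional atom features
rng = np.random.default_rng(0)
h0 = rng.normal(size=(3, 8))
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]          # directed neighbor pairs
conv_params = [(0.1 * rng.normal(size=(8, 8)), np.zeros(8)) for _ in range(2)]
print(predict_property(h0, edges, conv_params, 0.1 * rng.normal(size=8), 0.0))
```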
### Materials Graph Network (MEGNET)
The MEGNET framework was developed and released by Chen et al. in 2019 [37] and is pictured in **Fig. S2(b)**. MEGNET uses elemental embeddings to encode periodic chemical trends that can be used to improve the performance of models with limited training data. Elemental embeddings are vector representations of elements that capture their chemical properties that are typically learned from a large dataset of crystals. When a new crystal is encountered, the embeddings can be used to predict the properties of interest. The MEGNET methodology is described below:
1. Graph representation of materials: MEGNET represents the crystal as a graph where the atoms are the nodes, and the edges represent the connections between the atoms. Each atom is associated with a set of features such as its atomic number, coordinates, and chemical environment.
2. Message passing algorithm: MEGNET uses a message-passing algorithm to capture the interactions between atoms in the crystal. Each atom sends a message to its neighboring atoms, which is a function of the node and edge features. The messages are then aggregated at each node and the resulting feature vector is used as input to a neural network.
3. Readout layer: The output of the message-passing algorithm is passed through a readout layer which maps the learned node and edge features to target properties, and a loss function is calculated to capture the difference between the predicted and actual values.
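The sketch below illustrates the elemental-embedding and message-passing ideas in a stripped-down form. It is not the MEGNET implementation: the embedding table, message function, and simple sum aggregation are assumptions made for illustration, and the actual framework also propagates edge and global state.

```python
import numpy as np

rng = np.random.default_rng(1)
ELEM_EMBED = 0.1 * rng.normal(size=(100, 16))   # toy elemental embedding table, one row per atomic number

def message_pass(atomic_numbers, edges, W_msg, W_upd):
    """One simplified message-passing step: each atom receives a message (a
    non-linear function of its neighbor's embedding) from every bonded neighbor,
    then updates its feature vector from its own embedding and the aggregated messages."""
    h = ELEM_EMBED[atomic_numbers]               # element-embedding lookup
    msgs = np.zeros_like(h)
    for i, j in edges:                           # message from atom j to atom i
        msgs[i] += np.tanh(h[j] @ W_msg)
    return np.tanh(np.concatenate([h, msgs], axis=1) @ W_upd)

# toy usage: a CdTe-like pair (atomic numbers 48 and 52) with one bond in each direction
h_new = message_pass(np.array([48, 52]), [(0, 1), (1, 0)],
                     W_msg=0.1 * rng.normal(size=(16, 16)),
                     W_upd=0.1 * rng.normal(size=(32, 16)))
print(h_new.shape)   # (2, 16)
```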
### Atomistic Line Graph Neural Network (ALIGNN)
ALIGNN is a novel approach developed by Choudhary et al. [38] that differs from CGCNN and MEGNET by considering three-body interactions (bond angles) in addition to two-body terms (bond lengths). ALIGNN
leverages both graph convolutional layers and line graph convolutional [55] layers to capture both short-range and long-range correlations in the crystal. The framework (pictured in **Fig. S2(c)**) is described below:
1. Atomic feature extraction: ALIGNN takes as input a graph representing the atomic structure of the crystal. Each node (atom) in the graph is associated with a set of atomic features, which includes properties such as electronegativity, group number, covalent radius, valence electrons, ionization energy, electron affinity, atomic block, and atomic volume. Each edge (bond) in the crystal graph is associated with the bond distance, while bond angles are captured through the line graph described below.
2. Graph convolutional layers: ALIGNN uses graph convolutional layers to update the feature vectors of each node based on the features of its neighboring nodes. In each layer, the feature vectors are updated using a weighted sum of the features of neighboring nodes, similar to other models. This step captures short-range interactions in the structure.
3. Line graph construction: To capture long-range correlations, ALIGNN constructs a line graph on top of the original crystal graph. The line graph has nodes that represent unique bonds between atoms, corresponding to edges in the crystal graph. The line graph edges connect pairs of bonds that share a central atom in the crystal graph, effectively capturing bond angle information. ALIGNN then applies another set of graph convolutional layers to the line graph, which updates the feature vectors of each bond based on the features of neighboring bonds. The information from the line graph is then propagated back to the original crystal graph to update the bond features in combination with the node features.
4. Feature refinement: After the line graph convolution, ALIGNN refines the feature vectors for each edge using a set of learnable transformations that help capture more complex interactions between atoms and bonds.
5. Graph pooling: ALIGNN aggregates the refined bond-level feature vectors into a single graph-level feature vector using a graph pooling operation that summarizes the relevant information from the entire crystal graph.
6. Output prediction: Finally, the graph-level feature vector is fed to an FCNN for making predictions.
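To make the line-graph construction in step 3 concrete, a small sketch is given below: bonds of the crystal graph become line-graph nodes featurized by bond length, and pairs of bonds sharing an atom become line-graph edges featurized by the bond angle. This is a simplified illustration that ignores periodic images and is not the ALIGNN package itself.

```python
import numpy as np
from itertools import combinations

def build_line_graph(positions, bonds):
    """Build the auxiliary line graph: each bond of the crystal graph becomes a
    line-graph node (featurized by bond length), and two bonds sharing an atom
    are joined by a line-graph edge (featurized by the angle between them)."""
    bond_vecs = {b: positions[b[1]] - positions[b[0]] for b in bonds}
    node_feats = {b: float(np.linalg.norm(bond_vecs[b])) for b in bonds}    # bond lengths
    edge_feats = []
    for b1, b2 in combinations(bonds, 2):
        if set(b1) & set(b2):                                               # bonds share an atom
            v1, v2 = bond_vecs[b1], bond_vecs[b2]
            cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
            angle = float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
            edge_feats.append((b1, b2, angle))
    return node_feats, edge_feats

# toy 3-atom fragment: two 1.5 Å bonds meeting at a right angle
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]])
nodes, edges = build_line_graph(pos, [(0, 1), (1, 2)])
print(nodes)    # {(0, 1): 1.5, (1, 2): 1.5}
print(edges)    # [((0, 1), (1, 2), 90.0)]
```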
## 5 Results and Discussion
### Testing GNN models on Dataset 1
As a first step, we tested the performance of CGCNN, MEGNET, and ALIGNN for predicting the q=0 E\({}^{f}\) for Dataset 1 only. For each model, the data is split 60:20:20 into training, validation, and test sets. The CGCNN training protocol has several important hyperparameters that must either be kept fixed at recommended values or rigorously tuned over a set of possible values, such as the number of properties used in the atom feature vector, the number of convolutional layers, the length of the learned atom feature vector, the number of hidden layers, the regularization term, the scaling factor of the Gaussian initialization of weights, step size of the Adam optimizer [56], dropout fraction, batch size, number of epochs, and the cut-off distance r\({}_{c}\) for constructing the crystal graph. Here, we optimized only the batch size, epochs and r\({}_{c}\), keeping the rest of the hyperparameters the same as in the original CGCNN publication [36]. A parity plot for the best CGCNN predictions on Dataset 1 (for A-rich conditions) is pictured in **Fig. 4(a)**, showing RMSE of 0.19 eV, 0.41 eV, and 0.35 eV respectively on the training, validation, and test sets. These errors are already far lower than our previously published DFE test prediction errors of \(\sim\) 0.6 eV for defects in Cd-based chalcogenides [24] and \(\sim\) 1 eV across many group IV, II-VI, and III-V semiconductors [12], as well as being highly competitive with the state of the art for such predictions. Learning curves showing how the CGCNN errors converge as the epochs, batch size, and r\({}_{c}\) increase are presented in **Fig. S3**.
Next, we trained a MEGNET model as shown in **Fig. 4(b)** following the same strategy. Notable hyperparameters include the number of interaction blocks, number of hidden layers, hidden layer size, learning rate, regularization strength, dropout rate, batch size, activation function, number of features assigned to each bond in the input graph, and r\({}_{c}\). Here, we only optimized the number of epochs, batch size, and r\({}_{c}\), and the rest of the parameters are directly adopted from the MEGNET publication [37]. We find that MEGNET once again performs much better than previous composition-based models, but shows slightly larger errors than CGCNN with RMSE of 0.36 eV, 0.42 eV, and 0.40 eV on the training, validation, and test sets, respectively. The test error is close enough to the CGCNN error of 0.35 eV to justify the use of MEGNET over CGCNN, especially since MEGNET significantly corrects any possible overfitting in the CGCNN models by yielding roughly similar training, validation, and test errors. MEGNET has a more complex model architecture than CGCNN and includes elemental embeddings encoding periodic chemical trends, which may allow better generalization to unseen data.
Finally, we trained an ALIGNN model on Dataset 1 and found that it yields better performance than both CGCNN and MEGNET, with the slight concern of model overfitting alleviated by the remarkably low validation and test RMSEs. As shown in **Fig. 4(c)**, the test RMSE is 0.15 eV, which represents a 99 % accuracy considering the DFE values range from 0 eV to 15 eV. The improvement over CGCNN and MEGNET could be attributed to the line graph convolution step that captures long-range interactions, which may be important for particular point defects whose presence affects atomic arrangements beyond just the first nearest neighbor shell, causing larger lattice distortions. For training ALIGNN models, we use r\({}_{c}\) = 8 Å, 12 nearest neighbors, 4 graph convolutional layers, 4 line graph layers, learning rate = 0.001, batch size = 16, the AdamW optimizer, and 150 epochs. Results of hyperparameter optimization with ALIGNN, a more computationally expensive step than for the other two methods, are presented in **Fig. S4**; this cost motivates reusing much of the same hyperparameters as in past work [38]. However, the training time is still reasonable and the accuracy payoff is immense; thus, we use ALIGNN as the GNN scheme of choice going forward, and discuss prediction performances for Datasets 2, 3, and 4 in the next subsection. **Table 2** lists the optimized training, validation, and test set RMSE values for CGCNN, MEGNET, and ALIGNN models trained on Dataset 1.
### Extending ALIGNN to Datasets 2, 3, and 4
To determine whether the GNN models optimized so far could directly be applied to impurities, complexes, and alloys, we first tested the ALIGNN model trained on dataset 1 for its predictive power on datasets 2, 3, and 4, before suitably re-optimizing the models by adding more training data points. **Fig. 5** shows the prediction performance of the ALIGNN model trained only on dataset 1 for dataset 2 (a), dataset 3 (b), and dataset 4 (c), along with the improvement in the predictions when 50 % of any new dataset is added to the training set and the ALIGNN model is retrained. The RMSE for the dataset 1-trained ALIGNN model is as high as 2.17 eV for dataset 2, 2.68 eV for dataset 3, and 2.98 eV for dataset 4, showing poor performance that worsens going from native defects to impurities to alloys to complexes. The structure-property relationships learned from native defects alone cannot be generalized to extrinsic species or non-binary compounds, or indeed, the presence of multiple simultaneous defects that will inevitably cause far higher distortions and atomic rearrangements compared to single defects.
Upon combining 50 % of each dataset (chosen randomly) with dataset 1 and re-optimizing the model using the same approach as before, and performing necessary hyperparameter optimization anew, RMSE values improve drastically to 0.36 eV for dataset 2, 0.70 eV for dataset 3, and 0.18 eV for dataset 4. These errors will go further down as more data is added to the training set, showing that each type of defect data needs to be adequately represented during the training process for generalizing the ALIGNN predictive power. This exercise provides insights into the challenges associated with representing and predicting the energetics of defects in different defect categories with a limited training dataset and demonstrates the importance of training ML models on comprehensive datasets to improve their performance
Fig. 4: Parity plots for rigorously optimized (a) CGCNN, (b) MEGNET, and (c) ALIGNN models, trained on Dataset 1 for A-rich chemical potential conditions and q = 0.
across various defect types.
Next, we trained predictive models by combining all four datasets, for all charge states and chemical potential conditions. For charged defects, the DFE value is taken to be E\({}^{f}\)(E\({}_{F}\)=0), because by predicting this value for each charge state, the E\({}^{f}\) vs E\({}_{F}\) plot can be extended across the band gap region by using straight lines with slope = q. Thus, a total of 10 different ALIGNN models are trained for the combined dataset, for E\({}^{f}\)(q = +2, E\({}_{F}\)=0), E\({}^{f}\)(q = +1, E\({}_{F}\)=0), E\({}^{f}\)(q = 0, E\({}_{F}\)=0), E\({}^{f}\)(q = -1, E\({}_{F}\)=0), and E\({}^{f}\)(q = -2, E\({}_{F}\)=0), at A-rich and B-rich chemical potential conditions. As seen in **Fig. S5**, there are a handful of outliers in the parity plots, especially in the case of E\({}^{f}\)(q=0, E\({}_{F}\)=0), which may result from some structures getting stuck in local minima during DFT optimization, and possible inaccuracies in the GNN model that may be fixed with more data and/or hyperparameter optimization. We removed a few notable outliers and trained the models again, to obtain the best ALIGNN predictive models that may be applied for new data points. The q=+1, q=0, and q=-1 ALIGNN models at A-rich conditions are pictured in **Fig. 6(a), (b)**, and **(c)**, respectively, and the remaining 7 models are presented in **Figs. S6 and S7**. These models show very respectable RMSE values, suggesting that ALIGNN is capable of effectively learning the structure-property correlations in each dataset and each charge state, and generalizing them across all data types. Test prediction errors for q=+2, q=+1, q=0, q=-1, and q=-2 are found to be 0.30 eV, 0.23 eV, 0.32 eV, 0.25 eV, and 0.26 eV, respectively, representing a 98 % accuracy. The slightly larger errors for the neutral defects arise from the larger structural diversity of q=0 defect structures compared to the charged defect structures, also manifesting in much larger numbers of q=0 data points (e.g., 13,966 in dataset 2) than q=+1 (3990 in dataset 2) or q=-1 (3568 in dataset 2). The training, validation, and test errors for the best ALIGNN models for different charge states under A-rich conditions are listed in **Table 3**.
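Since each model predicts E\({}^{f}\) at E\({}_{F}\) = 0 for a single charge state, the full formation-energy diagram follows from straight lines of slope q. The short sketch below assembles such a diagram and its lower envelope; the function names and the toy numbers are illustrative, not values from our datasets.

```python
import numpy as np

def formation_energy_diagram(ef_at_zero, e_gap, n_grid=200):
    """Extend each predicted E^f(q, E_F=0) as a straight line
    E^f(q, E_F) = E^f(q, 0) + q * E_F across the band gap and return the lower
    envelope: the lowest formation energy and the stable charge state at every Fermi level."""
    e_fermi = np.linspace(0.0, e_gap, n_grid)
    charges = np.array(sorted(ef_at_zero))
    lines = np.array([ef_at_zero[q] + q * e_fermi for q in charges])   # (n_charges, n_grid)
    stable = lines.argmin(axis=0)
    return e_fermi, lines.min(axis=0), charges[stable]

# toy example: hypothetical E^f(q, E_F=0) values (eV) for one defect in a 1.5 eV gap host
ef0 = {+2: 0.8, +1: 1.2, 0: 1.9, -1: 3.0, -2: 4.4}
e_f, envelope, stable_q = formation_energy_diagram(ef0, e_gap=1.5)
print(stable_q[0], stable_q[-1])   # stable charge state near the VBM vs near the CBM
```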
### ALIGNN-unoptimized vs DFT-optimized energies
The next objective is to utilize the best ALIGNN models to make predictions for new defects and perform screening of potentially low-energy defects based on predicted DFEs. The caveat here is that for any new defect, one could only really generate an "unoptimized" defect structure, and thus the only ALIGNN prediction that can be made is the unoptimized DFE. As described earlier, a full DFT optimization of any defect structure is obviously a time-consuming step, involving structural relaxation until atomic forces become close to zero and the energy of the crystal reaches a minimum. In contrast, ALIGNN prediction of unoptimized DFE can be performed in seconds and thus used to estimate the unoptimized energies of hundreds of thousands of defect structures, which could then be used
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**GNN Scheme** & **Train RMSE (eV)** & **Validation RMSE (eV)** & **Test RMSE (eV)** \\ \hline CGCNN & 0.19 & 0.41 & 0.35 \\ \hline MEGNET & 0.36 & 0.42 & 0.40 \\ \hline ALIGNN & 0.03 & 0.16 & 0.15 \\ \hline \end{tabular}
\end{table}
Table 2: Training, validation, and test set RMSEs for different GNN models trained on Dataset 1 at A rich conditions.
Figure 5: Parity plots for ALIGNN models trained on (q=0, A-rich) (a) Dataset 1 + Dataset 2, (b) Dataset 1 + Dataset 3, and (c) Dataset 1 + Dataset 4. (l) refers to models trained purely on Dataset 1 and tested on Dataset 2, 3 or 4, whereas (l) refers to 50 % of the new dataset added to the training set with Dataset 1.
as a surrogate for screening based on some upper bound values [57, 58]. However, ALIGNN predictions could also, in theory, be used to replace DFT-based energy estimates within a gradient descent-type optimization process [59, 60, 61], or using brute-force structure generation and energy evaluation, and thus quickly yield low energy structures at near-DFT accuracy for any hypothetical crystalline defects.
To test the correspondence between ALIGNN-unoptimized energies and DFT-optimized energies, we plotted the ALIGNN-predicted E\({}^{f}\)(q=0) (ALIGNN-unopt) on pristine structures of all defects across datasets 1, 2, 3, and 4, against the DFT-optimized E\({}^{f}\)(q=0) (DFT-opt), in **Fig. 7 (a)**. We expect the ALIGNN-unopt DFE values to always be higher than the DFT-opt values, and this is indeed the case for \(>\) 95 % of the data points. However, we do find the opposite to be true for several defects, which is perhaps a consequence of the statistical nature of ML predictions, which are very accurate on average but may show large errors for certain outliers. Importantly, notable differences between ALIGNN-unopt and DFT-opt values are seen for many defects where large structural changes are expected upon full relaxation. Some examples include V\({}_{Si}\) in SiC (8.28 eV difference), V\({}_{In}\) in InN (7.63 eV difference), and Cs\({}_{i,\alpha r}\) in SiC (7.17 eV difference). By examining the 300 defects out of this total list of 1747 defects (plotted in **Fig. 7 (a)**) which show the largest ALIGNN-unopt vs DFT-opt energy differences, we find an average difference of \(\sim\) 1 eV; thus, we could apply a rough general equivalence of DFT-opt = ALIGNN-unopt - 1 eV, and use this information to perform a high-throughput screening of likely low energy defects. Similar trends are expected to hold for ALIGNN-unopt vs DFT-opt energies of charged defects as well.
Looking at only the DFT-opt values, we find that 170 defects have DFE \(<\) 0 eV, with some examples including N\({}_{S}\) in ZnS (-3.76 eV), N\({}_{As}\) in GaAs (-3.2 eV), and N\({}_{Te}\) in CdTe (-2.64 eV). Whereas a look at the ALIGNN-unopt values yields 351 defects with DFE \(<\) 1 eV which includes all 170 low energy defects from DFT, meaning that the suggested upper-bound energy screening procedure should help account for all potentially low energy defects in addition to a few unstable defects which may be eliminated subsequently. On average, the computational cost for optimizing a single-point defect in a 64-atom supercell amounts to approximately 2500 core hours for the neutral state and 1000 core hours each for charged calculations. Running only static, single-shot calculations on pristine structures requires around 400 core hours in total. On the other hand, ALIGNN-unopt predictions are made in seconds with minimal computing expense, and it is imperative to
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Charge** & **Train RMSE (eV)** & **Validation RMSE (eV)** & **Test RMSE (eV)** \\ \hline q = +2 & 0.10 & 0.25 & 0.30 \\ \hline q = +1 & 0.07 & 0.27 & 0.23 \\ \hline q = 0 & 0.20 & 0.30 & 0.32 \\ \hline q = -1 & 0.11 & 0.26 & 0.25 \\ \hline q = -2 & 0.06 & 0.26 & 0.26 \\ \hline \end{tabular}
\end{table}
Table 3: Training, validation, and test set RMSEs for ALIGNN models trained on different charge states at A rich condition.
Figure 6: Parity plots for ALIGNN models trained on Datasets 1 + 2 + 3 + 4, for (a) E\({}^{f}\)(q=+1, E\({}_{F}\)=0), (b) E\({}^{f}\)(q=0, E\({}_{F}\)=0), and (c) E\({}^{f}\)(q=-1, E\({}_{F}\)=0), under A-rich chemical potential conditions. Parity plots for E\({}^{f}\)(q=+2, E\({}_{F}\)=0) and E\({}^{f}\)(q=-2, E\({}_{F}\)=0), as well as all parity plots for B-rich DFEs, are presented in the SI.
use these predictions as descriptors for low energy defects. Visualizing the ALIGNN-unopt vs DFT-opt values for q = +2, +1, -1, and -2, in **Fig. S8** shows that there are more cases where the unoptimized energy is unexpectedly higher than the optimized energy. This may be a factor of the charged models never encountering pristine structures, as those are typically only utilized in neutral calculations and training sets. The charged models are only trained on partially optimized structures close to the ground state, or simply the ground state defect structures, making it such that q \(\neq\) 0 ALIGNN predictions for pristine structures are less reliable than the neutral state predictions, even though the best charged models still have very low test RMSE values.
Finally, we examine the accuracy of predicting charge transition levels (CTLs) from ALIGNN compared to optimized DFT predictions. For any possible defect, a pristine unoptimized structure is created as described earlier and its DFE values are predicted at E\({}_{F}\) = 0 eV for q = 0, +2, +1, -1, and -2. Using these values, E\({}^{f}\) vs E\({}_{F}\) plots are produced based on the lowest energies, and the locations where the defect transitions from one stable charge state to another, referred to as \(\varepsilon\)(q1/q2), are identified. This effectively yields ALIGNN-unopt values for \(\varepsilon\)(+2/+1), \(\varepsilon\)(+1/0), \(\varepsilon\)(0/-1), and \(\varepsilon\)(-1/-2), which may then be compared with corresponding DFT-opt values available for datasets 1, 2, and 3. **Fig. 7 (b)** shows the ALIGNN-unopt \(\varepsilon\)(+2/+1) plotted against the DFT-opt \(\varepsilon\)(+2/+1), revealing substantial scatter and a lack of overall correlation. Similar behavior is observed for other CTLs as well, as shown in **Fig. S9**. This is not surprising, and indicates that the relative stability of different charged defects, and thus their transitions, is sensitive to defect geometry and would thus require some level of optimization. Thus, we conclude that although ALIGNN-unopt E\({}^{f}\)(E\({}_{F}\)=0) predictions within a threshold will provide some idea about the likelihood of formation of defects in a compound and its possible defect tolerance and dopability, exact CTLs will be harder to determine without optimization, which means ALIGNN-unopt alone cannot reveal the shallow or deep level nature of important low energy defects.
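For reference, a CTL between two charge states can be read off analytically as the Fermi level at which their formation-energy lines cross; the helper below (hypothetical naming, toy numbers) mirrors how the ALIGNN-unopt \(\varepsilon\)(q1/q2) values compared in **Fig. 7 (b)** would be extracted from the predicted E\({}^{f}\)(E\({}_{F}\)=0) values.

```python
def transition_level(ef0_q1, ef0_q2, q1, q2):
    """Charge transition level eps(q1/q2): the Fermi level at which the
    formation-energy lines of charge states q1 and q2 cross, i.e. where
    E^f(q1, 0) + q1 * E_F equals E^f(q2, 0) + q2 * E_F."""
    return (ef0_q2 - ef0_q1) / (q1 - q2)

# toy values (eV): eps(+2/+1) for the hypothetical defect used in the previous sketch
print(transition_level(ef0_q1=0.8, ef0_q2=1.2, q1=+2, q2=+1))   # 0.4 eV above the VBM
```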
### ALIGNN-based defect structure optimization
We employed our trained ALIGNN model to optimize a crystal containing point defects using an iterative distortion approach. This gradient-free optimization method entails subjecting the initial defective crystal structure to a series of systematic atomic displacements. Each atom in the crystal is displaced based on a normal distribution, and the displacements are carefully adjusted to ensure no overall translation of the system. The magnitude of these displacements ranges from zero (representing no displacement) to a maximum value specified as a user input. The best ALIGNN models are applied to predict the E\({}^{f}\) of all generated crystals in all 5 q states. This procedure is iteratively repeated and the applied distortions are adjusted until the predicted DFE becomes as low as possible. This approach allows the efficient exploration of the energy landscape of the defective crystal, seeking configurations that approach the optimal structure. Another advantage
Fig. 7: (a) ALIGNN-unoptimized vs DFT-optimized E\({}^{f}\)(q=0, E\({}_{F}\)=0) under A-rich chemical potential conditions. (b) ALIGNN-unoptimized vs DFT-optimized \(\varepsilon\)(+2/+1) charge transition levels. All data are shown for combined datasets 1 + 2 + 3 + 4.
of this gradient-free approach is that it does not rely on explicit training for atomic forces, which significantly increases the memory cost of GNNs at both training and prediction time.
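A minimal version of this iterative-distortion loop is sketched below. Here `predict_dfe` is a stand-in for the trained ALIGNN predictor (replaced by a toy quadratic in the usage example), and the zero-mean displacements, greedy acceptance rule, and fixed step count are simplifying assumptions rather than the exact protocol.

```python
import numpy as np

def optimize_defect(positions, predict_dfe, n_steps=2000, max_disp=0.1, seed=0):
    """Gradient-free refinement: repeatedly apply random zero-mean atomic
    displacements and keep a trial geometry only if the GNN-predicted
    formation energy decreases."""
    rng = np.random.default_rng(seed)
    best_pos = positions.copy()
    best_e = predict_dfe(best_pos)
    for _ in range(n_steps):
        disp = rng.normal(scale=max_disp, size=best_pos.shape)
        disp -= disp.mean(axis=0)           # remove any net translation of the cell contents
        trial = best_pos + disp
        e = predict_dfe(trial)
        if e < best_e:                      # greedy acceptance of energy-lowering moves
            best_pos, best_e = trial, e
    return best_pos, best_e

# toy usage with a stand-in "predictor" (a quadratic well), not a real ALIGNN model
target = np.zeros((4, 3))
toy_model = lambda pos: float(np.sum((pos - target) ** 2))
pos0 = np.random.default_rng(1).normal(size=(4, 3))
print(optimize_defect(pos0, toy_model, n_steps=500)[1])   # the predicted "energy" decreases monotonically
```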
For a demonstration of this procedure, we pick two defects in the neutral state, namely Re\({}_{\text{Zn}}\) and La\({}_{\text{Zn}}\), both in ZnO. The ALIGNN-based defective structure optimization scheme is pictured in **Fig. 8(a)** and results of the optimization procedure for Re\({}_{\text{Zn}}\) and La\({}_{\text{Zn}}\) in ZnO are presented in **Fig. 8(b)** and **(c)**, respectively. After applying 644 consecutive distortions on the Re\({}_{\text{Zn}}\) geometry, the ALIGNN-predicted DFE comes down from 5.93 eV to 5.31 eV, which is very close to the DFT-optimized DFE of 5.30 eV. For La\({}_{\text{Zn}}\), applying a total of 2407 distortions helps reduce the ALIGNN-predicted DFE from 5.23 eV to 3.35 eV, which is more than 1 eV higher than the DFT-optimized DFE of 2.20 eV. ALIGNN-optimization required approximately 12 minutes for Re\({}_{\text{Zn}}\) and 40 minutes for La\({}_{\text{Zn}}\) using a standard laptop to generate defect configurations and make instant ALIGNN predictions, which is a vast improvement on the \(\sim\) 2500 core hours each DFT-optimization would require. Thus, this procedure will efficiently yield lower energy defective structures, though an exact match with DFT may not occur for every system. Finally, examining the lowest energy ALIGNN and DFT structures shows remarkable similarities between the two, as pictured in **Fig. S10**.
Next, we applied the same optimization scheme to 6 selected defects across 3 compounds, for different charge states, and plotted their ALIGNN-optimized E\({}^{f}\) vs E\({}_{F}\) in **Fig. 9** alongside the corresponding DFT-optimized plots. We find that ALIGNN-optimization produces defect formation energy plots for Co\({}_{\text{Zn}}\) and Si\({}_{\text{Zn}}\) in ZnS (II-VI semiconductor), Rh\({}_{i}\) and B\({}_{i}\) in AlP (III-V semiconductor), and Li\({}_{Si}\) and C\({}_{i}\) in Si (group IV semiconductor), that match almost perfectly with DFT DFEs for all cases. The most stable charge states, transition levels, and E\({}^{f}\) magnitudes are predicted to be very similar from both ALIGNN and DFT. **Fig. 10** further shows the CTLs for a few selected defects plotted from ALIGNN and DFT, showing both ALIGNN-unopt and ALIGNN-opt values. It can be seen that ALIGNN-optimization brings down the DFT vs ALIGNN RMSE from 0.37 eV to 0.17 eV, which is a very respectable CTL prediction error that is far better than previous composition-based models and also very commensurate
Fig. 8: (a) ALIGNN-based defect structure optimization scheme, demonstrated for Re\({}_{\text{Zn}}\) (b) and La\({}_{\text{Zn}}\) (c) in ZnO under Zn-rich chemical potential conditions.
with the DFT-experiment RMSE of 0.21 eV established for the same chemical space.
Our results demonstrate the effectiveness of GNNs in guiding crystal structure optimization and highlight their potential for accelerating materials discovery and design. It should be noted that the geometry optimization process is not fully systematic, and there is no clear answer on how many distortions or atomic displacements must be applied for a given defect, although the unoptimized vs optimized energy visualization provides some insight into different types of defects. It is not easy to determine when to stop the optimization process, other than when the GNN-predicted energy does not decrease anymore, which does not negate the risk of trapping in local minima. This process can also get expensive when applied to hundreds of thousands of defects, especially depending on the values of hyperparameters such as r\({}_{c}\); nevertheless, it remains meaningfully faster than complete DFT optimization.
### High-throughput screening of defects
The best ALIGNN models were finally applied to predict the E\({}^{f}\) (E\({}_{F}=0\) eV) of all 12,474 possible single defects and impurities across the entire chemical space, in all 5 q states, at A-rich chemical potential conditions. These predictions were then used to generate E\({}^{f}\) vs E\({}_{F}\) plots for all defects spanning the experimental band gap of the semiconductor. To screen for potentially low energy defects, we look for E\({}^{f}\) dropping below a stringent threshold, set at 1 eV for neutral defects and 0 eV for charged defects, over any portion of the band gap. This yields a total of 1,281 defects that are very likely to be stable based on the ALIGNN-unopt predictions, though many more such low-energy defects may exist once ALIGNN-optimization is performed. **Table 4** contains a few examples of low-energy defects identified by ALIGNN. Predictions of the DFE by ALIGNN for all 12,474 possible defects at A-rich chemical potential conditions are added to the SI. This provides a great starting point for future studies and the quick identification of n-type or
Fig. 10: ALIGNN-optimized and ALIGNN-unoptimized defect charge transition levels plotted against corresponding DFT-optimized values.
Fig. 9: ALIGNN-optimized and DFT-optimized defect formation energy plots for two selected defects each in (a) ZnS under Zn-rich conditions, (b) AlP under Al-rich conditions, and (c) Si under Si-rich conditions.
p-type dopants in any compound of interest.
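Combining the line construction shown earlier with the stated thresholds gives a simple screening filter. The acceptance rule below (neutral E\({}^{f}\) under 1 eV, or any charged-state line dropping below 0 eV within the gap) is our reading of the criterion and should be treated as an assumption; the numbers in the usage line are toy values.

```python
def is_low_energy(ef_at_zero, e_gap, neutral_cut=1.0, charged_cut=0.0):
    """Flag a defect as potentially low-energy if its neutral formation energy is
    below `neutral_cut`, or if any charged-state line E^f(q,0) + q*E_F drops below
    `charged_cut` somewhere in the band gap (E_F in [0, e_gap])."""
    if ef_at_zero.get(0, float("inf")) < neutral_cut:
        return True
    for q, ef0 in ef_at_zero.items():
        # each line is linear in E_F, so its minimum over the gap sits at an endpoint
        if q != 0 and min(ef0, ef0 + q * e_gap) < charged_cut:
            return True
    return False

print(is_low_energy({0: 1.6, -1: 0.8}, e_gap=1.5))   # True: the q=-1 line goes negative near the CBM
```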
## 6 Conclusions
In this work, we used state-of-the-art crystal graph-based neural networks to develop predictive models for defect formation energies in a chemical space of zincblende semiconductors, by learning from a substantial computational dataset containing optimized and partially optimized geometries. Three established GNN techniques, namely CGCNN, MEGNET, and ALIGNN, are tested. The ALIGNN scheme shows the best prediction performance and is capable of high-accuracy prediction for native defects, impurities, complexes, and defects in alloys. While ALIGNN predictions made on hypothetical pristine defect structures deviate significantly from DFT-optimized defect formation energies, we demonstrate an ALIGNN-based defective geometry optimization approach which helps bridge the gap and bring down errors in predicting charge transition levels. The ALIGNN-unoptimized predictions made for the entire chemical space of \(>\) 12,000 possible defects are released with this manuscript, along with necessary code and training data. We believe the DFT-GNN approach presented in this work will be highly consequential for screening across optoelectronically active point defects and functional dopants in technologically important semiconductors, even being applicable to all kinds of defect complexes. The next steps would involve developing a package to perform ALIGNN-based defect optimization, expanding the models to other semiconductor classes and higher levels of theory, and testing alternative ML and GNN approaches for further improvement.
## Conflicts of Interest
There are no conflicts to declare.
## Data Availability
All DFT data and GNN predictions are included with the SI as .csv files. All code can be found on GitHub: Link
## Acknowledgements
A.M.K. acknowledges support from the School of Materials Engineering at Purdue University under account number F.10023800.05.002, as well as support from Argonne National Laboratory under sub-contracts 21090590 and 22057223. A.M.K. also acknowledges insightful discussions with Dr. Mariana Bertoni at Arizona State University, Dr. Prashun Gorai at Colorado School of Mines, and Dr. Maria K.Y. Chan at Argonne National Laboratory. This research used resources from the National Energy Research Scientific Computing Center (NERSC), the Laboratory Computing Resource Center (LCRC) at Argonne National Laboratory, and the Rosen Center for Advanced Computing (RCAC) clusters at Purdue University. P.G. acknowledges IIT Madras for providing financial assistance through the "International Immersion Experience Travel Award" to visit Purdue University. Please note commercial software is identified to specify procedures. Such identification does not imply a recommendation by the National Institute of Standards and Technology (NIST).
## Author Contributions
A.M.K. conceived and planned the research project. DFT computations and GNN model training were performed by M.H.R., P.G., P.M., and A.M.K.; S.K.Y., G.P., B.D., and K.C. provided constant guidance and software support for the project and for editing the manuscript. M.H.R. and A.M.K. took the lead on writing and editing.
|
2305.20057 | Three-Way Trade-Off in Multi-Objective Learning: Optimization,
Generalization and Conflict-Avoidance | Multi-objective learning (MOL) problems often arise in emerging machine
learning problems when there are multiple learning criteria, data modalities,
or learning tasks. Different from single-objective learning, one of the
critical challenges in MOL is the potential conflict among different objectives
during the iterative optimization process. Recent works have developed various
dynamic weighting algorithms for MOL such as MGDA and its variants, where the
central idea is to find an update direction that avoids conflicts among
objectives. Albeit its appealing intuition, empirical studies show that dynamic
weighting methods may not always outperform static ones. To understand this
theory-practical gap, we focus on a new stochastic variant of MGDA - the
Multi-objective gradient with Double sampling (MoDo) algorithm, and study the
generalization performance of the dynamic weighting-based MoDo and its
interplay with optimization through the lens of algorithm stability. Perhaps
surprisingly, we find that the key rationale behind MGDA -- updating along
conflict-avoidant direction - may hinder dynamic weighting algorithms from
achieving the optimal ${\cal O}(1/\sqrt{n})$ population risk, where $n$ is the
number of training samples. We further demonstrate the impact of the
variability of dynamic weights on the three-way trade-off among optimization,
generalization, and conflict avoidance that is unique in MOL. We showcase the
generality of our theoretical framework by analyzing other existing stochastic
MOL algorithms under the framework. Experiments on various multi-task learning
benchmarks are performed to demonstrate the practical applicability. Code is
available at https://github.com/heshandevaka/Trade-Off-MOL. | Lisha Chen, Heshan Fernando, Yiming Ying, Tianyi Chen | 2023-05-31T17:31:56Z | http://arxiv.org/abs/2305.20057v3 | Three-Way Trade-Off in Multi-Objective Learning: Optimization, Generalization and Conflict-Avoidance
###### Abstract
Multi-objective learning (MOL) problems often arise in emerging machine learning problems when there are multiple learning criteria or multiple learning tasks. Recent works have developed various _dynamic weighting_ algorithms for MOL such as MGDA and its variants, where the central idea is to find an update direction that _avoids conflicts_ among objectives. Albeit its appealing intuition, empirical studies show that dynamic weighting methods may not always outperform static ones. To understand this theory-practical gap, we focus on a new stochastic variant of MGDA - the Multi-objective gradient with Double sampling (MoDo) algorithm, and study the generalization performance of the dynamic weighting-based MoDo and its interplay with optimization through the lens of algorithm stability. Perhaps surprisingly, we find that the key rationale behind MGDA - updating along conflict-avoidant direction - may _hinder_ dynamic weighting algorithms from achieving the optimal \(\mathcal{O}(1/\sqrt{n})\) population risk, where \(n\) is the number of training samples. We further demonstrate the variability of dynamic weights on the three-way trade-off among optimization, generalization, and conflict avoidance that is unique in MOL.
## 1 Introduction
Multi-objective learning (MOL) emerges frequently in recent machine learning problems such as learning under fairness and safety constraints [45]; learning across multiple tasks including multi-task learning [36] and meta-learning [43]; and, learning across multiple agents that may not share a global utility including federated learning [37] and multi-agent reinforcement learning [29].
This work considers solving the empirical version of MOL defined on the training dataset as \(S=\{z_{1},\ldots,z_{n}\}\). The performance of a model \(x\in\mathbb{R}^{d}\) on a datum \(z\) for the \(m\)-th objective is denoted as \(f_{z,m}:\mathbb{R}^{d}\mapsto\mathbb{R}\), and its performance on the entire training dataset \(S\) is measured by the \(m\)-th empirical objective \(f_{S,m}(x)\) for \(m\in[M]\). MOL optimizes the vector-valued objective, given by
\[\min_{x\in\mathbb{R}^{d}}\ \ F_{S}(x)\coloneqq[f_{S,1}(x),\ldots,f_{S,M}(x)]. \tag{1.1}\]
One natural method for solving (1.1) is to optimize the (weighted) average of the multiple objectives, also known as _static or unitary weighting_ [41, 15]. However, this method may face challenges due
to _potential conflicts_ among multiple objectives during the optimization process; e.g., conflicting gradient directions \(\langle\nabla f_{S,m}(x),\nabla f_{S,m^{\prime}}(x)\rangle<0\). A popular alternative is thus to _dynamically weight_ gradients from different objectives to avoid conflicts and obtain a direction \(d(x)\) that optimizes all objective functions jointly, which we call a _conflict-avoidant_ (CA) direction. Algorithms in this category include the multi-gradient descent algorithm (MGDA) [7], its stochastic variants [26, 36, 8] and other variants [4, 24, 28, 31]. While the idea of finding a CA direction in dynamic weighting-based approaches is very appealing, recent empirical studies reveal that dynamic weighting methods may not outperform static weighting in some MOL benchmarks [15, 41], especially when training involves stochastic updates and deep models. Unfortunately, the reason behind this empirical performance degradation is not fully understood and remains an open question.
To gain a deeper understanding of the dynamic weighting-based algorithms, a natural question is
**Q1:**_What are the major sources of errors in dynamic weighting-based MOL methods?_
To answer this question theoretically, we first introduce a proper measure of testing performance in MOL - the _Pareto stationary measure_ in terms of the population objectives, which will immediately imply stronger measures such as Pareto optimality under strongly convex objectives. We then decompose this measure into _generalization_ error and _optimization_ error and further introduce a new metric on the _distance to CA directions_ that is unique to MOL; see Sections 2.1 and 2.2.
To characterize the performance of MOL methods in a unified manner, we introduce a generic dynamic weighting-based MOL method that we term stochastic Multi-Objective gradient with DOuble sampling algorithm (**MoDo**), which uses a step size \(\gamma\) to control the change of dynamic weights. Roughly speaking, by controlling \(\gamma\), MoDo approximates MGDA (large \(\gamma\)) and static weighting algorithm (\(\gamma=0\)) as two special cases; see Section 2.3. We first analyze the generalization error of the model learned by MoDo through the lens of algorithmic stability [2, 12, 21] in the framework of statistical learning theory. To our best knowledge, this is the _first-ever-known_ stability analysis for MOL algorithms. Here the key contributions lie in defining a new notion of stability - MOL uniform stability and then establishing a tight upper bound (matching lower bound) on the MOL uniform stability for MoDo algorithm that involves two coupled sequences; see Section 3.1. We then analyze the optimization error of MoDo and its distance to CA directions, where the key contributions lie in relaxing _the bounded function value/gradient assumptions_ and significantly improving the convergence rate of state-of-the-art dynamic weighting-based MOL methods [8]; see Section 3.2.
Different from the stability analysis for single-objective optimization [12], the techniques used in our generalization and optimization analysis allow to remove conflicting assumptions and use larger step sizes to ensure both small generalization and optimization errors, which are of independent interest.
Given the holistic analysis of dynamic weighting methods provided in **Q1**, a follow-up question is
**Q2:**_What may cause the empirical performance degradation of dynamic weighting methods?_
_Visualizing MOL solution concepts._ To identify the root cause of this, we first compare different MOL algorithms in a toy example shown in Figure 1. We find MGDA can navigate along CA directions and converge to the empirical Pareto front under all initializations, while static weighting gets stuck in some initializations; at the same time, the empirical Pareto solution obtained by MGDA may incur a larger population risk than the suboptimal empirical solution obtained by the static weighting
Figure 1: An example from [25] with two objectives (1a and 1b) to show the three-way trade-off in MOL. Figures 0(c)-0(e) show the optimization trajectories, where the **black**\(\bullet\) marks initializations of the trajectories, colored from **red** (start) to yellow (end). The background solid/dotted contours display the landscape of the average empirical/population objectives. The gray/green bar marks empirical/population Pareto front, and the **black**\(\star\)/**green**\(\star\) marks solution to the average objectives.
method; finally, if the step size \(\gamma\) of dynamic weights is carefully tuned, MoDo can converge along CA directions to the empirical Pareto optimal solution that also generalizes well.
Aligned with this toy example, our theoretical results suggest a novel _three-way trade-off_ in the performance of dynamic weighting-based MOL algorithm; see Section 3.3. Specifically, it suggests that the step size for dynamic weighting \(\gamma\) plays a central role in the trade-off among convergence to CA direction, convergence to empirical Pareto stationarity, and generalization error; see Figure 2. In this sense, MGDA has an edge in convergence to CA direction but it could sacrifice generalization; the static weighting method cannot converge to CA direction but guarantees convergence to empirical Pareto solutions and their generalization. Our analysis also suggests that MoDo achieves a small population risk under a proper combination of step sizes and the number of iterations.
## 2 Problem Formulation and Target of Analysis
In this section, we first introduce the problem formulation of MOL, the target of analysis, the metric to measure its generalization, and then present the MGDA algorithm and its stochastic variant.
### Preliminaries of MOL
Denote the vector-valued objective function on datum \(z\) as \(F_{z}(x)=[f_{z,1}(x),\ldots,f_{z,M}(x)]\). The training and testing performance of \(x\) can then be measured by the empirical objective \(F_{S}(x)\) and the population objective \(F(x)\) which are, respectively, defined as \(F_{S}(x)\coloneqq\frac{1}{n}\sum_{i=1}^{n}F_{z_{i}}(x)\) and \(F(x)\coloneqq\mathbb{E}_{z\sim\mathcal{D}}[F_{z}(x)]\). Their corresponding gradients are denoted as \(\nabla F_{S}(x)\) and \(\nabla F(x)\in\mathbb{R}^{d\times M}\).
Analogous to the stationary solution and optimal solution in single-objective learning, we define Pareto stationary point and Pareto optimal solution for MOL problem \(\min\limits_{x\in\mathbb{R}^{d}}F(x)\) as follows.
**Definition 1** (Pareto stationary and Pareto optimal).: _If there exists a convex combination of the gradient vectors that equals zero, i.e., there exists \(\lambda\in\Delta^{M}\) such that \(\nabla F(x)\lambda=0\), then \(x\in\mathbb{R}^{d}\) is Pareto stationary. If there is no \(x\in\mathbb{R}^{d}\) with \(x\neq x^{*}\) such that, for all \(m\in[M]\), \(f_{m}(x)\leq f_{m}(x^{*})\), with \(f_{m^{\prime}}(x)<f_{m^{\prime}}(x^{*})\) for at least one \(m^{\prime}\in[M]\), then \(x^{*}\) is Pareto optimal. If there is no \(x\in\mathbb{R}^{d}\) such that for all \(m\in[M]\), \(f_{m}(x)<f_{m}(x^{*})\), then \(x^{*}\) is weakly Pareto optimal._
By definition, at a Pareto stationary solution, there is no common descent direction for all objectives. A necessary and sufficient condition for \(x\) being Pareto stationary for smooth objectives is that \(\min_{\lambda\in\Delta^{M}}\|\nabla F(x)\lambda\|=0\). Therefore, \(\min_{\lambda\in\Delta^{M}}\|\nabla F(x)\lambda\|\) can be used as a measure of Pareto stationarity (PS) [7; 9; 39; 26; 8]. We will refer to the aforementioned quantity as the _PS population risk_ henceforth and its empirical version as _PS empirical risk_ or _PS optimization error_. We next introduce the target of our analysis based on the above definitions.
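Because the PS measure \(\min_{\lambda\in\Delta^{M}}\|\nabla F(x)\lambda\|\) is used throughout, we include a small numerical routine for evaluating it. The simplex-constrained quadratic program is solved here by projected gradient descent, which is one standard choice and not necessarily the solver used in our experiments; the gradient values in the usage example are toy numbers.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v + (1.0 - css[rho]) / (rho + 1.0), 0.0)

def ps_measure(jac, n_iters=1000):
    """Approximate min_{lambda in simplex} ||jac @ lambda|| for a d x M matrix of
    per-objective gradients `jac`, via projected gradient descent on the squared
    norm lambda^T (jac^T jac) lambda."""
    gram = jac.T @ jac
    lam = np.ones(jac.shape[1]) / jac.shape[1]
    step = 1.0 / (2.0 * np.linalg.norm(gram, 2) + 1e-12)   # 1/L for this smooth quadratic
    for _ in range(n_iters):
        lam = project_simplex(lam - step * 2.0 * gram @ lam)
    return float(np.linalg.norm(jac @ lam)), lam

# toy check: two partially conflicting 2-D gradients; the PS measure is much smaller
# than either gradient's norm because a convex combination nearly cancels them
jac = np.array([[1.0, -0.9],
                [0.2,  0.3]])
print(ps_measure(jac))
```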
### Target of analysis and error decomposition
In existing generalization analysis for MOL, measures based on function values have been used to derive generalization guarantees in terms of Pareto optimality [5; 38]. However, for general nonconvex smooth MOL problems, it can only be guaranteed for an algorithm to converge to Pareto
Figure 2: An illustration of three-way trade-off among optimization, generalization, and conflict avoidance in the strongly convex case; \(\alpha\) is the step size for \(x\), \(\gamma\) is the step size for weights \(\lambda\), where \(o(\cdot)\) denotes a strictly slower growth rate, \(\omega(\cdot)\) denotes a strictly faster growth rate, and \(\Theta(\cdot)\) denotes the same growth rate. Arrows \(\downarrow\) and \(\uparrow\) respectively represent diminishing in an optimal rate and growing in a fast rate w.r.t. \(n\), while \(\searrow\) represents diminishing w.r.t. \(n\), but not in an optimal rate.
stationarity of the empirical objective, i.e., a small \(\min_{\lambda\in\Delta^{M}}\|\nabla F_{S}(x)\lambda\|\)[7, 9, 26, 8]. Thus, it is not reasonable to measure population risk in terms of Pareto optimality in this case. Furthermore, when all the objectives are convex or strongly convex, Pareto stationarity is a sufficient condition for weak Pareto optimality or Pareto optimality, respectively, as stated in Proposition 1.
**Proposition 1** ([39, Lemma 2.2]).: _If \(f_{m}(x)\) are convex or strongly-convex for all \(m\in[M]\), and \(x\in\mathbb{R}^{d}\) is a Pareto stationary point of \(F(x)\), then \(x\) is weakly Pareto optimal or Pareto optimal._
Next we proceed to decompose the PS population risk.
**Error Decomposition.** Given a model \(x\), the PS population risk can be decomposed into
\[\underbrace{\min_{\lambda\in\Delta^{M}}\|\nabla F(x)\lambda\|}_{\text{PS population risk }R_{\text{pop}}(x)}\;=\;\underbrace{\min_{\lambda\in\Delta^{M}}\|\nabla F(x)\lambda\|-\min_{\lambda\in\Delta^{M}}\|\nabla F_{S}(x)\lambda\|}_{\text{PS generalization error }R_{\text{gen}}(x)}\;+\;\underbrace{\min_{\lambda\in\Delta^{M}}\|\nabla F_{S}(x)\lambda\|}_{\text{PS optimization error }R_{\text{opt}}(x)} \tag{2.1}\]
where the optimization error quantifies the training performance, i.e., how well does model \(x\) perform on the training data; and the generalization error (gap) quantifies the difference of the testing performance on new data sampled from \(\mathcal{D}\) and the training performance, i.e., how well does the model \(x\) perform on unseen testing data compared to the training data.
Let \(A:\mathcal{Z}^{n}\mapsto\mathbb{R}^{d}\) denote a randomized MOL algorithm. Given training data \(S\), we are interested in the expected performance of the output model \(x=A(S)\), which is measured by \(\mathbb{E}_{A,S}[R_{\text{pop}}(A(S))]\). From (2.1) and linearity of expectation, it holds that
\[\mathbb{E}_{A,S}[R_{\text{pop}}(A(S))]=\mathbb{E}_{A,S}[R_{\text{gen}}(A(S))] +\mathbb{E}_{A,S}[R_{\text{opt}}(A(S))]. \tag{2.2}\]
**Distance to CA direction.** As demonstrated in Figure 1, the key merit of dynamic weighting over static weighting algorithms lies in its ability to navigate through conflicting gradients. Consider an update direction \(d=-\nabla F_{S}(x)\lambda\), where \(\lambda\) denotes the dynamic weights from the simplex \(\lambda\in\Delta^{M}\coloneqq\{\lambda\in\mathbb{R}^{M}\mid\mathbf{1}^{\top}\lambda=1,\;\lambda\geq 0\}\). To obtain the steepest CA direction in unconstrained learning, i.e., the direction that maximizes the minimum descent across all objectives, we can solve the following problem [9]
\[\text{CA direction}\quad d(x) =\operatorname*{arg\,min}_{d\in\mathbb{R}^{d}}\max_{m\in[M]} \left\{\langle\nabla f_{S,m}(x),d\rangle+\frac{1}{2}\|d\|^{2}\right\} \tag{2.3a}\] \[\stackrel{{\text{equivalent to}}}{{\Longleftrightarrow }}d(x) =-\nabla F_{S}(x)\lambda^{*}(x)\ \operatorname{s.t.}\ \lambda^{*}(x)\in \operatorname*{arg\,min}_{\lambda\in\Delta^{M}}\|\nabla F_{S}(x)\lambda\|^{2}. \tag{2.3b}\]
Defining \(d_{\lambda}(x)=-\nabla F_{S}(x)\lambda\) given \(x\in\mathbb{R}^{d}\) and \(\lambda\in\Delta^{M}\), we measure the distance to \(d(x)\) via [8]
\[\text{CA direction error}\qquad\qquad\mathcal{E}_{\text{ca}}(x,\lambda) \coloneqq\|d_{\lambda}(x)-d(x)\|^{2}. \tag{2.4}\]
With the above definitions of measures that quantify the performance of algorithms in different aspects, we then introduce a stochastic gradient algorithm for MOL that is analyzed in this work.
### A stochastic gradient algorithm for MOL
MGDA finds \(\lambda^{*}(x)\) in (2.3b) using the full-batch gradient \(\nabla F_{S}(x)\), and then constructs \(d(x)=-\nabla F_{S}(x)\lambda^{*}(x)\), a CA direction for all empirical objectives \(f_{S,m}(x)\). However, in practical statistical learning settings, the full-batch gradient \(\nabla F_{S}(x)\) may be costly to obtain, and thus one may resort to a stochastic estimate of \(\nabla F_{S}(x)\) instead. The direct stochastic counterpart of MGDA, referred to as the stochastic multi-gradient algorithm in [26], replaces the full-batch gradients \(\nabla f_{S,m}(x)\) in (2.3b) with their stochastic approximations \(\nabla f_{z,m}(x)\) for \(z\in S\), which, however, introduces a biased stochastic estimate of \(\lambda^{*}_{t+1}\), thus a biased CA direction; see [8, Section 2.3].
To provide a tight analysis, we introduce a simple yet theoretically grounded stochastic variant of MGDA - stochastic Multi-Objective gradient with DOuble sampling algorithm (MoDo). MoDo obtains an unbiased stochastic estimate of the gradient of problem (2.3b) through double sampling and iteratively updates \(\lambda\). At each iteration \(t\), denote \(z_{t,s}\) as an independent sample from \(S\) with \(s\in[3]\), and \(\nabla F_{z_{t,s}}(x_{t})\) as a stochastic estimate of \(\nabla F_{S}(x_{t})\). MoDo updates \(x_{t}\) and \(\lambda_{t}\) as
\[\lambda_{t+1} =\Pi_{\Delta^{M}}\left(\lambda_{t}-\gamma_{t}\nabla F_{z_{t,1}}(x _{t})^{\top}\nabla F_{z_{t,2}}(x_{t})\lambda_{t}\right) \tag{2.5a}\] \[x_{t+1} =x_{t}-\alpha_{t}\nabla F_{z_{t,3}}(x_{t})\lambda_{t+1} \tag{2.5b}\]
where \(\alpha_{t},\gamma_{t}\) are step sizes, and \(\Pi_{\Delta^{M}}(\cdot)\) denotes Euclidean projection to the simplex \(\Delta^{M}\). We have summarized the MoDo algorithm in Algorithm 1 and will focus on MoDo in the subsequent analysis.
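To make the double-sampling updates (2.5) concrete, a minimal NumPy sketch of MoDo is given below. The function `grad_fn(x, z)` is a placeholder returning the \(d\times M\) stochastic Jacobian on one sample, the simplex projection is a standard sorting-based routine, and the two-objective quadratic toy problem is purely illustrative; this is a sketch of Algorithm 1 under these assumptions, not the released implementation.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (repeated here for self-containment)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v + (1.0 - css[rho]) / (rho + 1.0), 0.0)

def modo_step(x, lam, grad_fn, samples, alpha, gamma):
    """One MoDo iteration (2.5): two independent samples give an unbiased estimate
    of the weight-update direction (2.5a); a third sample drives the model update
    along -grad(x) @ lam (2.5b)."""
    z1, z2, z3 = samples
    G1, G2, G3 = grad_fn(x, z1), grad_fn(x, z2), grad_fn(x, z3)   # each d x M
    lam = project_simplex(lam - gamma * (G1.T @ G2) @ lam)        # weight update (2.5a)
    x = x - alpha * G3 @ lam                                      # model update (2.5b)
    return x, lam

# toy problem: M = 2 quadratic objectives f_m(x) = 0.5 * ||x - c_m||^2 with noisy gradients
rng = np.random.default_rng(0)
centers = np.array([[1.0, 0.0], [0.0, 1.0]])
grad_fn = lambda x, z: np.stack([x - c + 0.05 * z for c in centers], axis=1)   # d x M Jacobian
x, lam = np.zeros(2), np.ones(2) / 2
for t in range(2000):
    x, lam = modo_step(x, lam, grad_fn, rng.normal(size=(3, 2)), alpha=0.05, gamma=0.01)
print(x, lam)   # x approaches the Pareto front between the two centers
```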
## 3 Optimization, Generalization and Three-Way Trade-Off
This section presents the theoretical analysis of the PS population risk associated with the MoDo algorithm, where the analysis of generalization error is in Section 3.1 and that of optimization error is in Section 3.2. A summary of our main results are given in Table 1.
### Multi-objective generalization and uniform stability
We first bound the expected PS generalization error by the generalization in gradients in Proposition 2, then introduce the MOL uniform stability and establish its connection to the generalization in gradients. Finally, we bound the MOL uniform stability.
**Proposition 2**.: _With \(\|\cdot\|_{\mathrm{F}}\) denoting the Frobenius norm, \(R_{\mathrm{gen}}(A(S))\) in (2.2) can be bounded by_
\[\mathbb{E}_{A,S}[R_{\mathrm{gen}}(A(S))]\leq\mathbb{E}_{A,S}[\|\nabla F(A(S))- \nabla F_{S}(A(S))\|_{\mathrm{F}}]. \tag{3.1}\]
With Proposition 2 in place, we next introduce the concept of MOL uniform stability tailored to MOL problems and show that the PS generalization error in MOL can be bounded by the MOL uniform stability. We then analyze this bound in the general nonconvex case and in the strongly convex case, respectively.
**Definition 2** (MOL uniform stability).: _A randomized algorithm \(A:\mathcal{Z}^{n}\mapsto\mathbb{R}^{d}\), is MOL-uniformly stable with \(\epsilon_{\mathrm{F}}\) if for all neighboring datasets \(S,S^{\prime}\) that differ in at most one sample, we have_
\[\sup_{z}\ \mathbb{E}_{A}\big{[}\|\nabla F_{z}(A(S))-\nabla F_{z}(A(S^{\prime})) \|_{\mathrm{F}}^{2}\big{]}\leq\epsilon_{\mathrm{F}}^{2}. \tag{3.2}\]
Next we show the relation between the upper bound of PS generalization error in (3.1) and MOL uniform stability in Proposition 3.
**Proposition 3** (MOL uniform stability and generalization).: _Assume for any \(z\), the function \(F_{z}(x)\) is differentiable. If a randomized algorithm \(A:\mathcal{Z}^{n}\mapsto\mathbb{R}^{d}\) is MOL-uniformly stable with \(\epsilon_{\mathrm{F}}\), then_
\[\mathbb{E}_{A,S}[\|\nabla F(A(S))-\nabla F_{S}(A(S))\|_{\mathrm{F}}]\leq 4 \epsilon_{\mathrm{F}}+\sqrt{n^{-1}\mathbb{E}_{S}\left[\mathbb{V}_{z\sim \mathcal{D}}(\nabla F_{z}(A(S)))\right]} \tag{3.3}\]
_where the variance is defined as \(\mathbb{V}_{z\sim\mathcal{D}}(\nabla F_{z}(A(S)))=\mathbb{E}_{z\sim\mathcal{D} }\big{[}\|\nabla F_{z}(A(S))-\mathbb{E}_{z\sim\mathcal{D}}[\nabla F_{z}(A(S) )]\|_{\mathrm{F}}^{2}\big{]}\)._
Proposition 3 establishes a connection between the upper bound of the PS generalization error and the MOL uniform stability, where the former can be bounded above by the latter plus the variance
| Assumption | Method | Optimization | Generalization | Risk | CA Distance |
| --- | --- | --- | --- | --- | --- |
| NC, Lip-C, Lip-S | Static | \((\alpha T)^{-\frac{1}{2}}+\alpha^{\frac{1}{2}}\) | \(T^{\frac{1}{2}}n^{-\frac{1}{2}}\) | \(n^{-\frac{1}{2}}\) | ✗ |
| NC, Lip-C, Lip-S | Dynamic | \((\alpha T)^{-\frac{1}{2}}+\alpha^{\frac{1}{2}}+\gamma^{\frac{1}{2}}\) | \(T^{\frac{1}{2}}n^{-\frac{1}{2}}\) | \(n^{-\frac{1}{2}}\) | \((\gamma T)^{-1}+(\alpha\gamma)^{-\frac{1}{2}}\) |
| SC, Lip-S | Static | \((\alpha T)^{-\frac{1}{2}}+\alpha^{\frac{1}{2}}\) | \(n^{-\frac{1}{2}}\) | \(n^{-\frac{1}{2}}\) | ✗ |
| SC, Lip-S | Dynamic | \((\alpha T)^{-\frac{1}{2}}+\alpha^{\frac{1}{2}}+\gamma^{\frac{1}{2}}\) | \(n^{-\frac{1}{2}}\) if \(\gamma=\mathcal{O}(T^{-1})\); \(T^{\frac{1}{2}}n^{-\frac{1}{2}}\) o.w. | \(n^{-\frac{1}{2}}\) if \(\gamma=\mathcal{O}(T^{-1})\); \(n^{-\frac{1}{2}}\) o.w. | \((\gamma T)^{-1}+(\alpha\gamma)^{-\frac{1}{2}}\) |

Table 1: Comparison of optimization error, generalization error, and population risk under different assumptions for static and dynamic weighting. "NC" and "SC" stand for nonconvex and strongly convex; "Lip-C" and "Lip-S" stand for Lipschitz continuous and Lipschitz smooth, respectively.
of the stochastic gradient over the population data distribution. It is worth noting that the standard argument of bounding the generalization error measured in function values by the uniform stability measured in function values [12, Theorem 2.2] is not applicable here, as the summation and norm operators are not exchangeable. More explanation is given in the proof in Appendix B.1.
**Theorem 1** (MOL uniform stability and PS generalization error of MoDo in nonconvex case).: _If \(\sup_{z}\mathbb{E}_{A}\left[\|\nabla F_{z}(A(S))\|_{\mathrm{F}}^{2}\right]\leq G ^{2}\) for any \(S\), then the MOL uniform stability, i.e., \(\epsilon_{\mathrm{F}}^{2}\) in Definition 2 is bounded by \(\epsilon_{\mathrm{F}}^{2}\leq 4G^{2}T/n\). And the PS generalization error \(\mathbb{E}_{A,S}[R_{\mathrm{gen}}(A(S))]=\mathcal{O}(T^{\frac{1}{2}}n^{-\frac{ 1}{2}})\)._
Compared to the function value uniform stability upper bound in [12, Theorem 3.12] for nonconvex single-objective learning, Theorem 1 does not require a step size decay \(\alpha_{t}=\mathcal{O}(1/t)\), thus can enjoy at least a polynomial convergence rate of optimization errors w.r.t. \(T\). Combining Theorem 1 with Proposition 3, to ensure the generalization error is diminishing with \(n\), one needs to choose \(T=o(n)\), which lies in the "early stopping" regime and results in potentially large optimization error. We then provide a tighter bound in the strongly convex case that allows a larger choice of \(T\). Below we list the standard assumptions used to derive the introduced MOL stability.
**Assumption 1** (Lipschitz continuity of \(\nabla F_{z}(x)\)).: _For all \(m\in[M]\), \(\nabla f_{z,m}(x)\) is \(\ell_{f,1}\)-Lipschitz continuous for all \(z\). And \(\nabla F_{z}(x)\) is \(\ell_{F,1}\)-Lipschitz continuous in Frobenius norm for all \(z\)._
**Assumption 2**.: _For all \(m\in[M]\), \(z\in\mathcal{Z}\), \(f_{z,m}(x)\) is \(\mu\)-strongly convex w.r.t. \(x\), with \(\mu>0\)._
Note that in the strongly convex case, the gradient norm \(\|\nabla F_{z}(x)\|_{\mathrm{F}}\) can be unbounded in \(\mathbb{R}^{d}\). Therefore, one cannot assume Lipschitz continuity of \(f_{z,m}(x)\) w.r.t. \(x\in\mathbb{R}^{d}\). We address this challenge by showing that \(\{x_{t}\}\) generated by the MoDo algorithm is bounded as stated in Lemma 1. Notably, combining with Assumption 1, we can derive that the gradient norm \(\|\nabla F_{z}(x_{t})\|_{\mathrm{F}}\) is also bounded, which serves as a stepping stone to derive the MOL stability bound.
**Lemma 1** (Boundedness of \(x_{t}\) for strongly convex and smooth objectives).: _Suppose Assumptions 1, 2 hold. For \(\{x_{t}\},t\in[T]\) generated by MoDo algorithm with \(\alpha_{t}=\alpha\) and \(0\leq\alpha\leq\ell_{f,1}^{-1}\), there exists a finite positive constant \(c_{x}\) such that \(\|x_{t}\|\leq c_{x}\). And there exists finite positive constants \(\ell_{f}\), \(\ell_{F}=\sqrt{M}\ell_{f}\), such that for all \(\lambda\in\Delta^{M}\), we have \(\|\nabla F(x_{t})\lambda\|\leq\ell_{f}\), and \(\|\nabla F(x_{t})\|_{\mathrm{F}}\leq\ell_{F}\)._
With Lemma 1, the stability bound and PS generalization is provided below.
**Theorem 2** (MOL uniform stability and PS generalization error of MoDo in strongly convex case).: _Suppose Assumptions 1, 2 hold. Let \(A\) be the MoDo algorithm (Algorithm 1). For the MOL uniform stability \(\epsilon_{F}\) of algorithm \(A\) in Definition 2, if the step sizes \(\alpha_{t}\leq\alpha\leq 1/(2\ell_{f,1})\), and \(\gamma_{t}\leq\gamma\leq\min\{\frac{\mu^{2}}{120\ell_{f}^{2}\ell_{g,1}},\frac{ 1}{8(4\ell_{f}^{2}+2\ell_{g,1})}\}/T\), then it holds that_
\[\epsilon_{\mathrm{F}}^{2}\leq\frac{48}{\mu n}\ell_{f}^{2}\ell_{F,1}^{2}\Big{(} \alpha+\frac{12+4M\ell_{f}^{2}}{\mu n}+\frac{10M\ell_{f}^{4}\gamma}{\mu}\Big{)} \quad\text{and}\quad\mathbb{E}_{A,S}[R_{\mathrm{gen}}(A(S))]=\mathcal{O}(n^{- \frac{1}{2}}). \tag{3.4}\]
_And there exists functions \(F_{z}(x)\) that satisfy Assumptions 1, 2, and neighboring datasets \(S\), \(S^{\prime}\) that differ in at most one sample such that_
\[\mathbb{E}_{A}\big{[}\|\nabla F_{z}(A(S))-\nabla F_{z}(A(S^{\prime}))\|_{ \mathrm{F}}^{2}\big{]}\geq\frac{M\mu^{2}}{256n^{2}}. \tag{3.5}\]
Theorem 2 provides both upper and lower bounds for the MOL uniform stability. In this case, we choose \(\alpha=\Theta(T^{-\frac{1}{2}})\), \(\gamma=o(T^{-1})\), and \(T=\Theta(n^{2})\) to minimize the PS population risk upper bound, as detailed in Section 3.3. With this choice, the MOL uniform stability upper bound matches the lower bound at the order of \(n^{-2}\), suggesting that our bound is tight. The generalization error bound in (3.4) is a direct implication of the MOL uniform stability bound in (3.4) together with Propositions 2 and 3. It states that the PS generalization error of MoDo is \(\mathcal{O}(n^{-\frac{1}{2}})\), which matches the generalization error of static weighting up to a constant coefficient [19]. Our result also indicates that when all the objectives are strongly convex, choosing small step sizes \(\alpha\) and \(\gamma\) can benefit the generalization error.
### Multi-objective optimization error
In this section, we bound the multi-objective PS optimization error \(\min_{\lambda\in\Delta^{M}}\|\nabla F_{S}(x)\lambda\|\)[7, 26, 8]. As discussed in Section 2.2, this measure being zero implies the model \(x\) achieves a Pareto stationarity for the empirical problem.
Below we list an additional standard assumption used to derive the optimization error.
**Assumption 3** (Lipschitz continuity of \(F_{z}(x)\)).: _For all \(m\in[M]\), \(f_{z,m}(x)\) are \(\ell_{f}\)-Lipschitz continuous for all \(z\). And \(F_{z}(x)\) are \(\ell_{F}\)-Lipschitz continuous in Frobenius norm for all \(z\)._
**Lemma 2** (Distance to CA direction).: _Suppose either: 1) Assumptions 1, 3 hold; or 2) Assumptions 1, 2 hold, with \(\ell_{f}\) and \(\ell_{F}\) defined in Lemma 1. For \(\{x_{t}\},\{\lambda_{t}\}\) generated by MoDo, it holds that_
\[\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{A}[\|d_{\lambda_{t}}(x_{t})-d(x_{t})\|^{ 2}]\leq\frac{4}{\gamma T}+6\sqrt{M\ell_{f,1}\ell_{f}^{2}\frac{\alpha}{\gamma} }+\gamma M\ell_{f}^{4}. \tag{3.6}\]
Lemma 2 analyzes convergence to the CA direction using the measure introduced in Section 2.2. By choosing, e.g., \(\alpha=\Theta(T^{-\frac{3}{4}})\) and \(\gamma=\Theta(T^{-\frac{1}{4}})\), the RHS of (3.6) converges at a rate of \(\mathcal{O}(T^{-\frac{1}{4}})\).
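To make this rate explicit, substituting \(\alpha=\Theta(T^{-\frac{3}{4}})\) and \(\gamma=\Theta(T^{-\frac{1}{4}})\) into the three terms of (3.6) gives

\[\frac{4}{\gamma T}=\Theta(T^{-\frac{3}{4}}),\qquad 6\sqrt{M\ell_{f,1}\ell_{f}^{2}\frac{\alpha}{\gamma}}=\Theta(T^{-\frac{1}{4}}),\qquad \gamma M\ell_{f}^{4}=\Theta(T^{-\frac{1}{4}}),\]

so the bound is dominated by the last two terms and decays at the claimed \(\mathcal{O}(T^{-\frac{1}{4}})\) rate.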
**Theorem 3** (PS optimization error of MoDo).: _Suppose either: 1) Assumptions 1, 3 hold; or, 2) Assumptions 1, 2 hold, with \(\ell_{f}\) and \(\ell_{F}\) defined in Lemma 1. Define \(c_{F}\) such that \(\mathbb{E}_{A}[F_{S}(x_{1})\lambda_{1}]-\min_{x\in\mathbb{R}^{d}}\mathbb{E}_{A }[F_{S}(x)\lambda_{1}]\leq c_{F}\). Considering \(\{x_{t}\}\) generated by MoDo (Algorithm 1), with \(\alpha_{t}=\alpha\leq 1/(2\ell_{f,1})\), \(\gamma_{t}=\gamma\), then under either condition 1) or 2), it holds that_
\[\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{A}\Big{[}\min_{\lambda\in\Delta^{M}}\| \nabla F_{S}(x_{t})\lambda\|\Big{]}\leq\sqrt{\frac{c_{F}}{2\alpha T}}+\sqrt{ \frac{1}{2}\gamma(M\ell_{f}^{4}+2\ell_{F}^{3}\ell_{f})}+\sqrt{\frac{1}{2} \alpha\ell_{f,1}\ell_{f}^{2}}. \tag{3.7}\]
The choice of step sizes \(\alpha=\Theta(T^{-\frac{3}{4}})\) and \(\gamma=\Theta(T^{-\frac{1}{4}})\) that ensures convergence to the CA direction is suboptimal with regard to convergence to Pareto stationarity, as implied by Theorem 3. This exhibits a trade-off between convergence to the CA direction and convergence to Pareto stationarity, which will be discussed in Section 3.3.
### Optimization, generalization and conflict avoidance trade-off
Combining the results in Sections 3.1 and 3.2, we are ready to analyze and summarize the three-way trade-off of MoDo in MOL. With \(A_{t}(S)=x_{t}\) denoting the output of algorithm \(A\) at the \(t\)-th iteration, we can decompose the PS population risk \(R_{\mathrm{pop}}(A_{t}(S))\) as (cf. (2.1) and (3.1))
\[\mathbb{E}_{A,S}\big{[}R_{\mathrm{pop}}(A_{t}(S))\big{]}\leq\mathbb{E}_{A,S} \Big{[}\min_{\lambda\in\Delta^{M}}\|\nabla F_{S}(A_{t}(S))\lambda\|\Big{]}+ \mathbb{E}_{A,S}\Big{[}\|\nabla F(A_{t}(S))-\nabla F_{S}(A_{t}(S))\|_{\mathrm{ F}}\Big{]}.\]
**The general nonconvex case.** Suppose Assumptions 1 and 3 hold. By the generalization error bound in Theorem 1 and the optimization error bound in Theorem 3, the PS population risk of the output of MoDo can be bounded by
\[\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{A,S}\big{[}R_{\mathrm{pop}}(A_{t}(S)) \big{]}=\mathcal{O}\left(\alpha^{-\frac{1}{2}}T^{-\frac{1}{2}}+\alpha^{\frac{1 }{2}}+\gamma^{\frac{1}{2}}+T^{\frac{1}{2}}n^{-\frac{1}{2}}\right). \tag{3.8}\]
**Discussion of trade-off.** Choosing step sizes \(\alpha=\Theta(T^{-\frac{1}{2}})\), \(\gamma=\Theta(T^{-\frac{1}{2}})\), and number of steps \(T=\Theta(n^{\frac{2}{3}})\), the expected PS population risk is \(\mathcal{O}(n^{-\frac{1}{6}})\), which matches the PS population risk upper bound of a general nonconvex single objective in [19]. A clear trade-off in this case is between the optimization error and the generalization error, controlled by \(T\): increasing \(T\) leads to smaller optimization error but larger generalization error, and vice versa. To guarantee convergence to the CA direction, Lemma 2 requires \(\gamma=\omega(\alpha)\), and the optimization error in turn becomes worse, as does the PS population risk. Specifically, choosing \(\alpha=\Theta(T^{-\frac{1}{2}})\), \(\gamma=\Theta(T^{-\frac{1}{4}})\), and \(T=\Theta(n^{\frac{1}{6}})\) leads to an expected PS population risk of \(\mathcal{O}(n^{-\frac{1}{16}})\) and a distance to the CA direction of \(\mathcal{O}(n^{-\frac{1}{16}})\). This shows another trade-off, between conflict avoidance and optimization error.
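As a quick check of the first choice above: with \(\alpha=\gamma=\Theta(T^{-\frac{1}{2}})\), the first three terms of (3.8) satisfy

\[\alpha^{-\frac{1}{2}}T^{-\frac{1}{2}}+\alpha^{\frac{1}{2}}+\gamma^{\frac{1}{2}}=\Theta(T^{-\frac{1}{4}}),\]

and balancing \(T^{-\frac{1}{4}}\) against the remaining term \(T^{\frac{1}{2}}n^{-\frac{1}{2}}\) gives \(T=\Theta(n^{\frac{2}{3}})\), so every term, and hence the PS population risk, is \(\mathcal{O}(n^{-\frac{1}{6}})\).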
**The strongly convex case.** Suppose Assumptions 1 and 2 hold. By the generalization error and the optimization error bounds given in Theorems 2 and 3, MoDo's PS population risk can be bounded by
\[\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{A,S}\big{[}R_{\mathrm{pop}}(A_{t}(S)) \big{]}=\mathcal{O}\left(\alpha^{-\frac{1}{2}}T^{-\frac{1}{2}}+\alpha^{\frac{1 }{2}}+\gamma^{\frac{1}{2}}+n^{-\frac{1}{2}}\right). \tag{3.9}\]
**Discussion of trade-off.** Choosing step sizes \(\alpha=\Theta(T^{-\frac{1}{2}})\), \(\gamma=o(T^{-1})\), and number of steps \(T=\Theta(n^{2})\), the expected PS population risk in gradients is \(\mathcal{O}(n^{-\frac{1}{2}})\). However, choosing \(\gamma=o(T^{-1})\) leads to a large distance to the CA direction according to Lemma 2, because the term \(\frac{4}{\gamma T}\) in (3.6) increases with \(T\). To ensure convergence to the CA direction, one needs \(\gamma=\omega(T^{-1})\), under which the tighter bound in Theorem 2 does not hold but the bound in Theorem 1 still holds. In this case, the PS population risk under a proper choice of \(\alpha,\gamma,T\) is \(\mathcal{O}(n^{-\frac{1}{2}})\) as discussed in the previous paragraph. Therefore, to avoid conflict of gradients, one needs to sacrifice the sample complexity of the PS population risk, demonstrating a trade-off between conflict avoidance and PS population risk.
## 4 Related Works
**Multi-task learning (MTL).** MTL, as one application of MOL, leverages shared information among different tasks to train a model that can perform multiple tasks. MTL has been widely applied to natural language processing, computer vision, and robotics [13; 34; 46; 40]. From the optimization perspective, a simple method for MTL is to take the weighted average of the per-task losses as the objective. The weights can be static or dynamic during optimization, and they can be chosen based on different criteria such as uncertainty [14], gradient norms [4], or task difficulty [11]. These methods are often heuristic and designed for specific applications. Another line of work tackles MTL through MOL [36; 44; 25; 10]. A foundational algorithm in this regard is MGDA [7], which uses dynamic weighting of gradients to obtain a CA direction for all objectives. Stochastic versions of MGDA have been proposed in [26; 48; 8]. Algorithms for finding a set of Pareto optimal models rather than a single one have been proposed in [27; 24; 28; 31; 23; 17; 42; 47; 30].
**Theory of MOL.** Convergence analysis for the deterministic MGDA algorithm has been provided in [9]. One challenge in analyzing stochastic MGDA is the biased estimate of the weighting coefficients during optimization. This can be overcome by increasing the batch size during optimization [26], by using a momentum-based optimization method [48; 8], or by performing double sampling to update the weights as in this work. While the community has a rich history of investigating the convergence of MOL algorithms, their generalization guarantees remain open. Only recently have generalization guarantees for multi-objective learning been theoretically analyzed. In [5], a min-max formulation of the MOL problem is analyzed, where the weights are chosen based on the maximum function values rather than the CA direction, and generalization guarantees are provided based on the Rademacher complexity of the learner's hypothesis class. More recently, [38] provides generalization guarantees for MOL for a more general class of weightings, also based on Rademacher complexity. Different from these works, we study algorithm-dependent generalization bounds based on algorithmic stability to gain a better understanding of how the choice of algorithm hyperparameters, such as the step sizes and the number of iterations, affects the generalization error.
**Algorithm stability and generalization.** Stability analysis dates back to the work [6] in the 1970s. Uniform stability and its relationship with generalization were studied in [2] for the exact minimizer of the ERM problem with strongly convex objectives. The work [12] pioneered the stability analysis for stochastic gradient descent (SGD) algorithms with convex and smooth objectives. The results were extended and refined in [16] with data-dependent bounds, in [3; 33; 20] for non-convex objectives, and in [1; 21] for SGD with non-smooth and convex losses. However, all these studies mainly focus on single-objective learning problems. To the best of our knowledge, there is no existing work on stability and generalization analysis for multi-objective learning problems, and our results are the first of their kind.
## 5 Experiments
In this section, we conduct experiments to further demonstrate the three-way trade-off among the optimization, generalization, and conflict avoidance of MOL algorithms on various MOL tasks.
### Synthetic experiments
Our theory in the strongly convex case is verified through a synthetic experiment in Figure 3. Details are described in Appendix D.1. Figure 3(a) shows that the PS optimization error and PS population risk, as well as the distance to the CA direction, decrease as \(T\) increases, which corroborates Lemma 2 and Theorem 3. In addition, the generalization error in this case does not vary much with \(T\), verifying Theorem 2. In Figure 3(b), the optimization error first decreases and then increases as \(\alpha\) increases, which is consistent with Theorem 3. Notably, we observe a threshold for \(\alpha\) below which the distance to the CA direction converges even when the optimization error does not converge, while
beyond which the distance to the CA direction becomes larger, verifying Lemma 2. Additionally, Figure 3(c) demonstrates that increasing \(\gamma\) enlarges the PS optimization error, the PS generalization error, and thus the PS population risk, but decreases the distance to the CA direction, which supports Lemma 2.
### Image classification experiments
We further verify our theory in the nonconvex case on MNIST [18] image classification using a multi-layer perceptron (MLP) model and three objective functions: cross-entropy, mean square error (MSE), and Huber loss. Implementation details are provided in Appendix D.2. Following Section 2.2, we evaluate the performance in terms of \(R_{\text{pop}}(x)\), \(R_{\text{opt}}(x)\), \(R_{\text{gen}}(x)\), and \(\mathcal{E}_{\text{ca}}(x,\lambda)\). The exact PS population risk is not accessible without the true data distribution. To approximate the PS population risk \(R_{\text{pop}}\), we evaluate \(\min_{\lambda\in\Delta^{M}}\|\nabla F_{S_{\text{bc}}}(x)\lambda\|\) on the testing data set \(S_{\text{bc}}\) that is independent of training data set \(S\). The PS optimization error \(R_{\text{opt}}(x)\) is obtained by \(\min_{\lambda\in\Delta^{M}}\|\nabla F_{S}(x)\lambda\|\), and the PS generalization error \(R_{\text{gen}}(x)\) is obtained by \(R_{\text{pop}}(x)-R_{\text{opt}}(x)\).
We analyze how different choices of \(T\), \(\alpha\), and \(\gamma\) in MoDo affect the three-way trade-off in Figure 4 (averaged over 10 random seeds with half-standard-deviation error bars). Figure 4(a) shows that the optimization error decreases while the generalization error increases with \(T\), aligning with the stability bound in Theorem 1 and the optimization error bound in Theorem 3, and showing the need for early stopping in non-convex settings to improve generalization. Furthermore, the CA direction error decreases with increasing \(T\), which aligns with Lemma 2. Figure 4(b) shows that the PS optimization error and population risk initially decrease and then increase as \(\alpha\) increases, which aligns with Theorem 3 and (3.8). On the other hand, there is an overall increase in the CA direction error with \(\alpha\), which aligns with Lemma 2. Figure 4(c) shows that increasing \(\gamma\) increases the PS population and optimization errors but decreases the CA direction error. This matches our bounds for the PS optimization error in Theorem 3 and the PS population risk in (3.8), and Lemma 2 for small \(\gamma\) values.
Finally, we compare the performance of MoDo with static weighting in multi-task image classification on Office-31 [35] multi-domain dataset with 3 objectives in Table 2. Hyperparameters of all methods
| Method | Amazon Test Acc \(\uparrow\) | DSLR Test Acc \(\uparrow\) | Webcam Test Acc \(\uparrow\) | \(\Delta\mathcal{A}\%\downarrow\) |
| --- | --- | --- | --- | --- |
| Static | 84.62 ± 0.71 | 94.43 ± 0.96 | 97.44 ± 1.20 | 2.56 ± 0.37 |
| MGDA | 79.45 ± 0.11 | **96.56 ± 1.20** | **97.89 ± 0.74** | 3.65 ± 0.64 |
| **MoDo** | **85.13 ± 0.58** | 95.41 ± 0.98 | 96.78 ± 0.65 | **2.26 ± 0.31** |

Table 2: Classification results on the Office-31 dataset.
Figure 4: Optimization, generalization, and CA direction errors of MoDo for MNIST image classification under different \(T\), \(\alpha\), and \(\gamma\). The default parameters are \(T=1000\), \(\alpha=0.1\), and \(\gamma=0.01\).
Figure 3: Optimization, generalization, and CA direction errors of MoDo in the strongly convex case under different \(T\), \(\alpha\), \(\gamma\). The default parameters are \(T=100\), \(\alpha=0.01\), \(\gamma=0.001\).
are chosen based on the best validation performance. Results show that MoDo achieves comparable performance with static weighting and MGDA, if not better, under proper hyperparameters. Here \(\Delta A\%\) denotes average performance degradation compared to dedicated single-task learners. Additional experimental results and details are presented in Appendix D.2 due to space limitations.
## 6 Conclusions
This work studies the three-way trade-off in MOL - among optimization, generalization, and conflict avoidance. Our results show that, in the general nonconvex setting, the traditional trade-off between optimization and generalization depending on the number of iterations also exists in MOL. Moreover, dynamic weighting algorithms like MoDo introduce a new dimension of trade-off in terms of conflict avoidance compared to static weighting. We demonstrate that this three-way trade-off can be controlled by the step size for updating the dynamic weighting parameter and the number of iterations. Proper choice of these parameters can lead to decent performance on all three metrics.
## Broader impacts and limitations
This work has potential impact on designing dynamic weighting algorithms and choosing hyperparameters such as step sizes and number of iterations based on the trade-off for MTL applications such as multi-language translation, multi-agent reinforcement learning, etc. No ethical concerns arise from this work. A limitation of this study is that the theory focuses on a specific algorithm, MoDo, for smooth objectives in unconstrained learning. Future research could explore the theory of other algorithms for non-smooth objectives or in constrained learning, which would be interesting directions to pursue.
|
2309.16630 | On Learning with LAD | The logical analysis of data, LAD, is a technique that yields two-class
classifiers based on Boolean functions having disjunctive normal form (DNF)
representation. Although LAD algorithms employ optimization techniques, the
resulting binary classifiers or binary rules do not lead to overfitting. We
propose a theoretical justification for the absence of overfitting by
estimating the Vapnik-Chervonenkis dimension (VC dimension) for LAD models
where hypothesis sets consist of DNFs with a small number of cubic monomials.
We illustrate and confirm our observations empirically. | C. A. Jothishwaran, Biplav Srivastava, Jitin Singla, Sugata Gangopadhyay | 2023-09-28T17:35:26Z | http://arxiv.org/abs/2309.16630v1 | # On Learning with LAD
###### Abstract
The logical analysis of data, LAD, is a technique that yields two-class classifiers based on Boolean functions having disjunctive normal form (DNF) representation. Although LAD algorithms employ optimization techniques, the resulting binary classifiers or binary rules do not lead to overfitting. We propose a theoretical justification for the absence of overfitting by estimating the Vapnik-Chervonenkis dimension (VC dimension) for LAD models where hypothesis sets consist of DNFs with a small number of cubic monomials. We illustrate and confirm our observations empirically.
**Keywords:** Boolean functions, PAC learning, VC dimension, logical analysis of data.
## 1 Introduction
Suppose we have a collection of observations for a particular phenomenon in the form of data points and the information about its occurrence at each data point. We refer to such a data set as the _training set_. Data points are (feature) vectors whose coordinates are values of variables called _features_. The information on the occurrence or non-occurrence of the phenomenon under consideration can be recorded by labeling each data point as a "false" point or a "true" point, alternatively, by 0 or 1, respectively. Peter L. Hammer [1] proposed using partially defined Boolean functions to explore the cause-effect relationship of a data point's membership in the set of "true" points or "false" points. Crama et al. [2] developed this theory and named it the _Logical Analysis of Data_, or LAD for short. Another noteworthy survey article is by Alexe et al. [3], where
the authors discuss LAD in detail and focus on using LAD for biomedical data analysis.
Here, we consider LAD in the framework of the Probably Approximately Correct (PAC) learning model. We denote the hypothesis set by \(\mathcal{H}\). We restrict \(\mathcal{H}\) to the set of Disjunctive Normal Forms (DNFs) involving a small number of cubic terms and estimate the Vapnik-Chervonenkis (VC) dimension of this hypothesis set. Recently, Chauhan et al. [4] compared LAD with DNNs and CNNs for analyzing intrusion detection data sets. It was observed that LAD with low-degree terms (cubic and degree four) offers classifiers that outperform DNN or CNN classifiers. In this article, we theoretically explain why learning from data using LAD is possible by solely checking the accuracy of the proposed Boolean classifiers within the training set.
## 2 Partially defined Boolean functions and logical analysis of data
Let \(\mathbb{Z}\) be the ring of integers, and \(\mathbb{Z}^{+}\) be the set of positive integers. Consider the set \(\mathcal{B}=\{0,1\}\). For any \(u,v\in\mathcal{B}\), not necessarily distinct, we define disjunction, conjunction, and negation as \(u\lor v=u+v-uv\), \(u\wedge v=uv\), and \(\bar{u}=1-u\), respectively, where the operations on the right-hand side are over \(\mathbb{Z}\). It is customary to write \(uv\) instead of \(u\wedge v\). The set \(\mathcal{B}=\{0,1\}\) along with these operations is a Boolean algebra. For \(n\in\mathbb{Z}^{+}\), let \([n]=\{1,\ldots,n\}\subset\mathbb{Z}^{+}\). The cartesian product of \(n\) copies of \(\mathcal{B}\) is \(\mathcal{B}^{n}=\{\mathbf{x}=(x_{1},\ldots,x_{n}):x_{i}\in\mathcal{B},i\in[n]\}\). The set \(\mathcal{B}^{n}\) is a Boolean algebra where disjunction, conjunction, and negation are induced from those defined over \(\mathcal{B}\) as: \(\mathbf{x}\vee\mathbf{y}=(x_{1}\lor y_{1},\ldots,x_{n}\lor y_{n})\), \(\mathbf{x}\wedge\mathbf{y}=(x_{1}\wedge y_{1},\ldots,x_{n}\wedge y_{n})\), and \(\bar{\mathbf{x}}=(\bar{x}_{1},\ldots,\bar{x}_{n})\), for all \(\mathbf{x},\mathbf{y}\in\mathcal{B}^{n}\).
Let the set of all functions from a set \(\mathcal{X}\) to a set \(\mathcal{Y}\) be denoted by \(\mathcal{F}^{\mathcal{X},\mathcal{Y}}\). In this paper, \(\mathcal{X}=\mathcal{B}^{n}\) and \(\mathcal{Y}=\mathcal{B}\). A function \(f\in\mathcal{F}^{\mathcal{B}^{n},\mathcal{B}}\) is said to be an \(n\)-variable Boolean function. The support or the set of true points of \(f\) is \(T(f)=\{\mathbf{x}\in\mathcal{B}^{n}:f(\mathbf{x})=1\}\), and the set of false points is \(F(f)=\{\mathbf{x}\in\mathcal{B}^{n}:f(\mathbf{x})=0\}\). An \(n\)-variable Boolean function can be completely defined by the ordered pair of sets \((T(f),F(f))\). Clearly, \(T(f)\cup F(f)=\mathcal{B}^{n}\) and \(T(f)\cap F(f)=\emptyset\). Hammer [1] proposed the notion of partially defined Boolean functions as follows.
**Definition 2.1**.: _Let \(T,F\subseteq\mathcal{B}^{n}\) such that \(T\cap F=\emptyset\). Then \((T,F)\) is said to be a partially defined Boolean function, or pdBf, in \(n\) variables._
For a pdBf \((T,F)\), it is understood that \(T\cup F\neq\mathcal{B}^{n}\), otherwise the pdBf \((T,F)\) is a Boolean function. For studying Boolean functions and their various applications, we refer to [5].
This paper considers two-class classification problems with feature vectors in \(\mathcal{B}^{n}\). For a positive integer \(N\), consider a random sample of \(\mathcal{S}=\{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\}\subseteq\mathcal{B}^ {n}\) of size \(N\). Let the label corresponding to the \(\mathbf{x}^{(i)}\) be denoted by \(y^{(i)}\in\mathcal{B}\) for all \(i\in[N]\). The vectors belonging to \(\mathcal{S}\), each augmented with its binary label, form the training set \(\mathcal{D}=\{(\mathbf{x}^{(i)},y^{(i)}):i\in[N]\}\). The sets
\[T_{\mathcal{D}}=\{\mathbf{x}^{(i)}:y^{(i)}=1,i\in[N]\},\text{ and }F_{ \mathcal{D}}=\{\mathbf{x}^{(i)}:y^{(i)}=0,i\in[N]\}\]
are said to be the sets of positive and negative examples, respectively. The pair of subsets \((T_{\mathcal{D}},F_{\mathcal{D}})\) is a partially defined Boolean function over \(\mathcal{B}^{n}\).
**Definition 2.2**.: _A Boolean function \(f:\mathcal{B}^{n}\to\mathcal{B}\) is an extension of a pdBf \((T,F)\), if \(T\subseteq T(f)\) and \(F\subseteq F(f)\)._
LAD uses the pdBf \((T_{\mathcal{D}},F_{\mathcal{D}})\) corresponding to a training set \(\mathcal{D}\) and proposes its extension as an approximation of the target function. Researchers have demonstrated that such extensions, when carefully constructed using particular conjunctive rules, provide excellent approximations of target functions. Boros et al. (2016, page 34, line 7) call them classifiers based on the "most justifiable" rules and further state that these "rules do not seem to lead to overfitting, even though it (the process of finding them) involves an element of optimization." In this paper, we prove this observation within the framework of the PAC learning model. Before proceeding further, we introduce some definitions and notations to describe our results.
A Boolean variable is a variable that can take values from the set \(\mathcal{B}\). Let \(x\) be a Boolean variable. We associate a Boolean variable, \(\bar{x}\), with \(x\) such that for all \(x\in\mathcal{B}\), \(x\bar{x}=0\) and \(x\vee\bar{x}=1\). The symbol \(x^{\alpha}\) is defined by
\[x^{\alpha}=\begin{cases}x&\text{if }\alpha=1\\ \bar{x}&\text{if }\alpha=0.\end{cases}\]
The symbol \(x^{\alpha}\) is said to be a literal.
A LAD algorithm outputs a collection of prime patterns that maximally cover the true points of the pdBf \((T,F)\) obtained from the training set \(\mathcal{D}\). For the technical details, we refer to [1; 2; 3; 5] and other related research results. In this paper, we do not focus on developing efficient algorithms to obtain theories and testing for how accurately they approximate a target function. Instead, we aim to establish the conditions that make learning by Boolean rules feasible. In other words, we would like to understand why we do not usually see overfitting even if the LAD algorithms are designed to maximally fit a theory with the training set data. We propose to do this analysis by using the PAC learning model.
## 3 The PAC learning model
Valiant [7; 8] proposed the theory of Probably Approximately Correct (PAC) in 1984. For an introduction to the concept of the VC dimension, we refer to Abu-Mostafa et al. [9]. Let us denote the set of all possible feature vectors and labels by \(\mathcal{X}=\mathcal{B}^{n}\) and \(\mathcal{Y}=\mathcal{B}\), respectively. We assume that for each phenomenon there is a _target function_\(f\in\mathcal{F}^{\mathcal{X},\mathcal{Y}}\) that correctly labels all the vectors in \(\mathcal{X}\). We consider training sets with binary features and labels of the form \(\mathcal{D}=\{(\mathbf{x}^{(i)},y^{(i)}):i\in[N]\}\) where each \(\mathbf{x}^{(i)}\in\mathcal{B}^{n}\) and \(y^{(i)}\in\mathcal{B}\) are data points and binary labels, respectively. By definition, the target function \(f\in\mathcal{F}^{\mathcal{X},\mathcal{Y}}\) satisfies \(f(\mathbf{x}^{(i)})=y^{(i)}\), for all \(i\in[N]\). Let \(\mathcal{H}\subset\mathcal{F}^{\mathcal{X},\mathcal{Y}}\) be a set of
functions called the hypothesis set. The PAC learning involves approximating the target function \(f\in\mathcal{F}^{\mathcal{X},\mathcal{Y}}\) by a function \(h\in\mathcal{H}\subset\mathcal{F}^{\mathcal{X},\mathcal{Y}}\) such that it has the lowest average error for points inside and outside the training set \(\mathcal{D}\). The hypothesis set ought to be carefully chosen and fixed before the execution of a learning algorithm over a training set.
**Definition 3.1**.: _The in-sample error is the fraction of data points in \(\mathcal{D}\) where the target function \(f\) and \(h\in\mathcal{H}\) disagree. That is,_
\[E_{\mathrm{in}}(h)=\frac{1}{N}\sum_{i\in[N]}\#\{\mathbf{x}^{(i)}:h(\mathbf{x}^ {(i)})\neq f(\mathbf{x}^{(i)})\}. \tag{1}\]
It is realistic to assume that the input space \(\mathcal{X}\) has a probability distribution \(\mu\) defined on it. For an input \(\mathbf{x}\) chosen from this space satisfying the probability distribution \(\mu\), we write \(\mathbf{x}\sim\mu\). The out-of-sample error is the probability that \(h(\mathbf{x})\neq f(\mathbf{x})\) when \(\mathbf{x}\sim\mu\).
**Definition 3.2**.: _The out-of-sample error is_
\[E_{\mathrm{out}}(h)=\Pr_{\mathbf{x}\sim\mu}[h(\mathbf{x})\neq f(\mathbf{x})]. \tag{2}\]
Learning is feasible if the learning algorithm can produce a function \(g\in\mathcal{H}\) such that the in-sample error is close enough to the out-of-sample error asymptotically with increasing sample size \(N\), and \(E_{\mathrm{in}}(g)\) is sufficiently small.
We introduce the notions of the growth function and Vapnik-Chervonenkis dimension to explore the feasibility of learning using LAD.
**Definition 3.3**.: _Let \(\mathcal{H}\) be a hypothesis set for the phenomenon under consideration. For any \(h\in\mathcal{H}\) and \(N\) points \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\in\mathcal{X}\), the \(N\)-tuple \((h(\mathbf{x}^{(1)}),\ldots,h(\mathbf{x}^{(N)}))\) is said to be a dichotomy._
The set of dichotomies generated by \(\mathcal{H}\) on the points \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\in\mathcal{X}\) is \(\mathcal{H}(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)})=\{(h(\mathbf{x}^{(1)}), \ldots,h(\mathbf{x}^{(N)})):h\in\mathcal{H}\}\). If \(\mathcal{H}\) is capable of generating all possible dichotomies on \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\), i.e., \(\mathcal{H}(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)})=\mathcal{B}^{N}\), we say that \(\mathcal{H}\)_shatters_ the set \(\{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\}\).
**Definition 3.4**.: _The growth function for a hypothesis set \(\mathcal{H}\) is_
\[m_{\mathcal{H}}(N)=\max\left\{\left|\mathcal{H}(\mathbf{x}^{(1)},\ldots, \mathbf{x}^{(N)})\right|:\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\in\mathcal{ B}^{n}\right\}. \tag{3}\]
The growth function \(m_{\mathcal{H}}(N)\leq 2^{N}\) since for any \(\mathcal{H}\) and \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\in\mathcal{B}^{n}\), the set \(\mathcal{H}(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)})\subseteq\mathcal{B}^{N}\). The Vapnik-Chervonenkis dimension, i.e., the VC dimension, of a hypothesis set \(\mathcal{H}\) is defined as follows.
**Definition 3.5**.: _The Vapnik-Chervonenkis dimension of a hypothesis set \(\mathcal{H}\), denoted by \(d_{\mathrm{vc}}(\mathcal{H})\), or \(d_{\mathrm{vc}}\), is the largest value of \(N\) for which \(m_{\mathcal{H}}(N)=2^{N}\). If \(m_{\mathcal{H}}(N)=2^{N}\) for all \(N\), then \(d_{\mathrm{vc}}=\infty\)._
The following inequality provides an upper bound for the growth function as a function of the VC dimension and the sample size.
\[m_{\mathcal{H}}(N)\leq\sum_{i=0}^{d_{\mathrm{vc}}}\binom{N}{i} \tag{4}\]
Finally, we state the VC generalization bound.
**Theorem 3.1** (Theorem 2.5, page 53, [9]): _For any tolerance \(\delta>0\),_
\[E_{\mathrm{out}}(g)\leq E_{\mathrm{in}}(g)+\sqrt{\frac{8}{N}\ln\frac{4m_{ \mathcal{H}}(2N)}{\delta}} \tag{5}\]
_with probability \(\geq 1-\delta\)._
## 4 LAD as a PAC learning model
Suppose the data points in our training set \(\mathcal{D}\) involve \(n\) binary features for some positive integer \(n\). We use Boolean functions defined on \(\mathcal{B}^{n}\) to learn from such a training set. First, we consider the hypothesis set \(\mathcal{H}_{n}\) consisting of all cubic monomials in \(n\) binary variables. That is
\[\mathcal{H}_{n}=\{x_{i}^{\alpha_{i}}x_{j}^{\alpha_{j}}x_{k}^{\alpha_{k}}: \alpha_{i},\alpha_{j},\alpha_{k}\in\{0,1\},i<j<k,\text{ for all }i,j,k\in[n]\}. \tag{6}\]
The following theorem estimates the VC dimension of \(\mathcal{H}_{n}\).
**Theorem 4.1**: _Let \(\mathcal{H}_{n}\) be the hypothesis set consisting of cubic monomials. Then the VC dimension_
\[d_{\mathrm{vc}}(\mathcal{H}_{n})=\Theta(\log_{2}n). \tag{7}\]
Suppose \(\mathcal{S}\subset\mathcal{B}^{n}\) contains \(N\) vectors denoted by
\[\mathbf{b}^{(1)} =(b_{1}^{(1)},b_{2}^{(1)},b_{3}^{(1)},\ldots,b_{n}^{(1)})\] \[\mathbf{b}^{(2)} =(b_{1}^{(2)},b_{2}^{(2)},b_{3}^{(2)},\ldots,b_{n}^{(2)})\] \[\cdots \cdots \cdots\] \[\mathbf{b}^{(N)} =(b_{1}^{(N)},b_{2}^{(N)},b_{3}^{(N)},\ldots,b_{n}^{(N)})\]
We set \(b_{1}^{(i)}=b_{2}^{(i)}=1\) for all \(i\in[N]\). The vector corresponding to the binary representation of the non-negative integer \(m\), where \(0\leq m\leq 2^{N}-1\), is denoted by \(\mathbf{y}^{(m)}=(y_{1}^{(m)},\ldots,y_{N}^{(m)})\). Our aim is to construct \(\mathcal{S}\) such that there exist \(2^{N}\) cubic monomials in \(\mathcal{H}_{n}\) each generating a distinct vector in \(\mathcal{B}^{N}\) as the restriction of its truth table on \(\mathcal{S}\).
The vectors \(\mathbf{y}^{(0)}\) and \(\mathbf{y}^{(2^{N}-1)}\) are generated by the monomials \(x_{1}x_{2}x_{3}\) and \(x_{1}x_{2}\overline{x}_{3}\), if we set \(b_{3}^{(i)}=y_{i}^{(0)}\), for all \(i\in[N]\). We note that \(y_{i}^{(0)}=0\), for all \(i\in[N]\). For each non-negative integer \(m\) where \(0\leq m\leq 2^{N}-1\), let \(\overline{m}\) be the integer in the same interval that satisfies the condition \(\mathbf{y}^{\overline{m}}=\overline{\mathbf{y}}^{(m)}\). If we set \(b_{m}^{(i)}=y_{i}^{(m)}\)
for all \(i\in[N]\), the restrictions of the monomials \(x_{1}x_{2}x_{m}\) and \(x_{1}x_{2}\overline{x}_{m}\) to the set \(\mathcal{S}\) are \(\mathbf{y}^{(m)}\) and \(\overline{\mathbf{y}}^{(m)}=\mathbf{y}^{(\overline{m})}\), respectively. Therefore, if \(n=2+2^{N-1}\), the hypothesis set \(\mathcal{H}_{n}\) shatters a sample of size \(N\). This means that if \(n=2+2^{N-1}\), the VC dimension \(d_{\mathrm{vc}}(\mathcal{H}_{n})\) satisfies \(n=2+2^{N-1}\leq 2+2^{d_{\mathrm{vc}}(\mathcal{H}_{n})-1}.\) Taking logarithms on both sides,
\[d_{\mathrm{vc}}(\mathcal{H}_{n})\geq\lfloor\log_{2}(n-2)+1\rfloor. \tag{8}\]
Since the number of distinct cubic monomials is \(2^{3}\times\binom{n}{3}\) we have \(2^{d_{\mathrm{vc}}(\mathcal{H}_{n})}\leq 2^{3}\times\binom{n}{3}\), that is
\[d_{\mathrm{vc}}(\mathcal{H}_{n})\leq\log_{2}(2^{3}\times\binom{n}{3})=3+\log_ {2}(\frac{n(n-1)(n-2)}{3!}). \tag{9}\]
Combining (8) and (9) we have \(d_{\mathrm{vc}}(\mathcal{H}_{n})=\Theta(\log_{2}n)\). \(\Box\)
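The construction in this proof can be checked numerically for small \(N\). The following Python sketch builds the sample \(\mathcal{S}\) as in the proof (two coordinates fixed to \(1\), and one coordinate for each complementary pair of vectors in \(\mathcal{B}^{N}\)) and verifies that the cubic monomials realize all \(2^{N}\) dichotomies; the particular column ordering is an implementation choice.

```python
import itertools
import numpy as np

def shattered_sample(N):
    # Rows are the N sample points; the first two coordinates are 1, and each of the
    # remaining 2^(N-1) coordinates stores one representative (first bit 0) of a
    # complementary pair of vectors in B^N, following the proof of Theorem 4.1.
    reps = [v for v in itertools.product([0, 1], repeat=N) if v[0] == 0]
    cols = [np.ones(N, dtype=int), np.ones(N, dtype=int)] + [np.array(v) for v in reps]
    return np.stack(cols, axis=1)                 # shape (N, 2 + 2^(N-1))

def cubic_dichotomies(S):
    N, n = S.shape
    outs = set()
    for i, j, k in itertools.combinations(range(n), 3):
        for a, b, c in itertools.product([0, 1], repeat=3):
            vals = ((S[:, i] if a else 1 - S[:, i])
                    * (S[:, j] if b else 1 - S[:, j])
                    * (S[:, k] if c else 1 - S[:, k]))
            outs.add(tuple(vals))
    return outs

N = 4
S = shattered_sample(N)                            # n = 2 + 2^(N-1) = 10 binary features
print(len(cubic_dichotomies(S)) == 2 ** N)         # True: H_n shatters a sample of size N
```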
We conjecture that \(d_{\mathrm{vc}}(\mathcal{H}_{n})=\lfloor\log_{2}(n-2)+1\rfloor\); our experimental observations in the next section support this conjecture. Restricting ourselves to asymptotic analysis, we obtain bounds for a larger class of functions.
**Theorem 4.2**.: _Let \(\mathcal{H}_{n}^{(t)}\) be the hypothesis set containing exclusively the DNFs consisting of \(t\) cubic terms in \(n\) binary variables where \(t\leq n/3\). Then_
\[d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})=\Theta(t\log_{2}n). \tag{10}\]
**Proof.** Let \(A(n;3)=2^{3}\times\binom{n}{3}\), the number of cubic monomials in \(n\) binary variables. The number of DNFs in \(\mathcal{H}_{n}^{(t)}\) with \(t\) terms is \(B(n;t,3)=\binom{A(n;3)}{t}\). Since \(A(n;3)=2^{3}\times\binom{n}{3}=\Theta(n^{3})\),
\[B(n;t,3)=\frac{A(n;3)(A(n;3)-1)\ldots(A(n;3)-t+1)}{t!}=\Theta(n^{3t}). \tag{11}\]
The VC dimension \(d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})\) satisfies \(2^{d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})}\leq B(n;t,3)=\Theta(n^{3t})\). Therefore,
\[d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})\leq O(t\log_{2}n). \tag{12}\]
Since \(t\leq n/3\), there are \(t\) mutually exclusive subsets of binary variables each of size three. The lower bound (8) obtained in Theorem 4.1 implies
\[d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})=\Omega(t\log_{2}n). \tag{13}\]
Combining (12) and (13) we have \(d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})=\Theta(t\log_{2}n)\). \(\Box\)
The significance of Theorem 4.2 is that if we have a data set with \(n\) features, we are assured that the VC dimension \(d_{\mathrm{vc}}(\mathcal{H}_{n}^{(t)})=\Theta(t\log_{2}n)\). Therefore, we can start learning from this data set using samples of size \(\Theta(t\log_{2}(n))\). Furthermore, the upper bound given in (4) implies that if the VC dimension is finite then the growth function \(m_{\mathcal{H}}(N)=O(N^{d_{\mathrm{vc}}})\). Therefore, by (5)
\[E_{\mathrm{out}}(g)-E_{\mathrm{in}}(g)\leq\sqrt{\frac{8}{N}\ln\frac{4m_{ \mathcal{H}}(2N)}{\delta}}\leq\sqrt{\frac{8}{N}\ln\frac{k(2N)^{d_{\mathrm{vc}} }}{\delta}} \tag{14}\]
for some positive constant \(k\). This implies that for a hypothesis class with a finite VC dimension, the in-sample error is an accurate approximation of the out-of-sample error for a large enough training size \(N\). As mentioned in [9, page 56], choosing \(N=10\times d_{\text{vc}}\) yields a good enough generalization from the in-sample error to the out-of-sample error.
## 5 Experimental results
Since LAD attempts to approximate a pdBf, we consider the approximation of a random Boolean function using cubic Boolean monomials. In particular, we consider the approximation of a Boolean function \(f:\mathcal{B}^{10}\to\mathcal{B}\) using the hypothesis class \(\mathcal{H}_{10}\) as defined in (6).
We conducted an experiment wherein we chose 100 random Boolean functions. For each function \(f\), 50 training sets, each of size \(N\), were sampled from the truth table of the Boolean function. Hypotheses in \(\mathcal{H}_{10}\) that attained the lowest value of \(E_{\text{in}}\) on each training sample were considered suitable candidates for approximating \(f\). The corresponding \(E_{\text{out}}\) was calculated from the entire truth table. The algorithm of the experiment is as follows:
```
1: Generate a random Boolean function \(f:\mathcal{B}^{10}\to\mathcal{B}\) as truth table
2: Sample \(f\) uniformly at random to collect \(N\) samples
3: Calculate the in-sample error on N samples according to Equation 1 for all functions in \(\mathcal{H}_{10}\)
4: Identify the hypothesis function \(g\) with lowest \(E_{\text{in}}(g)\).
5: Calculate \(E_{\text{out}}(g)\), from the truth tables of \(f\) and \(g\).
6: Store the values of in-sample and out-of-sample errors.
7: Go to Step 2: repeat 50 times
8: Go to Step 1: repeat 100 times
9: Plot a histogram to observe the variation in \(E_{\text{out}}(g)-E_{\text{in}}(g)\)
```
**Algorithm 1** Algorithm for the experiment.
If in Step 4 of the above algorithm, there are multiple functions having minimum \(E_{\text{in}}\), then all of them are considered for the following step. This was observed to be the case in almost all instances.
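A minimal Python sketch of Algorithm 1 follows (histogram plotting omitted; the random seed, sampling without replacement, and the vectorized enumeration of \(\mathcal{H}_{10}\) are implementation choices, not part of the algorithm).

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 10
X = np.array(list(itertools.product([0, 1], repeat=n)))     # all 2^10 = 1024 points of B^10

def literal(col, sign):
    return X[:, col] if sign else 1 - X[:, col]

# Truth tables of all 8 * C(10, 3) = 960 cubic monomials: the hypothesis set H_10.
H = np.array([literal(i, a) * literal(j, b) * literal(k, c)
              for i, j, k in itertools.combinations(range(n), 3)
              for a, b, c in itertools.product([0, 1], repeat=3)])

def one_trial(f, N):
    idx = rng.choice(len(X), size=N, replace=False)          # Step 2: draw N sample points
    e_in = (H[:, idx] != f[idx]).mean(axis=1)                # Step 3: E_in of every hypothesis
    best = np.flatnonzero(e_in == e_in.min())                # Step 4: keep all minimisers (ties)
    e_out = (H[best] != f).mean(axis=1).mean()               # Step 5: E_out on the full truth table
    return e_in.min(), e_out

gaps = []
for _ in range(100):                                         # Steps 1 and 8: 100 random Boolean functions
    f = rng.integers(0, 2, size=len(X))
    for _ in range(50):                                      # Step 7: 50 training samples per function
        e_in, e_out = one_trial(f, N=10)
        gaps.append(e_out - e_in)
print("mean E_out - E_in at N = 10:", np.mean(gaps))         # Step 9 (histogram) omitted here
```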
The experiment described in Algorithm 1 was initially repeated for values around \(N=4\). The reason for this choice is that our conjectured VC dimension of \(\mathcal{H}_{10}\) is \(\lfloor\log_{2}(10-2)+1\rfloor=4\), and these values enable us to observe the connection between the VC dimension and the extent to which learning is possible in the given experiment.
The same experiment was then run for values \(N=10,20,40,60\); this was done to observe the relation between \(E_{\text{out}}\) and \(E_{\text{in}}\) in the \(10\times d_{\text{vc}}\) regime and to confirm whether the in-sample error is indeed a good approximation of the out-of-sample error.
Since we are attempting to approximate randomly generated Boolean functions \(f\), the average value of \(E_{\mathrm{out}}\) is going to be \(0.5\). This is so because a randomly generated Boolean function can evaluate to \(0\) or \(1\) with equal probability at every input value. Therefore, the given experiment is not going to yield good approximations of \(f\). This is fine as we are concerned with observing the connection between the in-sample and out-of-sample errors as the sample size \(N\) increases.
The results of the initial run of the experiment are given in Figure 1. In the cases where the sample size \(N\) is below \(d_{\mathrm{vc}}=4\), it can be seen that \(E_{\mathrm{out}}-E_{\mathrm{in}}\) is around \(0.5\) for a vast number of cases. This is due to the fact that for small sample sizes it is possible to find a large number of hypotheses with near-zero \(E_{\mathrm{in}}\), but many of these hypotheses will invariably be poor approximations, and therefore the in-sample error is a very poor generalization of the out-of-sample error.
This situation changes as we reach \(N=4\), the (conjectured) VC dimension for this problem. There are now some situations where \(E_{\mathrm{out}}-E_{\mathrm{in}}<0.5\). In these cases, \(E_{\mathrm{in}}\) is a relatively better generalization of \(E_{\mathrm{out}}\). This situation improves further as one moves beyond the VC dimension at \(N=5\).
The results of the experiment for the larger values of \(N\) are given in Figure 2; it can now be seen that lower values of \(E_{\mathrm{out}}-E_{\mathrm{in}}\) occur with greater frequency. This enables us to establish confidence intervals for the difference between the two errors. This implies that we are in the regime of probably approximately correct (PAC) learning.
Figure 1: Histograms showing the distribution of \(E_{\mathrm{out}}(g)-E_{\mathrm{in}}(g)\) in the neighbourhood of \(d_{\mathrm{vc}}\).
Therefore, one can state the probability with which \(E_{\text{in}}(g)\) accurately estimates the out-of-sample error for the functions belonging to \(\mathcal{H}_{10}\). This serves as an elementary illustration that learning becomes feasible as the sample size \(N\) increases beyond the VC dimension.
It should be noted that increasing the sample size after a point does not increase the overall accuracy of the approximation. This can be seen by reading off the values of the average in-sample error from Table 1 and observing the corresponding plot from Figure 1 or Figure 2.
## 6 Conclusion
Logical Analysis of Data (LAD), as proposed by Peter L. Hammer, demonstrates remarkably accurate results by fitting Boolean functions to the training set. However, we have not found any research on incorporating LAD into the PAC learning framework, and we initiate such an effort in this article. We believe that research in this direction will help in characterizing the cases when LAD can be used as a feasible learning algorithm. The methods presented here may also let us construct provably unlearnable Boolean functions.
| Sample size (\(N\)) | 2 | 3 | 4 | 5 | 10 | 20 | 40 | 60 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Avg. in-sample error (\(E_{\text{in}}\)) | 0.0072 | 0.0216 | 0.0425 | 0.0662 | 0.1853 | 0.2888 | 0.3478 | 0.3752 |

Table 1: Values of average in-sample errors for different sample sizes
Figure 2: Histograms showing the distribution of \(E_{\text{out}}(g)-E_{\text{in}}(g)\) for larger values of \(N\). |
2309.16156 | On Steinerberger Curvature and Graph Distance Matrices | Steinerberger proposed a notion of curvature on graphs (J. Graph Theory,
2023). We show that nonnegative curvature is almost preserved under three graph
operations. We characterize the distance matrix and its null space after adding
an edge between two graphs. Let $D$ be the graph distance matrix and
$\mathbf{1}$ be the all-one vector. We provide a way to construct graphs so
that the linear system $Dx = \mathbf{1}$ does not have a solution. Let $\eta$
be the Perron eigenvector of $D.$ We provide a lower bound to
$\langle\eta,\mathbf{1}\rangle$ when the graph is a tree. | Wei-Chia Chen, Mao-Pei Tsui | 2023-09-28T04:06:57Z | http://arxiv.org/abs/2309.16156v2 | # On Steinerberger curvature and graph distance matrices
###### Abstract.
Steinerberger proposed a notion of curvature on graphs (J. Graph Theory, 2023). We show that nonnegative curvature is almost preserved under three graph operations. We characterize the distance matrix and its null space after adding an edge between two graphs. Let \(D\) be a graph distance matrix and \(\mathbf{1}\) be the all-one vector. We provide a way to construct graphs so that the linear system \(Dx=\mathbf{1}\) does not have a solution. Let \(\eta\) be the Perron eigenvector of \(D.\) We provide a lower bound to \(\langle\eta,\mathbf{1}\rangle\) when the graph is a tree.
Key words and phrases: Graph, Curvature, Distance Matrix, Perron-Frobenius. 2020 Mathematics Subject Classification: 05C12, 05C50. W.-C. Chen and M.-P. Tsui are supported by NSTC grant 109-2115-M-002-006.
4. We show that if two graphs have the property that \(Dx=\mathbf{1}\) has no solution, then after merging them at a vertex, the new graph has the same property.
5. We provide a lower bound to \(\langle\eta,\mathbf{1}\rangle\) involving the number of leaves when the graph is a tree.
### Definition
Let \(G=(V,E)\) be a finite, connected graph. The curvature proposed by Steinerberger in [13] is a measure \(\mu:V\to\mathbb{R}\) such that for every vertex \(u\in V,\) we have
\[\sum_{v\in V}d(u,v)\mu(v)=|V|,\]
where \(d(u,v)\) is the length of a shortest path from \(u\) to \(v.\) Equivalently, if the vertices are \(V=\{v_{i}:1\leq i\leq n\},\) by considering the vector \((\mu(v_{1}),...,\mu(v_{n})),\) the curvature of a graph is a vector \(w\) satisfying
\[Dw=n\cdot\mathbf{1},\]
where \(D_{ij}=d(v_{i},v_{j})\) is the distance matrix of the graph.
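For concreteness, the curvature can be computed numerically from the shortest-path distance matrix. The sketch below (Python; `networkx` and `numpy` are assumed available) solves \(Dw=n\cdot\mathbf{1}\) in the least-squares sense, so that an inconsistent system shows up as a nonzero residual; when the system is consistent but \(D\) is singular, the routine simply returns one particular solution.

```python
import networkx as nx
import numpy as np

def distance_matrix(G):
    # Shortest-path distance matrix of a connected graph.
    nodes = list(G.nodes())
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    return np.array([[lengths[u][v] for v in nodes] for u in nodes], dtype=float)

def steinerberger_curvature(G):
    D = distance_matrix(G)
    n = D.shape[0]
    w, *_ = np.linalg.lstsq(D, n * np.ones(n), rcond=None)   # least-squares solution of Dw = n*1
    residual = np.linalg.norm(D @ w - n * np.ones(n))        # nonzero residual => no exact solution
    return w, residual

w, res = steinerberger_curvature(nx.cycle_graph(6))
print(np.round(w, 4), res)   # the cycle comes out with a constant curvature vector, residual ~ 0
```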
## 2. Main Results
### Invariance of Total Curvature and Bonnet-Myers Sharpness
The following property of the curvature was proved by Steinerberger as a consequence of von Neumann's Minimax theorem. Inspired by his remarks, we simplify the proof by using linear algebra.
**Theorem 1** ([13]).: _Let \(G\) be a connected graph. Suppose there are \(w_{1},w_{2}\in\mathbb{R}^{n}_{\geq 0}\) so that \(Dw_{1}=Dw_{2}=n\cdot\mathbf{1}.\) Then \(||w_{1}||_{1}=||w_{2}||_{1}.\)_
Proof.: Since \(Dw_{1}=n\cdot\mathbf{1},\) we have \(\mathbf{1}\in\operatorname{Im}(D)=(\operatorname{null}D^{t})^{\perp}=( \operatorname{null}D)^{\perp}.\) Since \(D(w_{1}-w_{2})=\mathbf{0},\) we have \(\langle w_{1}-w_{2},\mathbf{1}\rangle=\mathbf{0}.\) Therefore, we get
\[||w_{1}||_{1}=\langle w_{1},\mathbf{1}\rangle=\langle w_{2},\mathbf{1}\rangle =||w_{2}||_{1}.\]
From the proof above, if we relax the assumption to \(w_{1},w_{2}\in\mathbb{R}^{n},\) we still have \(\langle w_{1},\mathbf{1}\rangle=\langle w_{2},\mathbf{1}\rangle.\)
The discrete Bonnet-Myers theorem in [13] states that if \(G\) has a nonnegative curvature \(w\) so that \(Dw=n\cdot\mathbf{1}\) with \(K=\min_{i}w_{i}\geq 0,\) then
\[\operatorname{diam}G\leq\frac{2n}{||w||_{l^{1}}}\leq\frac{2}{K}.\]
In addition, if \(\operatorname{diam}G\cdot K=2,\) then \(G\) has a constant curvature. Inspired by [3], we find that the Bonnet-Myers sharpness will be preserved under the Cartesian product.
**Proposition**.: _Let \(G_{1},G_{2}\) be connected graphs with curvatures bounded below by \(K_{1},K_{2}\geq 0,\) respectively. Suppose \(G_{1},G_{2}\) are discrete Bonnet-Myers sharp, i.e., \(\operatorname{diam}(G_{1})\cdot K_{1}=\operatorname{diam}(G_{2})\cdot K_{2}=2.\) Then the Cartesian product graph \(G_{1}\square G_{2}\) is discrete Bonnet-Myers sharp._
Proof.: The discrete Bonnet-Myers theorem above implies that \(G_{1}\) and \(G_{2}\) have constant curvature \(K_{1},K_{2}>0.\) By [13, Proposition 2], \(G_{1}\square G_{2}\) has constant curvature \(K>0\) and
\[K=(\frac{1}{K_{1}}+\frac{1}{K_{2}})^{-1}=\frac{K_{1}K_{2}}{K_{1}+K_{2}}.\]
Since \(\operatorname{diam}(G_{1}\square G_{2})=\operatorname{diam}G_{1}+\operatorname{diam}G_{2}=\frac{2}{K_{1}}+\frac{2}{K_{2}}=\frac{2}{K}\) by the sharpness assumption, we have
\[\operatorname{diam}(G_{1}\square G_{2})\cdot K=2.\]
### Bridging, Merging, and Cutting Graphs
Nonnegative curvature will be preserved, except for at most two vertices, under three basic graph operations. Let \(G_{1}\) and \(G_{2}\) be two graphs whose distance matrices are \(D_{1}\) and \(D_{2}\), respectively. Assume that \(G_{i}\) has \(n_{i}\) vertices for \(i=1,2.\) We can create a larger graph \(G\) by adding an edge \(e\) between a vertex of \(G_{1}\) and a vertex of \(G_{2}\). We can also obtain a graph \(H\) by performing an edge contraction on \(e\) in \(G\). We say that \(H\) is obtained by _merging \(G_{1}\) and \(G_{2}\) at a vertex_.
**Theorem 2** (Bridging Graphs).: _Suppose \(G_{1}\) and \(G_{2}\) have nonnegative curvature, namely, \(D_{i}w_{i}=n_{i}\cdot\mathbf{1}\) holds for some \(w_{i}\in\mathbb{R}_{\geq 0}^{n_{i}}\) for \(i=1,2\). Then the graph \(G\) obtained by adding an edge \(e=\{u,v\}\) between \(G_{1}\) and \(G_{2}\) has a curvature nonnegative everywhere except for the two vertices \(u\) and \(v\)._
As we will show, if \(w_{1},w_{2}\) are curvatures of \(G_{1}\) and \(G_{2}\), respectively, then the curvature of \(G\) at \(u\) and at \(v\) are
\[\frac{||w_{2}||_{1}(n_{1}+n_{2})}{n_{1}||w_{2}||_{1}+n_{2}||w_{1}||_{1}+\frac{ 1}{2}||w_{1}||_{1}||w_{2}||_{1}}\cdot((w_{1})_{n_{1}}-\frac{1}{2}||w_{1}||_{1}) \tag{2.1}\]
and
\[\frac{||w_{1}||_{1}(n_{1}+n_{2})}{n_{1}||w_{2}||_{1}+n_{2}||w_{1}||_{1}+\frac{ 1}{2}||w_{1}||_{1}||w_{2}||_{1}}\cdot((w_{2})_{1}-\frac{1}{2}||w_{2}||_{1}), \tag{2.2}\]
respectively. The curvature of \(G\) at the two vertices \(u\) and \(v\) can be negative. For example, consider adding an edge between two cycles \(C_{3}\). The new graph has a unique curvature
\[w=(\frac{12}{11},\frac{12}{11},\frac{-6}{11},\frac{-6}{11},\frac{12}{11}, \frac{12}{11}).\]
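This example is easy to reproduce numerically. The short sketch below (Python; `networkx` and `numpy` assumed available) builds the two triangles joined by an edge, solves \(Dw=6\cdot\mathbf{1}\), and checks a bridge-vertex value against the closed-form expression (2.1); since each row of the distance matrix of \(K_{3}\) sums to \(2\), its curvature is the constant vector \((\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2})\), so \(\|w_{1}\|_{1}=\|w_{2}\|_{1}=\tfrac{9}{2}\).

```python
import networkx as nx
import numpy as np

def distance_matrix(G):
    nodes = list(G.nodes())
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    return np.array([[lengths[u][v] for v in nodes] for u in nodes], dtype=float)

# Two triangles C_3 = K_3 joined by the edge {2, 3}.
G = nx.disjoint_union(nx.complete_graph(3), nx.complete_graph(3))
G.add_edge(2, 3)
w = np.linalg.solve(distance_matrix(G), 6 * np.ones(6))
print(np.round(11 * w))        # expect (12, 12, -6, -6, 12, 12)

# Closed-form value (2.1) at a bridge vertex, with ||w_1||_1 = ||w_2||_1 = 4.5 for K_3.
n1 = n2 = 3
l1 = l2 = 4.5
print(l2 * (n1 + n2) / (n1 * l2 + n2 * l1 + 0.5 * l1 * l2) * (1.5 - 0.5 * l1))   # -6/11
```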
It remains unclear to the authors whether the curvature of \(G\) at \(u\) and at \(v\) are always nonpositive. Is there a graph with nonnegative curvature \(w\), i.e., \(Dw=n\cdot\mathbf{1}\), such that \(w_{i}>\frac{1}{2}||w||_{1}\) for some \(i\)? More generally, is it true that if a graph with a bridge admits a curvature, then the curvature at the vertices of the bridge are always nonpositive?
Figure 1. Adding an edge between the complete graph \(K_{4}\) and the cycle \(C_{6}\).
**Corollary 1**.: _Assume that \(G^{\prime}\) has constant curvature \(K>0\) and \(n=|V(G^{\prime})|.\) Let \(G\) be the graph obtained by adding an edge between two copies of \(G^{\prime}\). The curvature of \(G\) has value \(\frac{(2-n)2K}{4+K}<0\) at the vertices belonging to the edge and \(\frac{4K}{4+K}>0\) at all the other vertices._
The nonnegativity of curvature will be preserved, except for one vertex, when we merge two graphs at this vertex.
**Theorem 3** (Merging Graphs).: _Suppose \(G_{1}\) and \(G_{2}\) have nonnegative curvature so that \(D_{i}w_{i}=n_{i}\cdot\mathbf{1}\) for some \(w_{i}\in\mathbb{R}_{\geq 0}^{n_{i}}\). Then the graph \(H\) obtained by adding an edge between \(G_{1}\) and \(G_{2}\) and performing an edge contraction on this edge has a curvature nonnegative everywhere except for the vertex of the contracted edge._
The proofs of the above theorems are inspired by Bapat's work [1]. By using induction and decomposing the distance matrix into smaller ones, Bapat proves that for a tree, the distance matrix satisfies the equation
\[D\tau=(n-1)\cdot\mathbf{1},\]
where \(\tau=\mathbf{2}-(\deg(v_{1}),\ldots,\deg(v_{n}))^{t}\). We will relate the distance matrix of the larger graphs \(G\) and \(H\) to the distance matrices of \(G_{1}\) and \(G_{2}\) in our proof.
The following theorem states that nonnegative curvature will be preserved when we remove a bridge from a graph.
**Theorem 4** (Cutting Graphs).: _Suppose \(G\) is a connected graph containing a bridge \(e\). Let \(G_{1}\) and \(G_{2}\) be the components after removing \(e\) from \(G\). If \(G\) has a nonnegative curvature then \(G_{1}\) and \(G_{2}\) have a nonnegative curvature. If \(G\) has a constant curvature then \(G_{i}\) has a constant curvature except at the vertices belonging to \(e\)._
Figure 3. Merging the complete graph \(K_{4}\) and the cycle \(C_{6}\) at a vertex.
Figure 2. Adding an edge between two copies of \(K_{5}\).
### Null Space of Graph Distance Matrix
In the previous section, we created a new graph \(G\) by adding an edge between two graphs \(G_{1}\) and \(G_{2}\). In this section, we give a characterization of the null space of the distance matrix of \(G\).
**Theorem 5**.: _Let \(G_{1},G_{2}\) be two connected graphs with \(n_{1}\) and \(n_{2}\) vertices, respectively, each admitting a nonnegative curvature. Let \(G\) be the graph obtained by adding an edge between \(G_{1}\) and \(G_{2}\). Suppose \(D_{G},D_{1},D_{2}\) are the distance matrices of \(G,G_{1},\) and \(G_{2}\), respectively. Then we have_
\[\operatorname{null}D_{G}=\operatorname{null}D_{1}\oplus\operatorname{null}D_ {2}\]
_and_
\[\dim\operatorname{null}D_{G}=\dim\operatorname{null}D_{1}+\dim\operatorname{null}D_{2},\]
_where we canonically embed \(\operatorname{null}D_{i}\) to \(\mathbb{R}^{n_{1}+n_{2}}\) by augmenting zeros._
This implies that
\[\operatorname{rank}D_{G}=\operatorname{rank}D_{1}+\operatorname{rank}D_{2}.\]
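As an illustration of the theorem (using NetworkX, our own tooling), the cycle \(C_{4}\) has a singular distance matrix whose null space is spanned by \((1,-1,1,-1)^{t}\), so bridging two copies of \(C_{4}\) should give \(\operatorname{rank}D_{G}=3+3=6\):

```python
# Rank check: two copies of C_4 joined by an edge.
import numpy as np
import networkx as nx

G1 = nx.cycle_graph(4)
G = nx.disjoint_union(G1, G1)          # vertices 0..3 and 4..7
G.add_edge(3, 4)                       # the bridge
D1 = np.asarray(nx.floyd_warshall_numpy(G1))
DG = np.asarray(nx.floyd_warshall_numpy(G, nodelist=range(8)))

print(np.linalg.matrix_rank(D1), np.linalg.matrix_rank(DG))   # 3 6
```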
### Nonexistence of Curvature
A necessary condition for the curvature to have desirable geometric properties is that the linear system \(Dx=\mathbf{1}\) has a solution. Steinerberger raised the following problem.
**Problem [14].** It seems that for most graphs, the linear system
\(Dx=\mathbf{1}\) tends to have a solution. Why is that?
He gave a sufficient condition for \(Dx=\mathbf{1}\) to have a solution.
**Proposition 1** ([14]).: _Suppose \(D\in\mathbb{R}_{\geq 0}^{n\times n}\) has eigenvalues \(\lambda_{1}>0\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\) and eigenvector \(D\eta=\lambda_{1}\eta.\) If_
\[1-\langle\eta,\frac{1}{\sqrt{n}}\rangle^{2}<\frac{|\lambda_{2}|}{\lambda_{1}- \lambda_{2}}, \tag{2.3}\]
_then the linear system \(Dx=\mathbf{1}\) has a solution._
The proof is correct. However, this condition degenerates to the trivial question of whether \(D\) is invertible. Indeed, if \(\lambda_{1}>0>\lambda_{2}\geq\cdots\geq\lambda_{n}\), then \(D\) is invertible, which already implies that \(Dx=\mathbf{1}\) has a solution. If instead \(\lambda_{1}>0=\lambda_{2}\geq\cdots\geq\lambda_{n}\), then the right-hand side of inequality 2.3 is 0, and the Cauchy-Schwarz inequality implies that condition 2.3 is never satisfied.
By merging two graphs at a vertex, we can create graphs so that \(Dx=\mathbf{1}\) does not have a solution.
**Theorem 6**.: _Let \(G_{1}\) and \(G_{2}\) be two connected graphs so that \(D_{i}x=\mathbf{1}\) does not have a solution. Let \(H\) be obtained by adding an edge between \(G_{1}\) and \(G_{2}\), then performing an edge contraction on this edge. If \(D_{H}\) is the distance matrix of \(H\) then_
\[D_{H}x=\mathbf{1}\]
_does not have a solution._
We use Matlab to generate 10000 Erdős–Rényi random graphs \(G(n,p)\), with parameters \(n=50\) and \(p=1/2.\) We find that for each graph we generated, both the adjacency matrix and the distance matrix have full rank. Let \(Q_{n}\) be the adjacency matrix of a random graph, where self-loops are allowed. In other words, the upper triangular entries and the diagonal entries of \(Q_{n}\) are independent Bernoulli random variables. In their work [2], Costello, Tao and Vu showed that \(Q_{n}\) is invertible with probability tending to 1 as \(n\to\infty\). It was shown in [8] that the distance matrix of \(G(n,p)\) is invertible with probability tending to 1 as \(n\to\infty\).
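For readers who prefer Python, an equivalent illustrative sketch of this experiment (not the original Matlab code, and with far fewer trials for speed) is:

```python
# Check full rank of adjacency and distance matrices of G(n, p) samples.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n, p, trials = 50, 0.5, 100
full_rank = 0
for _ in range(trials):
    G = nx.gnp_random_graph(n, p, seed=int(rng.integers(1 << 30)))
    if not nx.is_connected(G):
        continue                      # the distance matrix is only defined for connected graphs
    A = nx.to_numpy_array(G)
    D = np.asarray(nx.floyd_warshall_numpy(G))
    if np.linalg.matrix_rank(A) == n and np.linalg.matrix_rank(D) == n:
        full_rank += 1
print(full_rank, "of", trials)
```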
### Perron Eigenvector of Distance Matrix
In his work [14], Steinerberger proves that if \(\eta\) is the unit Perron eigenvector of the distance matrix of a graph (the eigenvector associated with the largest eigenvalue, whose entries can be taken nonnegative), then \(\langle\eta,\mathbf{1}\rangle^{2}\geq\frac{n}{2},\) where \(n\) is the number of vertices. We provide a lower bound, involving the number of leaves, when the graph is a tree.
**Proposition 2**.: _Let \(T\) be a tree with \(n\) vertices and \(l\) leaves. Let \(D\) be its distance matrix, \(\lambda\) be its largest positive eigenvalue (Perron root), and \(\eta\) be the Perron eigenvector of \(D\) with \(||\eta||_{2}=1.\) Then_
\[\langle\eta,\mathbf{1}\rangle^{2}>\frac{n}{2}(\frac{\lambda}{\lambda-l+2})+\frac{n-l-1}{\lambda-l+2}.\]
**Example**. The star graph with \(n\) vertices has \(l=n-1\) leaves. The eigenvalue estimate of the Perron root (see for example, [10, Theorem 8.1.22] and [16, Corollary 7]) gives
\[\frac{2(n-1)^{2}}{n}=\frac{\sum_{i,j}D_{ij}}{n}\leq\lambda\leq\max_{i}\sum_{j= 1}^{n}D_{ij}=2n-3.\]
Then the proposition above gives
\[\langle\eta,\mathbf{1}\rangle^{2}>\frac{n}{2}\cdot\frac{\lambda}{\lambda-n+3}\geq\frac{(n-1)^{2}}{n},\]
using \(\lambda\geq 2(n-1)^{2}/n\) and \(\lambda-n+3\leq n.\)
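A small numerical sanity check of the proposition on a star graph (illustrative only; the value \(n=20\) is arbitrary):

```python
# Compare <eta, 1>^2 with the bound of Proposition 2 for a star graph.
import numpy as np

n = 20                                   # one center and l = n - 1 leaves
D = 2 * (np.ones((n, n)) - np.eye(n))    # leaf-leaf distances are 2
D[0, 1:] = 1.0                           # center-leaf distances are 1
D[1:, 0] = 1.0

vals, vecs = np.linalg.eigh(D)
lam = vals[-1]                           # Perron root (largest eigenvalue)
eta = np.abs(vecs[:, -1])                # unit Perron eigenvector, taken nonnegative

l = n - 1
lhs = np.dot(eta, np.ones(n)) ** 2
rhs = (n / 2) * lam / (lam - l + 2) + (n - l - 1) / (lam - l + 2)
print(lhs > rhs, lhs, rhs)               # the inequality of Proposition 2 holds
```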
## 3. Proofs
### Proof of Theorem 2
Proof.: Let \(V(G_{1})=\{u_{i}:1\leq i\leq n_{1}\}\) and \(V(G_{2})=\{v_{j}:1\leq j\leq n_{2}\}.\) The main observation is that if \(u_{i}\in V(G_{1})\) and \(v_{j}\in V(G_{2}),\) then the shortest path from \(u_{i}\) to \(v_{j}\) has to pass through the edge \(\{u,v\}.\) Relabel the vertices so that \(u=u_{n_{1}}\) is the last vertex of \(G_{1}\) and \(v=v_{1}\) is the first vertex of \(G_{2}.\) The observation implies
\[d_{G}(u_{i},v_{j})=d_{G_{1}}(u_{i},u_{n_{1}})+1+d_{G_{2}}(v_{1},v_{j})\]
for \(1\leq i\leq n_{1},1\leq j\leq n_{2}.\) Let \(y\) be the last column of \(D_{1}\) and \(z\) be the first column of \(D_{2}.\) In other words,
\[y_{i} =d_{G_{1}}(u_{i},u_{n_{1}})\] \[z_{j} =d_{G_{2}}(v_{1},v_{j}).\]
If \(D_{G}\) is the distance matrix of \(G,\) we can write
\[D_{G}=\begin{bmatrix}D_{1}&y\mathbf{1}^{t}+\mathbf{1}\mathbf{1}^{t}+\mathbf{ 1}z^{t}\\ \mathbf{1}y^{t}+\mathbf{1}\mathbf{1}^{t}+z\mathbf{1}^{t}&D_{2}\end{bmatrix}.\]
Let \(\alpha,s\in\mathbb{R}\) be chosen later. Let \(e_{n_{1}},e_{n_{1}+1}\in\mathbb{R}^{n_{1}+n_{2}}\) be the \(n_{1}\)-th and the \((n_{1}+1)\)-th standard coordinate vectors, respectively. Define
\[w=\begin{bmatrix}\alpha w_{1}\\ w_{2}\end{bmatrix}+se_{n_{1}}+se_{n_{1}+1}. \tag{3.4}\]
Then
\[D_{G}w=\begin{bmatrix}\alpha n_{1}\mathbf{1}+y\mathbf{1}^{t}w_{2}+\mathbf{1 }\mathbf{1}^{t}w_{2}+\mathbf{1}z^{t}w_{2}\\ \alpha\mathbf{1}y^{t}w_{1}+\alpha\mathbf{1}\mathbf{1}^{t}w_{1}+\alpha z \mathbf{1}^{t}w_{1}+n_{2}\mathbf{1}\end{bmatrix}+\begin{bmatrix}sy\\ s(z+\mathbf{1})\end{bmatrix}+\begin{bmatrix}s(y+\mathbf{1})\\ sz\end{bmatrix}\]
since \(z_{1}=y_{n_{1}}=0.\) By looking at the \(n_{1}\)-th row and the first row of \(D_{i}w_{i}=n_{i}\cdot\mathbf{1},\) we have \(y^{t}w_{1}=n_{1}\) and \(z^{t}w_{2}=n_{2}.\) Therefore,
\[D_{G}w=\left[\begin{matrix}(\alpha n_{1}+\mathbf{1}^{t}w_{2}+n_{2}+s)\mathbf{1 }+(2s+\mathbf{1}^{t}w_{2})y\\ (\alpha n_{1}+n_{2}+\alpha\mathbf{1}^{t}w_{1}+s)\mathbf{1}+(2s+\alpha\mathbf{ 1}^{t}w_{1})z\end{matrix}\right].\]
Define
\[s=\frac{-\mathbf{1}^{t}w_{2}}{2},\alpha=\frac{\mathbf{1}^{t}w_{2}}{\mathbf{1} ^{t}w_{1}}>0.\]
Note that since \(\mathbf{1}^{t}w_{1}>0,\) the number \(\alpha\) is well-defined. Then \(2s=-\mathbf{1}^{t}w_{2}=-\alpha\mathbf{1}^{t}w_{1}.\) Thus, we get
\[D_{G}w=(\alpha n_{1}+\mathbf{1}^{t}w_{2}+n_{2}+s)\mathbf{1}=(\alpha n_{1}+n_{ 2}+\frac{\mathbf{1}^{t}w_{2}}{2})\mathbf{1}.\]
This implies \(G\) admits a curvature after scaling. We have
\[\alpha n_{1}+n_{2}+\frac{\mathbf{1}^{t}w_{2}}{2}>0.\]
Therefore, \(G\) admits a curvature nonnegative everywhere except at the vertices \(u_{n_{1}}\) and \(v_{1}.\)
Equations 2.1 and 2.2 follow from our construction of \(w\). The corollary can be proved by plugging \(\alpha=1\) and \(s=-\frac{nK}{2}\) into equation 3.4.
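To make the construction concrete, the following sketch (ours, for illustration only) builds the vector \(w\) of equation 3.4 for the graph of Figure 1, \(K_{4}\) bridged to \(C_{6}\), and checks that \(D_{G}w\) is a constant vector:

```python
# Construct w as in equation 3.4 for K_4 -- C_6 and verify D_G w is a multiple of 1.
import numpy as np
import networkx as nx

G1, G2 = nx.complete_graph(4), nx.cycle_graph(6)
n1, n2 = 4, 6
w1 = (4.0 / 3.0) * np.ones(n1)          # K_4: (J - I) w = 4 * 1, constant curvature 4/3
w2 = (2.0 / 3.0) * np.ones(n2)          # C_6: distance-matrix row sums are 9, constant curvature 2/3

G = nx.disjoint_union(G1, G2)           # vertices 0..3 (K_4) and 4..9 (C_6)
G.add_edge(3, 4)                        # bridge between u (last vertex of G_1) and v (first vertex of G_2)
D = np.asarray(nx.floyd_warshall_numpy(G, nodelist=range(10)))

alpha = w2.sum() / w1.sum()             # = 3/4
s = -w2.sum() / 2.0                     # = -2
w = np.concatenate([alpha * w1, w2])
w[3] += s
w[4] += s

lhs = D @ w
print(np.allclose(lhs, lhs[0]), lhs[0])   # True 11.0, i.e. D_G w = (alpha*n_1 + n_2 + ||w_2||_1 / 2) * 1
```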
### Proof of Theorem 3
The idea is the same as the proof of Theorem 2. However, the analysis needs to be more careful.
Proof.: Write \(V(G_{1})=\{u_{1},...,u_{n_{1}}\}\) and \(V(G_{2})=\{v_{1},...,v_{n_{2}}\}\) so that the edge added and then contracted is \(\{u_{n_{1}},v_{1}\}.\) Thus, \(u_{n_{1}}\) and \(v_{1}\) will be the identical vertex in \(H.\) Let \(y\in\mathbb{R}^{n_{1}}\) be the last column of \(D_{1}\) and \(z\in\mathbb{R}^{n_{2}-1}\) be the first column of \(D_{2}\) without the first entry. Namely,
\[y=\left[\begin{matrix}d_{G_{1}}(u_{1},u_{n_{1}})\\ \vdots\\ d_{G_{1}}(u_{n_{1}},u_{n_{1}})\end{matrix}\right],z=\left[\begin{matrix}d_{G_{ 2}}(v_{1},v_{2})\\ \vdots\\ d_{G_{2}}(v_{1},v_{n_{2}})\end{matrix}\right].\]
Let \(w\) and \(g\) be nonnegative vectors satisfying \(D_{1}w=n_{1}\cdot\mathbf{1}\) and \(D_{2}g=n_{2}\cdot\mathbf{1}.\) Let \(\bar{g}=(g_{2},...,g_{n_{2}}),\) and \(\bar{D_{2}}\) be the matrix obtained by removing the first column and the first row of \(D_{2}.\) Thus,
\[D_{2}=\left[\begin{matrix}0&z^{t}\\ z&\bar{D_{2}}\end{matrix}\right].\]
The equation \(D_{2}g=n_{2}\cdot\mathbf{1}\) gives \(z^{t}\bar{g}=n_{2}\) and \(\bar{D_{2}}\bar{g}=n_{2}\mathbf{1}-g_{1}z\). Similar to the proof of Theorem 2, the shortest path in \(H\) between \(u_{i}\) and \(v_{j}\) has to pass through the common vertex \(u_{n_{1}}=v_{1}.\) We thus have
\[d_{H}(u_{i},v_{j})=d_{G_{1}}(u_{i},u_{n_{1}})+d_{G_{2}}(v_{1},v_{j})=y_{i}+z_{j-1}\]
for \(1\leq i\leq n_{1}\) and \(2\leq j\leq n_{2}.\) In addition,
\[d_{H}(u_{i},u_{j}) =d_{G_{1}}(u_{i},u_{j})\] \[d_{H}(v_{i},v_{j}) =d_{G_{2}}(v_{i},v_{j})\]
hold for all \(i,j.\) Therefore, we can write the distance matrix of \(H\) as
\[D_{H}=\left[\begin{matrix}D_{1}&y\mathbf{1}^{t}+\mathbf{1}z^{t}\\ \mathbf{1}y^{t}+z\mathbf{1}^{t}&\bar{D_{2}}\end{matrix}\right]\in\mathbb{R}^{(n _{1}+n_{2}-1)\times(n_{1}+n_{2}-1)}.\]
Let \(\alpha,s\in\mathbb{R}\) be chosen later. Define the potential candidate of the curvature
\[w^{\prime}=\begin{bmatrix}\alpha w\\ \mathbf{0}_{n_{2}-1}\end{bmatrix}+\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \bar{g}\end{bmatrix}+(s+g_{1})e_{n_{1}}\in\mathbb{R}^{n_{1}+n_{2}-1},\]
where \(e_{n_{1}}\in\mathbb{R}^{n_{1}+n_{2}-1}\) is the \(n_{1}\)-th standard coordinate vector. Then
\[D_{H}w^{\prime} =\begin{bmatrix}\alpha n_{1}\mathbf{1}\\ \alpha\mathbf{1}y^{t}w+\alpha z\mathbf{1}^{t}w\end{bmatrix}+\begin{bmatrix} y\mathbf{1}^{t}\bar{g}+\mathbf{1}z^{t}\bar{g}\\ n_{2}\mathbf{1}-g_{1}z\end{bmatrix}+(s+g_{1})\begin{bmatrix}y\\ z\end{bmatrix}\] \[=\begin{bmatrix}(\alpha n_{1}+z^{t}\bar{g})\mathbf{1}\\ (\alpha y^{t}w+n_{2})\mathbf{1}\end{bmatrix}+\begin{bmatrix}(\mathbf{1}^{t} \bar{g}+s+g_{1})y\\ (\alpha\mathbf{1}^{t}w+s)z\end{bmatrix}.\]
Note that \(z^{t}\bar{g}=n_{2}\) and \(y^{t}w=n_{1}.\) Set
\[s =-g_{1}-\mathbf{1}^{t}\bar{g}=-\mathbf{1}^{t}g,\] \[\alpha =\frac{-s}{\mathbf{1}^{t}w}=\frac{\mathbf{1}^{t}g}{\mathbf{1}^{t }w}.\]
The fact that \(w,g\) are nonnegative curvature of \(G_{1}\) and \(G_{2},\) respectively, implies \(\mathbf{1}^{t}w>0\) and \(\mathbf{1}^{t}g>0\). Thus, \(\alpha>0\) is well-defined. We then have
\[D_{H}w^{\prime}=(\alpha n_{1}+n_{2})\mathbf{1}.\]
Thus, we have
\[D_{H}(\frac{n_{1}+n_{2}-1}{\alpha n_{1}+n_{2}}w^{\prime})=(n_{1}+n_{2}-1)\cdot \mathbf{1}.\]
This implies \(H\) admits a curvature nonnegative everywhere except at the common vertex \(u_{n_{1}}=v_{1}\).
From our construction, the curvature of \(H\) at the common vertex \(u_{n_{1}}=v_{1}\) is
\[\frac{||g||_{1}(w)_{n_{1}}-||w||_{1}||\bar{g}||_{1}}{||g||_{1}n_{1}+||w||_{1}n _{2}}\cdot(n_{1}+n_{2}-1).\]
### Proof of Theorem 4
Proof.: Let \(D_{i}\) be the distance matrices of \(G_{i}\) for \(i=1,2.\) Write \(V(G_{1})=\{u_{1},...,u_{n_{1}}\}\) and \(V(G_{2})=\{v_{1},...v_{n_{2}}\}\) so that the bridge is \(e=\{u_{n_{1}},v_{1}\}.\) Since \(G\) has a nonnegative curvature, we have
\[D_{G}w=(n_{1}+n_{2})\cdot\mathbf{1}\]
for some \(w\in\mathbb{R}_{\geq 0}^{n_{1}+n_{2}}.\) Write
\[w=\begin{bmatrix}w_{1}\\ w_{2}\end{bmatrix},\]
where \(w_{i}\in\mathbb{R}_{\geq 0}^{n_{i}}.\) Let \(y\) be the last column of \(D_{1}\) and \(z\) be the first column of \(D_{2},\) as in the proof of Theorem 2. Then
\[D_{G}=\begin{bmatrix}D_{1}&y\mathbf{1}^{t}+\mathbf{1}\mathbf{1}^{t}+\mathbf{1} z^{t}\\ \mathbf{1}y^{t}+\mathbf{1}\mathbf{1}^{t}+z\mathbf{1}^{t}&D_{2}\end{bmatrix}.\]
Since \(D_{G}w=(n_{1}+n_{2})\cdot\mathbf{1},\) we get
\[D_{1}w_{1}+y\mathbf{1}^{t}w_{2}+\mathbf{1}\mathbf{1}^{t}w_{2}+ \mathbf{1}z^{t}w_{2} =(n_{1}+n_{2})\cdot\mathbf{1} \tag{3.5}\] \[\mathbf{1}y^{t}w_{1}+\mathbf{1}\mathbf{1}^{t}w_{1}+z\mathbf{1}^{t }w_{1}+D_{2}w_{2} =(n_{1}+n_{2})\cdot\mathbf{1}. \tag{3.6}\]
The last row of equation 3.5, the first row of equation 3.6, together with \(y_{n_{1}}=z_{1}=0\) give
\[y^{t}w_{1}+(z+\mathbf{1})^{t}w_{2} =n_{1}+n_{2} \tag{3.7}\] \[(\mathbf{1}+y)^{t}w_{1}+z^{t}w_{2} =n_{1}+n_{2}. \tag{3.8}\]
Define
\[\bar{w}_{1} =w_{1}+(\mathbf{1}^{t}w_{2})e_{n_{1}}\] \[\bar{w}_{2} =w_{2}+(\mathbf{1}^{t}w_{1})e_{1},\]
where \(e_{n_{1}},e_{1}\) are the \(n_{1}\)-th and the first coordinate vectors in \(\mathbb{R}^{n_{1}}\) and \(\mathbb{R}^{n_{2}}\), respectively. Then
\[D_{1}\bar{w}_{1} =D_{1}w_{1}+\mathbf{1}^{t}w_{2}y=(n_{1}+n_{2}-\mathbf{1}^{t}w_{2}- z^{t}w_{2})\mathbf{1}=(y^{t}w_{1})\mathbf{1}\] \[D_{2}\bar{w}_{2} =D_{2}w_{2}+\mathbf{1}^{t}w_{1}z=(n_{1}+n_{2}-y^{t}w_{1}-\mathbf{ 1}^{t}w_{1})\mathbf{1}=(z^{t}w_{2})\mathbf{1},\]
by equations 3.5 to 3.8.
We claim that \(y^{t}w_{1},z^{t}w_{2}>0.\) Suppose \(y^{t}w_{1}=0.\) Since \(w=\begin{bmatrix}w_{1}\\ w_{2}\end{bmatrix}\) satisfies
\[D_{G}w=(n_{1}+n_{2})\cdot\mathbf{1}\]
and \(D_{G}\) is a nonnegative matrix, we have
\[\mathbf{1}^{t}w=\mathbf{1}^{t}w_{1}+\mathbf{1}^{t}w_{2}>0.\]
Note that equations 3.7 and 3.8 imply \(\mathbf{1}^{t}w_{1}=\mathbf{1}^{t}w_{2}.\) Therefore, \(\mathbf{1}^{t}w_{1}=\mathbf{1}^{t}w_{2}>0.\) Since \(w_{1}\in\mathbb{R}_{\geq 0}^{n_{1}}\) and \(y_{n_{1}}=0,\) we have
\[0<\mathbf{1}^{t}w_{1}\leq y^{t}w_{1}+(w_{1})_{n_{1}}=(w_{1})_{n_{1}}.\]
This implies \(w_{1}=ce_{n_{1}}\) for \(c=(w_{1})_{n_{1}}>0.\) Plugging this into equation 3.5, we get
\[2cy=(n_{1}+n_{2}-\mathbf{1}^{t}w_{2}-z^{t}w_{2})\cdot\mathbf{1}=(y^{t}w_{1}) \cdot\mathbf{1}=\mathbf{0},\]
by equation 3.7. This implies \(c=0\) and \(w_{1}=\mathbf{0},\) contradicting \(\mathbf{1}^{t}w_{1}>0.\) A similar argument shows that \(z^{t}w_{2}>0.\)
Consider
\[w_{1}^{\prime} =\frac{n_{1}}{y^{t}w_{1}}\bar{w}_{1}\] \[w_{2}^{\prime} =\frac{n_{2}}{z^{t}w_{2}}\bar{w}_{2}.\]
Then \(D_{i}w_{i}^{\prime}=n_{i}\cdot\mathbf{1}\) for \(i=1,2.\) Thus, \(G_{i}\) has a nonnegative curvature for \(i=1,2.\) If both \(w_{1}\) and \(w_{2}\) are constant, then by construction, \(w_{1}^{\prime}\) and \(w_{2}^{\prime}\) are constant everywhere except at vertices \(u_{n_{1}}\) and \(v_{1},\) respectively.
### Proof of Theorem 5
We first need a lemma.
**Lemma 1**.: _Let \(G\) be a graph admitting a nonnegative curvature \(w\in\mathbb{R}_{\geq 0}^{n},\) i.e., \(Dw=n\cdot\mathbf{1}.\) Suppose \(Dg=\mathbf{1}\) for some \(g\in\mathbb{R}^{n}.\) Then \(\mathbf{1}^{t}g>0.\)_
Proof.: If the null space of \(D\) is trivial, then \(g=\frac{1}{n}w\in\mathbb{R}_{\geq 0}^{n}\) and therefore \(\mathbf{1}^{t}g>0.\) Otherwise, let \(z_{1},...,z_{k}\) be a basis of \(\operatorname{null}(D).\) Then we can write
\[g=\frac{w}{n}+c_{1}z_{1}+\cdots+c_{k}z_{k}\]
for some coefficients \(c_{i}\). Thus,
\[\mathbf{1}^{t}g=\mathbf{1}^{t}\frac{w}{n}+c_{1}\mathbf{1}^{t}z_{1}+\cdots+c_{k} \mathbf{1}^{t}z_{k}=\mathbf{1}^{t}\frac{w}{n}>0,\]
where we use the fact that \(\mathbf{1}\in\operatorname{Im}(D)=(\operatorname{null}D)^{\perp}.\)
Proof of Theorem 5.: As in the proof of Theorem 2, we write
\[D_{G}=\begin{bmatrix}D_{1}&y\mathbf{1}^{t}+\mathbf{1}\mathbf{1}^{t}+\mathbf{1 }z^{t}\\ \mathbf{1}y^{t}+\mathbf{1}\mathbf{1}^{t}+z\mathbf{1}^{t}&D_{2}\end{bmatrix},\]
where \(y\) is the last column of \(D_{1},\) and \(z\) is the first column of \(D_{2}.\) Since \(G_{1}\) and \(G_{2}\) are nonnegatively curved, \(\mathbf{1}\in\operatorname{Im}D_{i}=(\operatorname{null}D_{i})^{\perp}.\) In addition, by Theorem 2, \(G\) admits a curvature. This implies
\[\mathbf{1}\in\operatorname{Im}D_{G}=(\operatorname{null}D_{G})^{\perp}.\]
If \(\eta\in\operatorname{null}D_{1},\) then \(y^{t}\eta=\mathbf{1}^{t}\eta=0\). This implies \(D_{G}\begin{bmatrix}\eta\\ \mathbf{0}_{n_{2}}\end{bmatrix}=\mathbf{0}.\) Similarly, if \(\xi\in\operatorname{null}D_{2},\) then \(\mathbf{1}^{t}\xi=z^{t}\xi=0\). This implies \(D_{G}\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi\end{bmatrix}=\mathbf{0}.\) Therefore, if \(\{\eta_{1},...,\eta_{k_{1}}\}\) is a basis of \(\operatorname{null}D_{1}\) and \(\{\xi_{1},...,\xi_{k_{2}}\}\) is a basis of \(\operatorname{null}D_{2},\) then
\[\left\{\begin{bmatrix}\eta_{1}\\ \mathbf{0}_{n_{2}}\end{bmatrix},...,\begin{bmatrix}\eta_{k_{1}}\\ \mathbf{0}_{n_{2}}\end{bmatrix},\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi_{1}\end{bmatrix},...,\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi_{k_{2}}\end{bmatrix}\right\}\]
is linearly independent in \(\operatorname{null}D_{G}.\) This shows that
\[\dim\operatorname{null}D_{G}\geq k_{1}+k_{2}=\dim\operatorname{null}D_{1}+ \dim\operatorname{null}D_{2}.\]
On the other hand, suppose \(\begin{bmatrix}\eta\\ \xi\end{bmatrix}\in\operatorname{null}D_{G}.\) Our goal is to show that \(\eta\in\operatorname{null}D_{1}\) and \(\xi\in\operatorname{null}D_{2}\). We have
\[\mathbf{0}_{n_{1}} =D_{1}\eta+y\mathbf{1}^{t}\xi+\mathbf{1}\mathbf{1}^{t}\xi+ \mathbf{1}z^{t}\xi \tag{3.9}\] \[\mathbf{0}_{n_{2}} =\mathbf{1}y^{t}\eta+\mathbf{1}\mathbf{1}^{t}\eta+z\mathbf{1}^{t} \eta+D_{2}\xi\] (3.10) \[0 =\mathbf{1}^{t}\begin{bmatrix}\eta\\ \xi\end{bmatrix} \tag{3.11}\]
By looking at the \(n_{1}\)-th row of the first equation and using \(y_{n_{1}}=0\), we get
\[0=y^{t}\eta+\mathbf{1}^{t}\xi+z^{t}\xi.\]
The first row of the second equation and \(z_{1}=0\) gives
\[0=y^{t}\eta+\mathbf{1}^{t}\eta+z^{t}\xi.\]
Combining these with the third equation, we conclude that
\[\mathbf{1}^{t}\eta=\mathbf{1}^{t}\xi=0.\]
Therefore, equations 3.9 and 3.10 give
\[D_{1}\eta =-(z^{t}\xi)\mathbf{1}\] \[D_{2}\xi =-(y^{t}\eta)\mathbf{1}.\]
Suppose that \(z^{t}\xi\neq 0.\) Since \(G_{1}\) admits a nonnegative curvature, by Lemma 1, we have \(0<\mathbf{1}^{t}\frac{\eta}{-z^{t}\xi}=0,\) a contradiction. Thus, \(z^{t}\xi=0\) and \(D_{1}\eta=\mathbf{0}.\) Similarly, we have \(D_{2}\xi=\mathbf{0}\). Therefore, \(\eta\in\operatorname{null}D_{1}\) and \(\xi\in\operatorname{null}D_{2}\).
We can thus write \(\eta=c_{1}\eta_{1}+\cdots+c_{k_{1}}\eta_{k_{1}}\) and \(\xi=d_{1}\xi_{1}+\cdots+d_{k_{2}}\xi_{k_{2}}\) where \(c_{i},d_{j}\in\mathbb{R}\). This means that
\[\begin{bmatrix}\eta\\ \xi\end{bmatrix}=c_{1}\begin{bmatrix}\eta_{1}\\ \mathbf{0}_{n_{2}}\end{bmatrix}+\cdots+c_{k_{1}}\begin{bmatrix}\eta_{k_{1}}\\ \mathbf{0}_{n_{2}}\end{bmatrix}+d_{1}\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi_{1}\end{bmatrix}+\cdots+d_{k_{2}}\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi_{k_{2}}\end{bmatrix}.\]
Thus, the vectors
\[\begin{bmatrix}\eta_{1}\\ \mathbf{0}_{n_{2}}\end{bmatrix},...,\begin{bmatrix}\eta_{k_{1}}\\ \mathbf{0}_{n_{2}}\end{bmatrix},\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi_{1}\end{bmatrix},...,\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \xi_{k_{2}}\end{bmatrix}\]
form a basis of \(\operatorname{null}D_{G}.\) This implies
\[\dim\operatorname{null}D_{G}=\dim\operatorname{null}D_{1}+\dim\operatorname{ null}D_{2},\]
as desired.
### Proof of Theorem 6
As in the proof of Theorem 3, we assume \(V(G_{1})=\{u_{1},...,u_{n_{1}}\},V(G_{2})=\{v_{1},...,v_{n_{2}}\}\) and \(\{u_{n_{1}},v_{1}\}\) is the edge added and contracted. The condition that \(D_{1}x=\mathbf{1}\) has no solution is equivalent to \(\mathbf{1}\not\in\operatorname{Im}D_{1}=(\operatorname{null}D_{1})^{\perp}.\) This is equivalent to that there is \(\eta\in\operatorname{null}D_{1}\) with \(\langle\eta,\mathbf{1}\rangle\neq 0.\) Similarly, we can find a vector \(\xi\in\operatorname{null}D_{2}\) with \(\langle\xi,\mathbf{1}\rangle\neq 0.\) Our goal is to find a vector \(\zeta\in\operatorname{null}D_{H}\) with \(\langle\zeta,\mathbf{1}\rangle\neq 0.\)
Consider the vector
\[\zeta=\alpha\begin{bmatrix}\eta\\ \mathbf{0}_{n_{2}-1}\end{bmatrix}+\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \bar{\xi}\end{bmatrix}+(s+\xi_{1})e_{n_{1}},\]
where \(\bar{\xi}=(\xi_{2},...,\xi_{n_{2}}),\)\(e_{n_{1}}\) is the \(n_{1}\)-th coordinate vector in \(\mathbb{R}^{n_{1}+n_{2}-1},\) and \(\alpha,s\in\mathbb{R}\) are to be chosen. As in the proof of Theorem 3, let \(y\in\mathbb{R}^{n_{1}}\) be the last column of \(D_{1}\) and \(z\in\mathbb{R}^{n_{2}-1}\) be the first column of \(D_{2}\) without the first entry. Write
\[D_{H}=\begin{bmatrix}D_{1}&y\mathbf{1}^{t}+\mathbf{1}z^{t}\\ \mathbf{1}y^{t}+z\mathbf{1}^{t}&\bar{D_{2}}\end{bmatrix}\in\mathbb{R}^{(n_{1} +n_{2}-1)\times(n_{1}+n_{2}-1)},\]
where
\[D_{2}=\begin{bmatrix}0&z^{t}\\ z&\bar{D_{2}}\end{bmatrix}.\]
Then \(D_{2}\xi=\mathbf{0}\) implies \(z^{t}\bar{\xi}=0\) and \(\xi_{1}z+\bar{D_{2}}\bar{\xi}=\mathbf{0}_{n_{2}-1}.\) Therefore,
\[D_{H}\zeta=\begin{bmatrix}\mathbf{0}_{n_{1}}\\ \alpha\mathbf{1}y^{t}\eta+\alpha z\mathbf{1}^{t}\eta\end{bmatrix}+\begin{bmatrix} y\mathbf{1}^{t}\bar{\xi}\\ -\xi_{1}z\end{bmatrix}+(s+\xi_{1})\begin{bmatrix}y\\ z\end{bmatrix}.\]
Note that \(D_{1}\eta=\mathbf{0}_{n_{1}}\) gives \(y^{t}\eta=0.\) Thus,
\[D_{H}\zeta=\begin{bmatrix}(\mathbf{1}^{t}\xi+s)y\\ (\mathbf{1}^{t}\eta\alpha+s)z\end{bmatrix}.\]
Set \(s=-\mathbf{1}^{t}\xi\) and \(\alpha=\frac{\mathbf{1}^{t}\xi}{\mathbf{1}^{t}\eta}.\) Note that \(\alpha\) is well-defined since \(\mathbf{1}^{t}\eta\neq 0.\) Then
\[D_{H}\zeta=\mathbf{0}_{n_{1}+n_{2}-1}.\]
In addition, we have
\[\langle\zeta,\mathbf{1}\rangle=\alpha\mathbf{1}^{t}\eta+\mathbf{1}^{t}\xi+s= \mathbf{1}^{t}\xi\neq 0.\]
Therefore,
\[\mathbf{1}\not\in(\operatorname{null}D_{H})^{\perp}=\operatorname{Im}D_{H}.\]
This implies that \(D_{H}x=\mathbf{1}\) does not have a solution.
### Proof of Proposition 2
We follow the idea of the proof of the theorem in [14], with some modifications.
Proof.: Let \(V=\{u_{1},...,u_{n}\}\) be the vertices of the tree \(T.\) Let \(L\subset V\) be the leaves of \(T.\) Assume \(u_{k}\) is not a leaf with \(k\) fixed. Then
\[\lambda =\sum_{i,j=1}^{n}d(u_{i},u_{j})\eta_{i}\eta_{j}\] \[=\sum_{i\neq j}d(u_{i},u_{j})\eta_{i}\eta_{j}\] \[\leq\sum_{i\neq j}(d(u_{i},u_{k})+d(u_{k},u_{j}))\eta_{i}\eta_{j}\] \[=\sum_{i,j=1}^{n}(d(u_{i},u_{k})+d(u_{k},u_{j}))\eta_{i}\eta_{j}- \sum_{i=1}^{n}2d(u_{i},u_{k})\eta_{i}^{2}.\]
Therefore,
\[\lambda+2\sum_{i=1}^{n}d(u_{i},u_{k})\eta_{i}^{2}\leq 2\langle\eta,\mathbf{1} \rangle\lambda\eta_{k}.\]
Note that
\[\sum_{i=1}^{n}d(u_{i},u_{k})\eta_{i}^{2}=\sum_{i\neq k}d(u_{i},u_{k})\eta_{i}^ {2}\geq\sum_{i\neq k}\eta_{i}^{2}=||\eta||^{2}-\eta_{k}^{2}=1-\eta_{k}^{2}.\]
Thus, we get
\[\lambda+2-2\eta_{k}^{2}\leq 2\langle\eta,\mathbf{1}\rangle\lambda\eta_{k}.\]
Rearranging the terms and summing \(k\) over all non-leaves, we get
\[\lambda(n-l)\leq 2\langle\eta,\mathbf{1}\rangle\lambda\sum_{k:u_{k}\notin L} \eta_{k}+2\sum_{k:u_{k}\notin L}\eta_{k}^{2}-2(n-l). \tag{3.12}\]
On the other hand, suppose \(u_{k}\in L\) is a leaf with \(k\) fixed. If \(i,j\neq k\) then
\[d(u_{i},u_{j})\leq d(u_{i},u_{k})+d(u_{k},u_{j})-2.\]
To see this, assume that \(u_{k}\) is adjacent to the vertex \(u_{k^{\prime}}.\) Then
\[d(u_{i},u_{k}) =d(u_{i},u_{k^{\prime}})+1\] \[d(u_{j},u_{k}) =d(u_{j},u_{k^{\prime}})+1.\]
Thus,
\[d(u_{i},u_{j})\leq d(u_{i},u_{k^{\prime}})+d(u_{j},u_{k^{\prime}})=d(u_{i},u_ {k})+d(u_{j},u_{k})-2.\]
Then we have
\[\lambda =\sum_{i,j}\eta_{i}\eta_{j}d(u_{i},u_{j})\] \[\leq\sum_{i,j\neq k}\eta_{i}\eta_{j}(d(u_{i},u_{k})+d(u_{k},u_{j} )-2)+2\sum_{i\neq k}\eta_{i}\eta_{k}d(u_{i},u_{k})+\eta_{k}^{2}d(u_{k},u_{k})\] \[=2(\langle\eta,\mathbf{1}\rangle-\eta_{k})\lambda\eta_{k}-2( \langle\eta,\mathbf{1}\rangle-\eta_{k})^{2}+2\lambda\eta_{k}^{2}\] \[=(2\lambda+4)\eta_{k}\langle\eta,\mathbf{1}\rangle-2\langle\eta,\mathbf{1}\rangle^{2}-2\eta_{k}^{2}.\]
By summing \(k\) over all leaves, we get
\[\lambda l\leq(2\lambda+4)\langle\eta,\mathbf{1}\rangle\sum_{k:u_{k}\in L}\eta_{k} -2\langle\eta,\mathbf{1}\rangle^{2}l-2\sum_{k:u_{k}\in L}\eta_{k}^{2}. \tag{3.13}\]
Thus, adding equations 3.12 and 3.13, we get
\[\lambda n\leq(2\lambda-2l)\langle\eta,\mathbf{1}\rangle^{2}+4\langle\eta, \mathbf{1}\rangle\sum_{k:u_{k}\in L}\eta_{k}+2(\sum_{k:u_{k}\not\in L}\eta_{k} ^{2}-\sum_{k:u_{k}\in L}\eta_{k}^{2})-2(n-l).\]
Since
\[\sum_{k:u_{k}\not\in L}\eta_{k}^{2}-\sum_{k:u_{k}\in L}\eta_{k}^{2} <\sum_{k:u_{k}\not\in L}\eta_{k}^{2}+\sum_{k:u_{k}\in L}\eta_{k}^{2}=1\] \[\sum_{k:u_{k}\in L}\eta_{k} <\langle\eta,\mathbf{1}\rangle,\]
we get
\[\lambda n<(2\lambda-2l+4)\langle\eta,\mathbf{1}\rangle^{2}+2-2(n-l).\]
Note that \(l\leq n-1\) since \(T\) is a tree. The eigenvalue estimate [10, Theorem 8.1.22] gives
\[\lambda\geq\min_{i}\sum_{j=1}^{n}D_{ij}\geq n-1.\]
Therefore, \(\lambda-l+2>0.\) Thus,
\[\langle\eta,\mathbf{1}\rangle^{2}>\frac{n}{2}(\frac{\lambda}{\lambda-l+2})+ \frac{n-l-1}{\lambda-l+2}.\]
|
2310.04431 | Can neural networks count digit frequency? | In this research, we aim to compare the performance of different classical
machine learning models and neural networks in identifying the frequency of
occurrence of each digit in a given number. It has various applications in
machine learning and computer vision, e.g. for obtaining the frequency of a
target object in a visual scene. We considered this problem as a hybrid of
classification and regression tasks. We carefully create our own datasets to
observe systematic differences between different methods. We evaluate each of
the methods using different metrics across multiple datasets.The metrics of
performance used were the root mean squared error and mean absolute error for
regression evaluation, and accuracy for classification performance evaluation.
We observe that decision trees and random forests overfit to the dataset, due
to their inherent bias, and are not able to generalize well. We also observe
that the neural networks significantly outperform the classical machine
learning models in terms of both the regression and classification metrics for
both the 6-digit and 10-digit number datasets. Dataset and code are available
on github. | Padmaksh Khandelwal | 2023-09-25T03:45:36Z | http://arxiv.org/abs/2310.04431v1 | ## Can Neural Networks Count Digit Frequency?
### Abstract
In this research, we aim to compare the performance of different classical machine learning models and neural networks in identifying the frequency of occurrence of each digit in a given number. It has various applications in machine learning and computer vision, e.g. for obtaining the frequency of a target object in a visual scene. We considered this problem as a hybrid of classification and regression tasks. We carefully create our own datasets to observe systematic differences between different methods. We evaluate each of the methods using different metrics across multiple datasets. The metrics of performance used were the root mean squared error and mean absolute error for regression evaluation, and accuracy for classification performance evaluation. We observe that decision trees and random forests overfit to the dataset, due to their inherent bias, and are not able to generalize well. We also observe that the neural networks significantly outperform the classical machine learning models in terms of both the regression and classification metrics for both the 6-digit and 10-digit number datasets. Dataset and code are available on github.
### Introduction
Some of the fundamental aspects of deep learning were introduced quite early, e.g. backpropagation[1] and deep convolutional neural networks[2]; however, it took an increase in computational power and access to large datasets[3, 4, 5] for them to become mainstream. Recently, these learning techniques have been shown to be successful in tasks like playing the game of Go[6] and even question-answering interactions, e.g. instructGPT[7], which led to the recently popular ChatGPT.
In this paper, we show that it is still not easy to use the recent machine learning models for a simple but important task of counting the frequency of different digits in a given sequence of numbers, e.g. Figure 1 shows that even ChatGPT is not good at this task. This task has several downstream applications, e.g. counting the number of objects detected in a scene[8, 9]. We compare different classical machine learning and neural network-based methods for this task. As part of classical methods, we utilize decision trees[10, 11] and random forests[12, 13, 14]. Thus, in this research work, we try to understand classical machine learning and neural network architectures and their effects.
Decision Tree and Random Forests: A decision tree is created using binary splits, each of which decides the branch to allocate for a data sample. The quality of a split is decided by an impurity-like score, e.g. a "gini"-style score, which here can be taken as the sum of the standard deviations of the target values on each side of the split, weighted by the number of samples on each side[15, 16]; hence the best split is the one with the lowest score. Refer to Figures 6 to 9 to see decision tree structures. Decision trees can face the issue of overfitting, which can be avoided by using random forests[12, 13, 14]. The basic idea behind random forests is to create lots of large decision trees such that their predictions are uncorrelated[14] and then take the average of their predictions, which is also called bagging[9]. There are different approaches to create uncorrelated models, e.g. by training them on different subsets of data, by considering a random subset of columns for each split, etc.[12, 13, 14]. Random forests have been shown to work quite well in practice, which is also evident from this work.
Our major contributions in this work are listed below:
* We systematically create our own datasets to bring out the differences in performance of different methods.
* We carefully split the datasets into training, validation and test sets to test the generalization capabilities of different methods across dataset sizes.
* For fair evaluation of the methods, we do multiple runs of each method to obtain statistical results. We also consider different metrics for both regression-based evaluation and accuracy-based evaluation.
* We also list specific examples to observe the overfitting behavior of decision trees and random forests which is not observed in the neural networks.
* We also perform hyper-parameter tuning of the neural networks and provide our observation as part of the ablation studies.
The training/validation/test split allows for fine-tuning the hyperparameters of the neural networks on the validation set; the tuned models can later be tested on the unseen and unbiased test set, whose samples follow the same distribution as the training and validation sets.
The training set of size 90,000 represents 9% of the total possible 6-digit numbers. This can help us understand the generalization of the performance of machine learning models to unseen 6-digit numbers. To further challenge the generalizability of the models and test their capabilities to learn from limited data, we also considered a 10-digit numbers dataset as a 90,000-sized training set represents only 0.0009% of the total possible 10-digit numbers. We show that this change in the fraction of seen dataset (from 9% to 0.0009%) has the least effect on the performance of the neural networks [1, 2] as compared to the classical machine learning models [10, 11, 12, 13, 14].
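A minimal sketch (our own illustration, not the authors' code) of how such a dataset can be generated; we assume leading zeros are allowed, which is consistent with the 9% and 0.0009% figures quoted above:

```python
# Generate rows of (digit counts, number string), with the number in the rightmost column.
import numpy as np
import pandas as pd

def make_dataset(n_samples, n_digits, seed=0):
    rng = np.random.default_rng(seed)
    digits = rng.integers(0, 10, size=(n_samples, n_digits))                 # leading zeros allowed
    counts = np.stack([(digits == d).sum(axis=1) for d in range(10)], axis=1)
    df = pd.DataFrame(counts, columns=[f"count_{d}" for d in range(10)])
    df["number"] = ["".join(map(str, row)) for row in digits]
    return df

print(make_dataset(3, 6).head())
```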
### Implementation
For the implementation of the different machine learning models, we extensively used Jupyter Notebooks with the _scikit learn_[17] and _fastai_[18] libraries. While _scikit learn_[17] has several built-in classical ML models, _fastai_[18] has implementations of several state-of-the-art deep learning models. Using these libraries help us overcome the challenge of tediously and manually assigning all hyperparameters and thus allows us to quickly experiment with multiple methods and techniques.
We decided to use the decision tree and random forest regressors as classical ML models. Decision trees [10] build regression or classification models in the form of a tree structure. At every node, the algorithm splits the dataset into two subsets so that the "gini" score is minimized, incrementally developing the decision tree; the final result is a tree with decision nodes and leaf nodes. A random forest [13, 14] is a meta-estimator that fits a number of decision trees on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and avoid over-fitting.
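A minimal scikit-learn sketch of fitting these two baselines on the digit-counting task (illustrative only; the feature representation and sizes here are placeholders, not the authors' exact setup):

```python
# Multi-output regression with a decision tree and a random forest.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

# X: input features (here, the individual digits, as in the "modified" dataset);
# Y: 10 regression targets, one per digit count.
X = np.random.randint(0, 10, size=(1000, 6))
Y = np.stack([(X == d).sum(axis=1) for d in range(10)], axis=1)

tree = DecisionTreeRegressor().fit(X, Y)                     # multi-output regression is supported
forest = RandomForestRegressor(n_estimators=100).fit(X, Y)
print(tree.get_n_leaves(), forest.predict(X[:1]))
```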
Figure 2: 6-Digit Original Dataset: a sequence of 6-digit number (rightmost column) and the corresponding count of each digit
The dataset follows a specific labeling pattern, hence we believe that the decision tree could, perhaps, identify the necessary comparisons to perfectly, or nearly perfectly, predict the pattern. Random forest in general is the best performing and the most versatile classical ML model and is a key reason for its widespread popularity and, thus, also stood out as a possibly strong baseline.
Let \(x_{i}\) be the \(i^{th}\) number or sample for 1\(\leq\)\(i\)\(\leq\)\(n\), let \(y_{i}\) be the ground-truth label vector for the \(i^{th}\) number such that \(y_{ij}\) is the count of \(j^{th}\) digit for 0\(\leq\)\(j\)\(\leq\)9, and \(\overset{\wedge}{y_{i}}\) be the predicted vector for the \(i^{th}\) number such that \(y_{ij}^{\ \wedge}\) is the count of \(j^{th}\) digit for 0\(\leq\)\(j\)\(\leq\)9.
The regression performance metrics we consider are root mean squared error and mean absolute error, the two popular metrics in regression, and the classification metric we consider is accuracy. Root mean squared error is calculated as
\[RMSE=\sqrt{\sum_{i=1}^{n}\sum_{j=0}^{l-1}\frac{\left(y_{ij}-\hat{y}_{ij}\right)^{2}}{n\,l}}\]
and the mean absolute error is calculated as
\[MAE=\sum_{i=1}^{n}\sum_{j=0}^{l-1}\frac{\left|y_{ij}-\hat{y}_{ij}\right|}{n\,l}\]
, where \(n\) is the total number of samples (or numbers), \(l\) is the length of output vector (which is 10 for the count of 10 digits), \(y_{i}\) is the \(i^{th}\) ground-truth label vector; and \(\overset{\wedge}{y_{i}}\) is the \(i^{th}\) predicted vector.
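Written out with NumPy (our own sketch; we read the denominator as \(nl\), i.e. an average over all entries of the \(n\times l\) prediction matrix), the two metrics are:

```python
# Regression metrics over n x l arrays of true and predicted digit counts.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))   # average over all n*l entries

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))
```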
Figure 3: 10-Digit Original Dataset: a sequence of 10-digit number (rightmost column) and the corresponding count of each digit
The problem statement can be tackled either using a regression method or classification method. The count of each of the 10 digits is only limited to integers 0 to 6 for the 6-digit set and 0 to 10 for the 10-digit set. However, if we consider a classification method, the presence of different digits would require an excessively complex and yet underperforming multi-class multi-label classification method which may easily overfit the small fraction of real data we have.
Therefore, to tackle this problem, we first implemented multi-class regression models and computed the two error metrics, and then modified the predictions by rounding them to the nearest whole number (predictions less than zero are rounded up to zero, and those greater than the total number of digits are rounded down to the total number of digits, i.e. 6 and 10 respectively). We can therefore also consider an accuracy metric over these rounded predictions, which we define as:
\[Accuracy=\sum_{i=1}^{n}\sum_{j=0}^{l-1}\frac{I\left(y_{ij}=\hat{y}_{ij}\right)}{n\,l},\]
where \(I(\cdot)\) is the indicator function and \(\hat{y}_{ij}\) denotes the rounded prediction.
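A sketch of this rounding-and-clipping step and of the resulting accuracy (our own illustration; `n_digits` is 6 or 10 depending on the dataset):

```python
# "Classification modification": round regression outputs, clip to the valid range, score exact matches.
import numpy as np

def to_counts(y_pred, n_digits):
    return np.clip(np.rint(y_pred), 0, n_digits).astype(int)

def accuracy(y_true, y_pred_counts):
    return np.mean(y_true == y_pred_counts)   # fraction of the n*l entries predicted exactly
```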
All the neural networks were composed of input layers, dense linear layers, and dense non-linear layers which use ReLUs (Rectified Linear Units)[3] as activation functions, and were trained with SGD[1, 2, 3] and Adam optimizers[19]. For reference, a ReLU layer implements a non-linearity in the neural network, which helps it trace a non-linear pattern; the ReLU itself is the identity function for all non-negative values and zero for negative values.
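The networks themselves are implemented with _fastai.tabular_; as an illustration of the architecture described here (dense linear layers with ReLU non-linearities and 10 regression outputs), the following is a minimal plain-PyTorch sketch, with the [96,96,96] layer sizes taken from the hyperparameters reported later:

```python
# Minimal plain-PyTorch sketch of the kind of network used (the paper itself uses fastai.tabular).
import torch
import torch.nn as nn

def make_mlp(n_inputs=6, hidden=(96, 96, 96), n_outputs=10):
    layers, width = [], n_inputs
    for h in hidden:
        layers += [nn.Linear(width, h), nn.ReLU()]
        width = h
    layers += [nn.Linear(width, n_outputs)]             # 10 regression outputs, one per digit count
    return nn.Sequential(*layers)

model = make_mlp()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randint(0, 10, (32, 6)).float()               # a batch of 6-digit inputs (digits as features)
y = torch.stack([(x == d).sum(dim=1) for d in range(10)], dim=1).float()
loss = loss_fn(model(x), y)
loss.backward(); opt.step()
print(loss.item())
```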
### Experiments
The results show that neural networks performed significantly better than the decision tree and random forest models, especially when using the modified dataset. The best results were obtained by using the appropriate number of layers, learning rate, and number of epochs.
Figure 4: 6-Digit Original Dataset with 16 columns: a sequence of 6-digit (rightmost 6 columns) and the corresponding count of each digit (left columns)
Figure 5: 10-Digit Original Dataset with 20 columns: a sequence of 10-digit (rightmost 10 columns) and the corresponding count of each digit (left columns)
The results are shown in Tables 1, 2, 3, and 4. For reference, the following keys are provided to identify the different models:
* Decision Tree 1: Decision Tree trained on the original dataset
* Random Forest 1: Random Forest trained on the original dataset
* Decision Tree 2: Decision Tree trained on the modified dataset
* Random Forest 2: Random Forest trained on the modified dataset
* Neural Network: _fastai.tabular_ implemented neural network[20]
* Neural Network + Embedding: _fastai.tabular_ neural network implemented with a hidden embedding[20].
We report RMSE, MAE and Accuracy metrics for each of the methods. We run each method multiple times on the validation set to obtain statistical errors. The results are consistent for both the 6-digit and 10-digit datasets, and across both the regression and classification metrics. However, it is worth noting that even the neural networks do not reach perfect accuracy, although they come close to 100%.
Table 4: 10-Digit Test Set

| **Method** | **RMSE** | **MAE** | **Accuracy** |
| --- | --- | --- | --- |
| Decision Tree 1 | 0.998 | 0.693 | 43.986% |
| Decision Tree 2 | 1.018 | 0.712 | 43.198% |
| Random Forest 1 | 0.864 | 0.666 | 44.545% |
| Random Forest 2 | 0.620 | 0.495 | 52.827% |
| Neural Network | 0.303 | 0.216 | 97.833% |
| Neural Network + Embedding | 0.274 | 0.208 | 97.920% |

Table 3: 10-Digit Validation Set. For statistical error, each method was run 5 times.

| **Method** | **RMSE** | **MAE** | **Accuracy** |
| --- | --- | --- | --- |
| Decision Tree 1 | 0.997±0.000 | 0.693±0.000 | 44.167% |
| Decision Tree 2 | 1.021±0.001 | 0.714±0.000 | 42.994% |
| Random Forest 1 | 0.862±0.000 | 0.666±0.000 | 44.583% |
| Random Forest 2 | 0.623±0.001 | 0.499±0.001 | 53.019% |
| Neural Network | 0.293±0.025 | 0.221±0.018 | 98.256% |
| Neural Network + Embedding | 0.210±0.014 | 0.162±0.010 | 96.965% |
The neural networks are only slightly affected, or nearly unaffected, by the increase in the number of digits, especially considering the large difference in the proportion of possible values seen for 6-digit versus 10-digit numbers, as mentioned earlier.
Modified dataset effect: It is observed that the modified dataset improves the performance of both decision trees and random forests, although substantially more so for random forests. This could be attributed to the fact that, on the modified dataset, random forests can build many different decision trees over multiple features, instead of over the single feature that produces the one and only possible tree shown in the figures below. The averaging of random forests [12, 13] over several such trees, each fit on a random, unbiased batch of the data, produces slightly different outputs on every run and yields substantially lower error and higher accuracy than on the original dataset.
This could also be the explanation for the decision trees and random forests generating exactly the same performance consistently on the original datasets for both 6-digit and 10-digit numbers across multiple runs, thus, having no change in the statistical error, as only a single decision tree is possible and only a single set of decision trees and their respective batches are being computed in the random forest.
Decision tree overfits: As we used decision tree analysis methods, it was observed that the decision tree had created over 85,000 leaf nodes for the training dataset of 90,000 numbers for both datasets, which is a clear example of an overfitting and memorizing model.
The random forest model performed slightly better than the decision tree model; however, it is worth mentioning that as a random forest creates many decision trees on unbiased data and bags them together, it will always outperform decision trees. It is also worth noting that the decision tree created many numerical splits to make nodes and for inference, it simply outputs the average of the count of each digit across numbers reaching a leaf node during training, refer to Figure 6, Figure 7, Figure 8 and Figure 9, which shows that both the classical ML models clearly could not interpret any patterns.
Figure 7: First 6 nodes of the decision tree for the modified 6-digit training dataset.
Figure 8: First 6 nodes of the decision tree for the original 10-digit training dataset (a): the top part, and (b): the lower part of the decision tree.
Figure 9: First 6 nodes of the decision tree for the modified 10-digit training dataset (a): the top part, and (b): the lower part of the decision tree.
We also experimented with a handful of outlier data points or numbers to observe predictions of the classical ML models.
For the original 6-digit dataset we tried the two pairs of consecutive numbers: (999999, 999998) and (100000, 100001). The decision tree predicted [0, 0, 0, 0, 1, 0, 0, 0, 0, 5] for both numbers of the first pair and [4, 2, 0, 0, 0, 0, 0, 0, 0, 0] for both numbers of the second pair, and random forest after undergoing the classification modification predicted [0, 0, 0, 0, 0, 0, 0, 1, 0, 5] for the first pair and [4, 2, 0, 0, 0, 0, 0, 0, 0, 0] for the second pair. Rerunning the classical ML models on the modified dataset still generated similar results: the decision tree predicted [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 5] for the first pair and [3, 3, 0, 0, 0, 0, 0, 0, 0, 0] for the second pair, and random forest after undergoing the classification modification predicted [0, 0, 0, 0, 0, 0, 0, 1, 5] for the first pair and [3, 3, 0, 0, 0, 0, 0, 0, 0, 0] for the second. Thus these classical methods are making the same prediction for the successive numbers. This shows the inherent limitation of the decision tree and random forest, as they are splitting the nodes based on the numeric values of the numbers and not the count of each digit.
For the 10-digit dataset, we tried the two pairs of numbers: (9999999999, 9999999998) and (1000000000, 1000000001). The decision tree predicted [0, 1, 1, 0, 2, 0, 0, 0, 0, 6] for the former and [4, 2, 0, 1, 2, 0, 0, 0, 1, 0] for the latter. The random forest, in contrast, predicted [0.02, 0.61, 0.71, 0.26, 1.31, 0.2, 0.29, 0.35, 0.75, 5.5] for the former and [3.57, 1.67, 0.52, 0.95, 1.81, 0.05, 0.4, 0.02, 0.57, 0.44] for the latter, which after the classification modification are [0, 1, 1, 0, 1, 0, 0, 0, 1, 6] and [4, 2, 0, 1, 2, 0, 0, 0, 1, 0] respectively. The results are similar for the modified dataset. Evidently, this is another indication of the memorization that these classical ML models underwent and how they failed at pattern recognition, which is even more evident in the 10-digit dataset.
### Observations on Neural Networks
The neural networks, as aforementioned, outperformed the classical ML models in every scenario and for both datasets. According to our hyperparameter optimization, we found the following best values for all the different scenarios, using 16 epochs and [x,y,z] layers, where x, y, and z are respectively the numbers of units in each of the non-linear (ReLU [6]) hidden layers:

* 6-Digit Dataset:
  * Neural Network: Layers = [96,96,96], Learning Rate = 0.01
  * Neural Network with Embedding: Layers = [96,96,96], Learning Rate = 0.01, Embeddings are [10,100] by considering each of the 10 unique digits
* 10-Digit Dataset:
  * Neural Network: Layers = [128,128,128], Learning Rate = 0.01
  * Neural Network with Embedding: Layers = [256,256,256], Learning Rate = 0.005, Embeddings are [10,100] by considering each of the 10 unique digits
It could be hypothesized that, since the neural networks use stochastic gradient descent to minimize the loss by adjusting their parameters (weights) and implement non-linearities through the ReLU layers, they are able to trace out the non-linear pattern very well[1, 2]. The 100-dimensional embeddings were used as an input feature for each of the ten possible digit values. Overall they did not significantly alter the predictions across the different metrics.
It is an intriguing detail that the classical ML models, which gave an accuracy of nearly 90% for 6-digit numbers, although by memorization, fell to less than or nearly 50% accuracy for 10-digit ones. On the contrary, neural networks hardly changed by even 1% in accuracy across datasets. They also produced less than half the errors compared to the best classical ML model baseline, which is the random forest, in both metrics. The following loss curve vs the number of epochs graphs, refer to Figure 10(a), 10(b), 10(c) and 10(d), indicate that the neural networks did not undergo any form of overfitting or memorization. This shows the generalization capability of neural networks.
Similar to the classical ML models, we also worked with the following consecutive numbers for the neural networks: 6-digit numbers - (999999, 999998) and (100000, 100001); 10-digit numbers - (9999999999, 9999999998) and (1000000000, 1000000001). First, here are the results from ChatGPT3 when asked to recognize the frequency of each digit in the above numbers; refer to Figures 11(a), 11(b), 11(c), 11(d), 11(e) and 11(f).
Figure 10: **(a)-(d):** _Loss (MSE) Curves for Neural Networks vs Number of Epochs_
To summarize the results, except for the number 9,999,999,999 which it predicted completely correctly, all the predictions by ChatGPT3 were even worse than the classical ML models. This further showcases the deceptiveness of the simplicity of the task. The neural networks, on the other hand, produced the following results after the classification modification:
* 6 - Digit Dataset:
Figure 11: **(a) - (b):**_ChatGPT3 responses for the above-mentioned numbers_
* Input: (999999, 999998) and (100000, 100001)
* Neural Network output: [0,0,0,0,0,0,0,0,0,5] and [0,0,0,0,0,0,0,0,1,4] for the former pair, and [5,1,0,0,0,0,0,0,0,0] and [4,2,0,0,0,0,0,0,0,0] for the latter.
* Neural Network with Embedding output: [0,0,0,0,0,0,0,0,0,6] and [0,0,0,0,0,0,0,0,1,5] for the former pair, and [5,3,0,0,0,0,0,0,0,0] and [3,3,0,0,0,0,0,0,0,0] for the latter.
* 10 - Digit Dataset:
* Input: (9999999999, 9999999998) and (1000000000,1000000001)
* Neural Network output: [0,0,0,0,1,1,0,0,0,9] and [0,0,0,0,1,1,0,0,1,8] for the former pair, and [7,2,1,0,1,0,0,1,0,0] and [7,2,1,0,0,0,1,0,0,0] for the latter.
* Neural Network with Embedding output: [0,1,0,0,0,1,0,2,2,9] and [0,0,0,0,1,0,0,1,9] for the former pair, and [9,1,0,0,0,1,0,0,0,0] and [9,2,0,0,0,0,2,0,0,1] for the latter.
Interestingly, half of these predictions are incorrect but the other half are either completely correct or close to it with one or so digits wrong. They, at least, do not make the exact same prediction for the successive numbers unlike the classical ML models which means that they are partially learning the pattern. However, similar to classical ML models, their performance significantly worsens for 10-digit numbers as well. The proportion of data seems to play a significant role in the performance of all the models but with varying degrees.
#### Ablation Study
When running the neural networks on the 6-digit and 10-digit test sets, we found some alternative hyperparameter values, learning rate (lr) and layers, which gave significantly better outputs in terms of the regression metrics. We have mentioned them in the table given below, refer to Table 5.
## Conclusion
In this research work we compared the performance of different classical machine learning models and neural networks in identifying the frequency of occurrences of each digit in a given
Table 5: Alternative hyperparameter values for neural networks on the test sets

| **Test Set** | **Method** | **Hyperparameters** | **RMSE** | **MAE** |
| --- | --- | --- | --- | --- |
| 6-Digit | Neural Network + Embedding | lr = 1e-5, layers = (96,96,96) | 0.093 | 0.073 |
| 10-Digit | Neural Network | lr = 0.003, layers = [128,128,128] | 0.171 | 0.130 |
| 10-Digit | Neural Network + Embedding | lr = 5e-3, layers = [256,256,256] | 0.221 | 0.168 |
number. We observed that the neural networks significantly outperformed the classical ML models in terms of both the regression and classification metrics for both the 6-digit and 10-digit number datasets.
We discovered that some of the behaviors of the classical machine learning models such as split condition and averaging made the trees extremely biased and led to overfitting and memorization. Thus they failed in pattern recognition. The neural networks, on the other hand, thanks to their non-linear optimization were substantially more successful in recognizing the evident pattern. The accuracy was greater than 95% for all scenarios which indicates that the deep learning models did, in fact, learn the pattern accurately. This research further acknowledges the vast learning capabilities and adaptability of neural networks that have been stated in previous research work.
All the experiments were conducted on a MacBook M2 Air in a matter of two months. With more time, one could potentially extend the research to other datasets with larger numbers of digits and may find various other trends with neural networks. Regardless, they already seem to be reliable in learning this unconventional, yet simple pattern.
Furthermore, despite the research being experimental in nature, the results obtained in this research can potentially be applied to downstream computer vision problems, such as counting the number of times a specific object occurs in an image, which is an essential task in many computer vision applications [3, 5, 15, 16]. Also, the ability to detect the most frequent elements can be used to detect the rare elements, which can have applications in healthcare, e.g. to detect rare diseases.
## Acknowledgement
I would like to acknowledge the unconditional support and guidance offered from my mentor Mr. Viveka Kulharia, PhD in Computer Vision from the University of Oxford, for assisting me in everything, from researching the idea through his resources to writing the paper.
|
2309.09076 | Dynamical Phonons Following Electron Relaxation Stages in Photo-excited
Graphene | Ultrafast electron-phonon relaxation dynamics in graphene hides many distinct
phenomena, such as hot phonon generation, dynamical Kohn anomalies, and phonon
decoupling, yet still remains largely unexplored. Here, we unravel intricate
mechanisms governing the vibrational relaxation and phonon dressing in graphene
at a highly non-equilibrium state by means of first-principles techniques. We
calculate dynamical phonon spectral functions and momentum-resolved linewidths
for various stages of electron relaxation and find photo-induced phonon
hardening, overall increase of relaxation rate and nonadiabaticity as well as
phonon gain. Namely, the initial stage of photo-excitation is found to be
governed by strong phonon anomalies of finite-momentum optical modes along with
incoherent phonon production. Population inversion state, on the other hand,
allows production of coherent and strongly-coupled phonon modes. Our research
provides vital insights into the electron-phonon coupling phenomena in
graphene, and serves as a foundation for exploring non-equilibrium phonon
dressing in materials where ordered states and phase transitions can be induced
by photo-excitation. | Nina Girotto, Dino Novko | 2023-09-16T18:55:29Z | http://arxiv.org/abs/2309.09076v1 | # Dynamical Phonons Following Electron Relaxation Stages in Photo-excited Graphene
###### Abstract
Ultrafast electron-phonon relaxation dynamics in graphene hides many distinct phenomena, such as hot phonon generation, dynamical Kohn anomalies, and phonon decoupling, yet still remains largely unexplored. Here, we unravel intricate mechanisms governing the vibrational relaxation and phonon dressing in graphene at a highly non-equilibrium state by means of first-principles techniques. We calculate dynamical phonon spectral functions and momentum-resolved linewidths for various stages of electron relaxation and find photo-induced phonon hardening, overall increase of relaxation rate and nonadiabaticity as well as phonon gain. Namely, the initial stage of photo-excitation is found to be governed by strong phonon anomalies of finite-momentum optical modes along with incoherent phonon production. Population inversion state, on the other hand, allows production of coherent and strongly-coupled phonon modes. Our research provides vital insights into the electron-phonon coupling phenomena in graphene, and serves as a foundation for exploring non-equilibrium phonon dressing in materials where ordered states and phase transitions can be induced by photo-excitation.
phonon dynamics, electron-phonon coupling, graphene, density functional theory
## 1 Introduction
By manipulating ionic motion, photoexcitation, as in a pump-probe setup, is a powerful tool that paves the way for highly effective design and customization of the desired functionalities of materials [1, 2]. Namely, it can induce novel phases [3], sometimes unreachable in equilibrium, opening, for example, the possibility of light-induced superconductivity [4, 5], charge-density-wave order [6, 7], ferroelectricity [8], and disorder-assisted structural transitions [9]. Very often, these states of matter are characterized by a strongly-coupled phonon mode and considerable electron-phonon coupling (EPC), which are believed to be additionally renormalized in a photo-excited non-equilibrium regime [10, 11]. For instance, photo-induced softening of the relevant phonon mode is quite common in ultrafast dynamics [12, 13]; nonetheless, photo-excitation can in some cases lead to phonon hardening and consequently stabilize the structural phase [14, 15, 16, 17, 18].
Graphene exhibits extraordinary mechanical, transport and optoelectronic properties [19], and is therefore an ideal platform to investigate the fundamentals of ultrafast electron-lattice dynamics. Experimental techniques such as time- and angle-resolved photoemission spectroscopy (tr-ARPES) [20, 21, 22, 23, 24], two-photon photoemission [25, 26, 27], and transient optical spectroscopy [28, 29, 30] have revealed various aspects of carrier thermalization in graphene, such as rapid electron-electron and electron-phonon recombination of highly non-equilibrium distribution towards population inversion [21], scattering of electrons with strongly-coupled optical phonons [22, 31], cooling of hot carriers via acoustic phonons [22], as well as electron band renormalization [30]. On the other hand, non-equilibrium phonon dynamics in graphene, phonon relaxation pathways and the corresponding photo-induced phonon dressing are less investigated. Raman and coherent phonon spectroscopy have demonstrated considerable phonon hardening of the \(E_{2g}\) optical phonon mode [14, 15, 32], which was attributed to the reduction of the nonadiabatic electron-phonon interaction in non-equilibrium [14]. Recent attosecond core-level spectroscopy also uncovered ultrafast phonon stiffening of both zone-center \(E_{2g}\) and zone-edge \(A^{\prime}_{1}\) phonon Kohn anomalies [33].
The theoretical studies of the aforesaid ultrafast phenomena are mostly based on the two- and multi-temperature models [34, 22, 35], as well as on the time-dependent Boltzmann equations [29, 36], and were proven to be valuable in comprehending energy transfer between electrons and strongly-coupled hot phonons. However, these methods do not account for transient phonon renormalization and as such are not suitable for exploring all aspects of EPC and phonon dynamics far from equilibrium, such as structural transitions and soft phonon physics. The phonon renormalization in graphene was recently inspected by means of real-time time-dependent density functional theory in combination with molecular dynamics and real-space lattice distortions, which allows for time-resolved self-consistent renormalization of phonons and EPC strengths [11]. However, since it relies on real-space distortions within the supercell approach, it is able to track the phonon dynamics of only a few selected modes. In addition, the study delivered somewhat conflicting results, i.e., instead of phonon hardening and electron-phonon decoupling as observed in the experiments [14, 15, 32, 33], phonon softening and enhanced EPC strength were reported [11]. This calls for further theoretical insights into non-equilibrium phonon dynamics in graphene.
Here, we overcome these difficulties and investigate the effects of the photo-excited population on phonon dynamics and EPC in graphene by means of constrained density functional perturbation theory (cDFPT) [37, 38]. An important advantage of this approach is that it provides a full momentum-dependent picture of phonon renormalization due to a constrained photo-excited electron distribution [39, 40, 41, 42, 43, 37]. In addition, we combine cDFPT with nonadiabatic phonon self-energy calculations in order to provide information on phonon relaxation rates (linewidths) and nonadiabatic frequency modifications, which are absent in the standard adiabatic cDFPT studies. We discuss phonon renormalization for the usual stages of carrier relaxation in graphene, namely, for strong non-equilibrium, population inversion, and hot (Fermi-Dirac) carrier distribution. We observe remarkable modifications of the well-known Kohn anomalies at the \(\Gamma\) and K points of the Brillouin zone, as well as the appearance of additional phonon anomalies away from the Brillouin zone center induced by the non-equilibrium population, renormalizing the equilibrium dispersion by up to 6 meV. A light-induced increase of the overall phonon linewidth and nonadiabatic effects are observed along with a striking phonon gain, where the latter becomes coherent once graphene reaches the state of photo-inversion. From the fermiology analysis we show that the EPC matrix elements are slightly reduced in the non-equilibrium state, while the observed features mostly stem from the modified scattering phase space under transient conditions. With this work, we expand our microscopic understanding of the phonon relaxation dynamics of graphene far from equilibrium, which along with the well-explored electron thermalization paths constitutes the full dynamical picture of electron-phonon scatterings in graphene.
Photo-excitation implies promoting a certain carrier density from the valence to the conduction band, separated by the laser-pulse energy \(\hbar\Omega\), which typically amounts to 1.5 eV. In this work, we track phonon properties for various electron distributions inside the Dirac cone directly following the pulse application. First, an equilibrium distribution [Fig. 1 (a)] is disrupted with a short (fs) pulse, causing empty states to appear below the Dirac point and filled states above it [Fig. 1 (b)]. Photo-generated electrons and holes establish separate distributions and begin a rapid process of thermalization and cooling through carrier-carrier and carrier-phonon scatterings. The energy transfer to the strongly-coupled optical phonons produces hot phonons, reported to exist on a femtosecond time-scale [28, 33]. Then, the photo-inverted state is established [Fig. 1 (c)] through the competition between phonon-induced intraband scattering and Auger recombination [44]. The formation of the population inversion has already been thoroughly explored with tr-ARPES [21, 22, 45], which reveals its relaxation time of \(\sim\)100 fs. After its decay electrons follow a Fermi-Dirac distribution at elevated
temperatures [Fig. 1 (d)]. The whole process of electron thermalization conceptually follows the one described for graphite in Ref. [20]. Subsequent hot-carrier cooling is governed by phonon-assisted scatterings on the time scale of \(1-10\,\mathrm{ps}\).
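As a rough visual aid for these relaxation stages, the following short sketch builds the four occupation functions of Fig. 1 on an energy grid around the Dirac point. It is purely illustrative: the pump window, the quasi-chemical potentials of the inverted state and the electronic temperatures are assumed values chosen for plotting, not the constrained occupations used in our cDFPT calculations.

```python
import numpy as np

kB = 8.617e-5                      # eV/K
hbar_Omega = 1.5                   # eV, laser-pulse energy quoted in the text
e = np.linspace(-1.5, 1.5, 3001)   # energy relative to the Dirac point, eV

def fermi(e, mu, T):
    return 1.0 / (np.exp((e - mu) / (kB * T)) + 1.0)

# (a) equilibrium Fermi-Dirac distribution at room temperature
f_eq = fermi(e, 0.0, 300.0)

# (b) directly after the pulse: a block of electrons promoted vertically by hbar_Omega
eps1, eps2 = -hbar_Omega / 2 - 0.1, -hbar_Omega / 2 + 0.1   # assumed pump window below E_D
eps3, eps4 = eps1 + hbar_Omega, eps2 + hbar_Omega           # its image above E_D
f_pump = f_eq.copy()
f_pump[(e > eps1) & (e < eps2)] = 0.0                       # photo-generated holes
f_pump[(e > eps3) & (e < eps4)] = 1.0                       # photo-excited electrons

# (c) population inversion: separate quasi-chemical potentials for electrons and holes
mu_e, mu_h, T_inv = 0.25, -0.25, 1000.0                     # assumed values
f_inv = np.where(e > 0, fermi(e, mu_e, T_inv), fermi(e, mu_h, T_inv))

# (d) hot thermalized Fermi-Dirac distribution at the temperature quoted in the text
f_hot = fermi(e, 0.0, 2200.0)

for name, f in [("equilibrium", f_eq), ("photo-excited", f_pump),
                ("inverted", f_inv), ("hot", f_hot)]:
    print(f"{name:13s} occupation at +0.1 eV: {np.interp(0.1, e, f):.3f}")
```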
We implemented the non-equilibrium distributions in the calculation of electronic-structure properties and then calculated the renormalization of phonons with the PHonon [46, 47] and EPW [48, 49] codes in the adiabatic and nonadiabatic approximations (see Supporting Information (SI) for more details). The resulting adiabatic phonon dispersions are shown in Fig. 2. We compare the phonon dispersion for the case of the photo-excited electron population as in Fig. 1(b) and the photo-inverted distribution of Fig. 1(c) with the equilibrium adiabatic case. The largest effects of the non-equilibrium electron distribution occur for the strongly-coupled optical modes \(E_{2g}\) and \(A^{\prime}_{1}\) around the \(\Gamma\) and K points, respectively. We observe phonon hardening for both phonons, ranging from \(\simeq 1\,\mathrm{meV}\) for the \(E_{2g}\) mode to \(4-6\,\mathrm{meV}\) for the \(A^{\prime}_{1}\) mode. Our results are in good agreement with Refs. [14, 15, 32] and especially with Ref. [33], where the attosecond core-level spectroscopy demonstrated that the Raman-inactive \(A^{\prime}_{1}\) mode is the dominating channel for dissipation of electronic coherence due to its stronger coupling to electrons [50]. Note also that in our calculations the acoustic sum rule is not fully fulfilled because we inhibit the long-wavelength dipole-allowed transitions [51, 52] by filling the states above the Fermi energy.
Figure 2: Adiabatic DFPT phonon dispersion in the presence of a photo-excited (red line) and photo-inverted (yellow line) electron distribution in comparison with an equilibrium result (grey dashed line). Two insets are a zoom in region around high-symmetry points (\(\Gamma\) and K), showing the hardening of strongly-coupled optical modes (\(E_{2g}\) and \(A^{\prime}_{1}\)) along with a schematics of the corresponding atomic motions.
Figure 1: Schematic representation of different stages of electron relaxation in photo-excited graphene. (a) Dirac cone with an equilibrium Fermi-Dirac electron distribution. (b) At \(t_{0}\), the laser pulse excites electrons from the [\(\varepsilon_{1}\), \(\varepsilon_{2}\)] energy interval vertically upwards in the conduction band, where they fill the states in the [\(\varepsilon_{3}\), \(\varepsilon_{4}\)] range. (c) Immediately after the pulse, electrons scatter with other electrons and strongly-coupled phonons (\(E_{2g}\simeq 200\,\mathrm{meV}\) and \(A^{\prime}_{1}\simeq 160\,\mathrm{meV}\)) until the population inversion is created. (d) When the electron thermalization time (\(\tau_{\mathrm{th}}\)) is reached, electrons follow a hot Fermi-Dirac distribution, which in our calculations amounts to 2200 K.
In Fig. 3 we present the results of the non-equilibrium EPC calculations as obtained with cDFPT. They contain the phonon spectral function \(B_{\nu}^{c}(\mathbf{q},\omega)\), which incorporates nonadiabatic renormalization effects, and the transverse and longitudinal optical phonon linewidths along the \(\Gamma-\mathrm{K}\) path. With the term “adiabatic” we refer to a calculation where the phonon energy \(\omega\) is omitted in the phonon self-energy calculations, and with “nonadiabatic” or “dynamic” to one where it is included [53, 54] (see also SI). The first row represents four spectral functions corresponding to the four distinct electron distributions from Fig. 1, together with the equilibrium adiabatic result for comparison. In equilibrium graphene, the linewidth contributions for the \(E_{2g}\) and \(A_{1}^{\prime}\) modes come from the vertical interband electron scattering inside the Dirac cone (\(q\simeq\Gamma\)) or between the two neighboring ones (\(q\simeq\mathrm{K}\)). The features around the \(\Gamma\) and \(\mathrm{K}\) points are symmetrical, but differ due to the disparate EPC strengths.
The photo-excited electron distribution opens up new scattering possibilities, which are schematically shown with arrows in Fig. 3(f). Besides the significant modifications of the well-known nonadiabatic Kohn anomalies [55, 56] at the \(\Gamma\) and \(\mathrm{K}\) points coming from the non-equilibrium population, the spectral function shows additional new anomalies further away from the \(\Gamma\) and \(\mathrm{K}\) points. These photo-induced dynamical phonon anomalies come from the electron transitions away from the Dirac point, at the photo-doped regions. Compared to the equilibrium adiabatic dispersions, a \(4\,\mathrm{meV}\) renormalization is visible directly at the \(\Gamma\) point for the \(E_{2g}\) mode, and away from it, the highest optical branch is renormalized by \(5\,\mathrm{meV}\). For the \(A_{1}^{\prime}\) mode, we observe a \(5\,\mathrm{meV}\) hardening and a \(6\,\mathrm{meV}\) modification at the intersection of the two optical branches. We note that these sharp transient frequency modifications are quite large and are comparable to the nonadiabatic frequency shifts of the \(E_{2g}\) mode in highly-doped graphene [56]. In the case of population inversion, the non-equilibrium
Figure 3: Dynamical phonon spectral functions at different stages of electron relaxation in comparison with the adiabatic equilibrium DFPT result (grey dashed line), i.e., for: (a) equilibrium regime, (b) far non-equilibrium following the laser excitation, (c) population inversion, (d) hot equilibrium distribution. Note the strong renormalizations occurring near the two strongly-coupled optical modes (\(E_{2g}\) and \(A_{1}^{\prime}\)). Also, the negative linewidth contribution to the spectral function is shown in teal. The histograms in the insets of (b)-(d) show the nonadiabatic corrections to the \(E_{2g}\) and \(A_{1}^{\prime}\) modes (\(\Delta^{\mathrm{NA}}=\omega^{\mathrm{NA}}-\omega^{\mathrm{A}}\)). (e) Phonon linewidth of the LO/TO optical modes along the \(\Gamma\) to \(\mathrm{K}\) path, due to EPC. Color-coding is the same as in Fig. 2. (f-g) Intra- and inter-valley (i.e., \(\mathrm{K}\rightarrow\mathrm{K}\) and \(\mathrm{K}\rightarrow\mathrm{K}^{\prime}\)) electron transitions which contribute to the phonon self-energy. The color-coded arrows reveal positive (brown) and negative (teal) contributions to the linewidth. (h) Same as in (e) but for the photo-inverted (yellow) and hot electron distributions (dark red).
electron distribution is condensed in the vicinity of the Dirac point, bringing the non-equilibrium spectral features closer to the \(\Gamma\) and K points. We again observe phonon renormalization for the \(E_{2g}\) and \(A^{\prime}_{1}\) modes of about 2 meV for both. Interestingly, for the strong non-equilibrium and population inversion we observe additional phonon hardening (softening) for \(E_{2g}\) (\(A^{\prime}_{1}\)) when the nonadiabatic effects are taken into account [insets of Figs. 3(b) and 3(c)]. In fact, a significant increase of the nonadiabatic correction is obtained for the strong non-equilibrium case, while it is reduced for the population inversion and almost vanishes for the hot equilibrium case. Note that this is contrary to the conclusions drawn in Ref. [14], where a decrease of the nonadiabaticity was suggested.
The corresponding contributions to the linewidth [Figs. 3(e) and 3(h)] show that the values at the \(\Gamma\) and K points are unaltered, while slightly away from these points the linewidth is significantly enhanced compared to its value at equilibrium. For the highly non-equilibrium case, additional notable phonon broadening arises away from the high-symmetry points, at momenta where the new dynamical anomalies appear. As stated, these linewidth features stem from the electron transitions between the photo-excited block of filled states above the Dirac point and the empty block below it.
A crucial thing to notice is that in the photo-excited state, electrons can scatter from the filled states at higher energies to the low-energy empty states [see Fig. 3 (f-g), downwards pointing teal arrows], causing a negative linewidth contribution. This phonon gain happens in the immediate vicinity of the dynamical anomaly. For the population inversion, the phonon-gain contributions are located directly at the \(\Gamma\) and K high-symmetry points. In graphene, acoustic phonon generation was experimentally achieved [57] and theoretically explained [58]. A conceptually similar phenomenon has recently been widely explored in graphene, namely the photo-induced plasmon amplification with a high potential for the development of novel optoelectronic devices [59, 60, 61, 62, 63, 64, 65, 66, 67, 68]. Our observation of phonon gain is in agreement with the observation of the negative plasmon linewidth and negative conductivity in the photo-inverted state, and it simply means that hot phonons are emitted in the non-equilibrium regime. In particular, the results show that the far non-equilibrium state supports the generation of incoherent hot phonons with momenta slightly away from the high-symmetry points (i.e., the phonon displacement pattern is shifted in phase between neighbouring unit cells), while the population inversion supports coherent phonon generation of hot \(E_{2g}\) and \(A^{\prime}_{1}\) phonons (i.e., the phonon displacement pattern is repeated between neighbouring unit cells). This could explain, on the one hand, the reduction of the phonon dephasing rate of the \(E_{2g}\) mode as reported in Ref. [14], and, on the other hand, the non-displacive mechanism for the generation of both \(E_{2g}\) and \(A^{\prime}_{1}\) hot coherent phonons as obtained in attosecond core-level spectroscopy [33].
As for the hot electron distribution, we calculated the number of carriers located above the Dirac point in the state of photo-inversion and found the temperature for which a Fermi-Dirac distribution produces the same number of carriers in the conduction band. We show the spectral function for the hot equilibrium electron distribution at \(T=2200\) K. The \(E_{2g}\) and \(A^{\prime}_{1}\) modes are slightly hardened, and here one can clearly see also the edge of the electron-hole pair excitation continuum. The linewidth resembles the one obtained for the photo-inverted population, only without the negative contributions.
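The carrier-number matching described above can be sketched in a few lines, assuming an ideal Dirac-cone density of states; the Fermi velocity and the quasi-chemical potential of the photo-inverted electrons below are illustrative assumptions rather than the values extracted from our calculations.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

kB = 8.617e-5      # eV/K
hbar_vF = 6.6      # eV*Angstrom, assumed Fermi velocity of graphene

def dos(e):        # Dirac-cone DOS per unit area (spin and valley included)
    return 2.0 * abs(e) / (np.pi * hbar_vF**2)

def fermi(e, mu, T):
    return 1.0 / (np.exp((e - mu) / (kB * T)) + 1.0)

def n_cond(mu, T):
    """Electron density in the conduction band, per Angstrom^2."""
    return quad(lambda e: dos(e) * fermi(e, mu, T), 0.0, 3.0, limit=200)[0]

# photo-inverted state: assumed electron quasi-chemical potential and temperature
mu_e, T_inv = 0.25, 1000.0
n_exc = n_cond(mu_e, T_inv)

# hot thermalized state: mu = 0; find the temperature giving the same carrier number
T_hot = brentq(lambda T: n_cond(0.0, T) - n_exc, 300.0, 20000.0)

print(f"conduction-band density   n = {n_exc * 1e16:.2e} cm^-2")
print(f"matching hot-carrier temperature   T = {T_hot:.0f} K")
```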
Further analysis includes changing the excited carrier density, which experimentally corresponds to changing the laser fluence (Fig. 4). The effective carrier density is denoted in Fig. 4(a) above the Dirac cones, and is calculated as the summed density of photo-excited electrons and holes. As expected, we observe larger phonon stiffening in the DFPT calculation for a larger carrier density. Here we show only the adiabatic dispersions to focus solely on the increased phonon stiffening at the \(\Gamma\) and K points. The \(E_{2g}\) phonon hardening increases by 2 meV as the photo-excited density increases by \(6\times 10^{12}\) cm\({}^{-2}\). Further, we observe larger phonon linewidths deriving from the modified phase space with increasing carrier density. Directly at the \(\Gamma\) point the linewidth remains at its equilibrium value, while slight differences in the position of the peaks away from the high-symmetry points are visible. Furthermore, since in experiments graphene is frequently placed on a substrate, we also provide results for doped graphene, specifically for Fermi levels of \(E_{F}=200\) and 400 meV. We compare the photo-doped spectral function with the adiabatic DFPT equilibrium calculation for the corresponding
Fermi energy (dashed black lines). Again, we notice larger phonon hardening around the K point (5 meV for both dopings) than around the \(\Gamma\) point (4 meV for \(E_{F}\) = 200 meV and 1 meV for \(E_{F}\) = 400 meV). We observe dynamical anomalies at the same positions as they occur in the pristine photo-doped case. We notice how, with increased doping, the dynamic phonon dispersion softens and the effects of photo-induced phonon hardening are less pronounced. The largest softening is observed around the dynamical anomalies. In general, for doped graphene, the intrinsic linewidths of the two highest optical modes are larger than for pristine graphene. When doped graphene is photo-excited, the linewidth behaves in the same fashion as in the pristine photo-doped case, with the strong dynamical anomalies occurring in the vicinity of the high-symmetry points.
Finally, we present the analysis of the adiabatic phonon self-energy \(\pi_{\nu}^{c}(\mathbf{q})\) as calculated in cDFPT [69, 70] (see also SI). Averaging out the electronic degrees of freedom deriving from the EPC matrix elements leads to \(|g_{\nu}^{nm,c}(\mathbf{k},\mathbf{q})|\)\(\rightarrow|g_{\nu}^{c}(\mathbf{q})|\), and the self-energy expression can be written as \(\pi_{\nu}^{c}(\mathbf{q})=|g_{\nu}^{c}(\mathbf{q})|^{2}\chi_{0}^{c}(\mathbf{q})\), where \(\chi_{0}^{c}(\mathbf{q})\) denotes the bare susceptibility function.
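To illustrate how the occupations enter this factorization, the sketch below assembles a static Lindhard-type bare susceptibility of a single linear Dirac cone with an arbitrary (equilibrium or photo-excited) occupation function. The two-band model, chirality overlap factor, coarse k-grid and toy pump windows are illustrative assumptions and not the cDFPT implementation behind Fig. 5; the point is only that, for fixed matrix elements, all non-equilibrium effects enter through the occupation factors.

```python
import numpy as np

hbar_vF = 6.6      # eV*Angstrom, assumed Fermi velocity
eta = 0.01         # eV, small broadening
Nk = 181           # coarse k-grid, for illustration only
kmax = 0.5         # 1/Angstrom, cutoff around the Dirac point

k = np.linspace(-kmax, kmax, Nk)
KX, KY = np.meshgrid(k, k, indexing="ij")
dA = (k[1] - k[0])**2                        # k-space area element

def occupation(e, kind):
    """Toy occupations: 'eq' = filled valence band; 'pumped' adds a photo-excited
    electron (hole) block above (below) the Dirac point."""
    f = (e < 0).astype(float)
    if kind == "pumped":
        f[(e > -0.85) & (e < -0.65)] = 0.0
        f[(e > 0.65) & (e < 0.85)] = 1.0
    return f

def chi0(qx, qy, kind):
    """Static bare susceptibility of one Dirac cone (per unit area)."""
    theta_k = np.arctan2(KY, KX)
    theta_kq = np.arctan2(KY + qy, KX + qx)
    chi = 0.0 + 0.0j
    for n in (-1, 1):                                     # band indices: valence/conduction
        ek = n * hbar_vF * np.hypot(KX, KY)
        fk = occupation(ek, kind)
        for m in (-1, 1):
            ekq = m * hbar_vF * np.hypot(KX + qx, KY + qy)
            fkq = occupation(ekq, kind)
            overlap = 0.5 * (1.0 + n * m * np.cos(theta_kq - theta_k))
            chi += np.sum(overlap * (fk - fkq) / (ek - ekq + 1j * eta)) * dA
    return chi / (2 * np.pi)**2

for kind in ("eq", "pumped"):
    c = chi0(0.05, 0.0, kind)
    print(f"{kind:7s}  Re chi0 = {c.real:+.4e}   Im chi0 = {c.imag:+.4e}   (1/(eV Ang^2))")
```

In this toy expression the equilibrium and photo-excited cases differ only through the occupation factors, which is precisely the separation between matrix-element and phase-space effects discussed next.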
In this way, we can separate the non-equilibrium effects that come from the modifications in the screened EPC matrix elements \(|g_{\nu}^{c}(\mathbf{q})|\) and the photo-induced changes in the available phase space via \(\chi_{0}^{c}(\mathbf{k})\). In Fig. 5 we show the analysis for the conduction band, but the results for the valence band are the same due to the electron-hole symmetry. In the first two columns we show the relative difference between the EPC matrix elements in the photo-excited and photo-inverted graphene with respect to the pristine equilibrium case for the \(E_{2g}\) and \(A_{1}^{\prime}\) modes, \(\Delta^{rel}|g_{\nu}(\mathbf{k})|^{2}=(|g_{\nu}^{c}(\mathbf{k})|^{2}-|g_{\nu}^{\text{eq}}(\mathbf{k})|^{2})/|g_{\nu}^{\text{eq}}(\mathbf{k})|^{2}\). In the first column, the relative differences are calculated after the electron-phonon matrix elements were summed throughout the whole Brillouin zone for a chosen \(\mathbf{q}\) point on the \(\Gamma\) - K path and for each one of the two highest optical modes. The largest value of the relative change is only \(\pm 10\%\) and it appears for those wavevectors \(\mathbf{q}\) for which the electronic transitions are forbidden by the specific electronic structure of graphene and for which the
Figure 4: (a) Non-equilibrium phonon renormalization for different densities of excited electrons. In the first row, we show the corresponding adiabatic DFPT dispersions in the vicinity of \(\Gamma\) and K points and observe phonon-hardening increase with photo-excited electron density. The second row contains the corresponding EPC induced linewidths. (b) Two cases of electron-doped photo-excited graphene (i.e., for \(E_{F}\) = 0.2 eV and \(E_{F}\) = 0.4 eV). In the first row, we show the corresponding dynamic spectral functions. The second row shows the linewidths for the two doping regimes. We observe the same features as in Fig. 3(e) with significant increase in the linewidth close to the \(\Gamma\) point. It derives from the larger electron density at the Fermi level. Again, note the negative linewidth occurrence and dynamical anomalies.
EPC strength is weak at equilibrium (i.e., away from the \(\Gamma\) and K points). More dramatic changes occur if \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\) is resolved in the k\({}_{x}\)-k\({}_{y}\) plane [column (b) in Fig. 5]. Here we explicitly show the result for \({\bf q}=\Gamma\) for which we found the largest values of \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\) (see SI for results in additional q points). For both the photo-excited and photo-inverted electron distribution case, our calculations show that \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\) reaches values of \(\pm\) 40% in certain regions of k\({}_{x}\)-k\({}_{y}\) space for the E\({}_{2g}\) mode. We observe a symmetrical pattern of positive and negative contributions to \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\). Due to phase space restrictions, only a small region around the K points is picked out when doing a self-energy calculation, or multiplying \(|g_{\nu}^{c}({\bf q})|^{2}\) and \(\chi_{0}^{c}({\bf q})\). In other words, as shown in columns (c) and (d) of Fig. 5, \(\chi_{0}^{c}({\bf k})\) is finite around the Dirac points also in a symmetric pattern. In this way, when calculating the phonon self-energy and summing over the whole Brillouin zone, the net effect of the large relative changes \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\) cancels out as equal amounts of positive and negative values are picked out by the \(\chi_{0}^{c}({\bf k})\) factor (see Sec. S2 in SI for more detailed discussion).
Therefore, the effects of photo-induced changes of the electron-phonon matrix elements are small in the end, and it turns out that the phonon hardening and decrease of the dephasing rate observed in Ref. [14], along with other non-equilibrium phenomena presented here, come dominantly from the photo-induced changes in the carrier scattering phase space \(\chi_{0}^{c}({\bf k})\). Thus, optically-induced phonon blueshifts are not necessarily a sign of suppressed coupling and can be of purely electronic origin, as was shown, for instance, in the case of photo-excited TiSe\({}_{2}\)[18]. This resolves the debate on whether the EPC strength is suppressed [14] or enhanced [11] in non-equilibrium graphene.
We elaborate on this claim by thoroughly inspecting the color-coded contributions to the bare electron susceptibility \(\chi_{0}^{c}({\bf k})\). The features for \({\bf q}\) = K and \({\bf q}\) = \(\Gamma\) are in essence the same, so the discussion is applicable to both. In the equilibrium case, where the Fermi surface is almost a point, the only contribution comes from the Dirac point. Since the electrons can only scatter from the filled states below the Fermi energy to the empty states above it, the final result for \(\chi_{0}({\bf k})\) is negative. Focusing now on the photo-excited case (Fig. 5, first row), we first notice the mentioned equilibrium contribution (red dots) positioned directly at the Dirac points. Photo-excited electrons fill the states visible here as an additional triangle around each Dirac point. Each triangle consists of positive and negative
Figure 5: The analysis of static phonon self-energies, revealing the electronic processes behind phonon anomalies in the case of photo-excited (first row) and photo-inverted (second-row) electron distribution. (a) Relative change in the \({\bf k}\)-summed EPC matrix elements \(\Delta^{rel}|g_{\nu}({\bf k})|^{2}\) for the two highest optical modes and along the \(\Gamma-{\rm K}\) q-path. (b) \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\) resolved in \({\bf k}\) space. (c),(d) \({\bf k}\)-resolved static susceptibility \(\chi_{0}^{c}({\bf k})\) for the \(E_{2g}\) and \(A_{1}^{\prime}\) modes, respectively. The positive (negative) contributions to both \(\Delta^{rel}|g_{E_{2g}}({\bf k})|^{2}\) and \(\chi_{0}^{c}({\bf k})\) are shown with red (blue).
susceptibility contributions. Electrons from the positive \(\chi_{0}^{c}(\mathbf{k})\) portion are responsible for the negative linewidth contribution, as they contribute by scattering from the higher-energy filled states to the lower-energy empty ones. Due to finite temperature, this is in principle also possible within an equilibrium distribution, but those contributions are then suppressed by the much larger negative \(\chi_{0}(\mathbf{k})\) contribution. The electrons from the negative \(\chi_{0}^{c}(\mathbf{k})\) section make standard transitions to higher-energy empty states. In the EPC calculations, we used a dynamical susceptibility term [see Eq. (S1) in SI], which, together with varying the wavevector \(\mathbf{q}\), leads to the competition between these two contributions and, hence, to the net negative linewidth regions visible in Fig. 3. The obvious difference between the susceptibility contribution shapes at the \(\Gamma\) and K points is the result of trigonal warping [71], which is reversed in the neighboring K points. Setting \(\mathbf{q}=\mathrm{K}\) leads to the superposition of two relatively rotated triangles, making the \(A_{1}^{\prime}\) susceptibility look circular. \(\chi_{0}^{c}(\mathbf{k})\) for the photo-inverted distribution consists of two circular contributions, as the filled/empty states are in the energy range of a linear electron distribution. Color-coding again suggests that the electrons closer to the Dirac point rather scatter to the empty states below the Fermi level, while the higher-energy electrons do the opposite and contribute positively to the phonon linewidth. In this case, varying the wavevector \(\mathbf{q}\) in the dynamical susceptibility calculation reduces the phase space for the electrons in the negative section, as a number of vertical transitions is restricted. This confines the phonon gain to lie directly at the high-symmetry points.
Note, finally, that the present analysis of phonon dynamics based on cDFPT is not restricted to the non-equilibrium induced by laser pulses, but could be utilized to study phonon renormalization under any out-of-equilibrium conditions. For instance, the impact of static electric fields and the corresponding non-equilibrium electrons on phonons in current-carrying materials is still not well understood despite its importance for comprehending various transport properties [72, 73, 74].
Understanding the mechanisms behind the out-of-equilibrium EPC provides valuable insights into the fundamental physics of graphene and helps unravel the complex interplay between charge carriers and lattice vibrations. We investigated the coupling of the high-energy optical phonon modes with the photo-excited electron distribution by means of cDFPT. We observed hardening of the well-known Kohn anomalies at the center and edge of the Brillouin zone. The latter comes from the modified phase space for electron transitions, which leads to different screening effects, while the effective EPC strengths are only slightly changed. We obtained complex nonadiabatic EPC features that emerge in both the dispersion and linewidth, and mostly originate from the new scattering channels opened in non-equilibrium. For instance, sharp dynamical phonon anomalies away from the high-symmetry points and an overall increase of the phonon scattering rate have been observed. Also, we showed incoherent phonon gain at finite wavevectors irrespective of the doping level or the concentration of the photo-excited carriers, while coherent phonon generation is expected in the state of population inversion and is within the scope of typical experiments. We believe our work offers crucial information on the nature of EPC in graphene, explains the known non-equilibrium features and sheds new light on the understanding of the underlying ultrafast vibrational relaxation mechanisms.
**Acknowledgement** Useful discussions with Jan Berges and Samuel Ponce are gratefully acknowledged. We acknowledge financial support from the Croatian Science Foundation (Grant no. UIP-2019-04-6869) and from the European Regional Development Fund for the "Center of Excellence for Advanced Materials and Sensing Devices" (Grant No. KK.01.1.1.01.0001).
## Supporting Information Available
More information on computational details and detailed analysis of the modifications in the electron-phonon matrix elements and scattering phase space due to non-equilibrium distribution |
2309.06345 | Coexistence of localized and extended states in the Anderson model with
long-range hopping | We study states arising from fluctuations in the disorder potential in
systems with long-range hopping. Here, contrary to systems with short-range
hopping, the optimal fluctuations of disorder responsible for the formation of
the states in the gap, are not rendered shallow and long-range when $E$
approaches the band edge ($E\to 0$). Instead, they remain deep and short-range.
The corresponding electronic wave functions also remain short-range-localized
for all $E<0$. This behavior has striking implications for the structure of the
wave functions slightly above $E=0$. By a study of finite systems, we
demonstrate that the wave functions $\Psi_E$ transform from a localized to a
quasi-localized type upon crossing the $E=0$ level, forming resonances embedded
in the $E>0$ continuum. The quasi-localized $\Psi_{E>0}$ consists of a
short-range core that is essentially the same as $\Psi_{E=0}$ and a delocalized
tail extending to the boundaries of the system. The amplitude of the tail is
small, but it decreases with $r$ slowly. Its contribution to the norm of the
wave function dominates for sufficiently large system sizes, $L\gg L_c(E)$;
such states behave as delocalized ones. In contrast, in small systems, $L\ll
L_c(E)$, quasi-localized states are overwhelmingly dominated by the localized
cores and are effectively localized. | V. Temkin, A. S. Ioselevich | 2023-09-12T16:06:00Z | http://arxiv.org/abs/2309.06345v3 | # Coexistence of localized and extended states in the Anderson model with long-range hopping
###### Abstract
We study states arising from fluctuations in the disorder potential in systems with long-range hopping. Here, contrary to systems with short-range hopping, the optimal fluctuations of disorder responsible for the formation of the states in the gap are not rendered shallow and long-range when \(E\) approaches the band edge (\(E\to 0\)). Instead, they remain deep and short-range. The corresponding electronic wave functions also remain short-range-localized for all \(E<0\). This behavior has striking implications for the structure of the wave functions slightly above \(E=0\). By a study of finite systems, we demonstrate that the wave functions \(\Psi_{E}\) transform from a localized to a quasi-localized type upon crossing the \(E=0\) level, forming resonances embedded in the \(E>0\) continuum. The quasi-localized \(\Psi_{E>0}\) consists of a short-range core that is essentially the same as \(\Psi_{E=0}\) and a delocalized tail extending to the boundaries of the system. The amplitude of the tail is small, but it decreases with \(r\) slowly. Its contribution to the norm of the wave function dominates for sufficiently large system sizes, \(L\gg L_{c}(E)\); such states behave as delocalized ones. In contrast, in small systems, \(L\ll L_{c}(E)\), quasi-localized states have localized cores and are effectively localized.
## I Introduction
The theoretical and numerical study of eigenfunctions for the quantum-mechanical problem with deterministic power-law hopping
\[\hat{H}_{\rm hop}=\sum_{{\bf j}{\bf j}^{\prime}}\varepsilon_{{\bf j}-{\bf j}^{\prime}}a^{\dagger}_{{\bf j}}a_{{\bf j}^{\prime}},\quad\varepsilon_{{\bf r}}\propto r^{-\beta}, \tag{1}\]
and local disorder
\[\hat{H}_{\rm dis}=\sum_{{\bf j}}V_{{\bf j}}a^{\dagger}_{{\bf j}}a_{{\bf j}}, \tag{2}\]
which is a modification of the Anderson impurity model [1], first started more than 30 years ago [2] (or see [3; 4] which are closely related) and has attracted significant interest in the community [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. Today, the demand for proper theoretical analysis is great because of the growing number of experimentally accessible physical systems that are described by the same mathematical framework. For example, it can be used to describe quantum superconductor-metal transition in 2D disordered metals [22] or the behavior of arrays of trapped ions [23; 24], which is of great interest in quantum computing (for more examples, see [19]).
In this study, we consider the case when the value of the exponent \(\beta\) in (1) lies in an interval \(D<\beta<3D/2\), where \(D\) is the dimension of the considered lattice (we provide results for any dimension, but our numerical study of the optimal fluctuation, see Section VII, is limited to physical dimensions \(D=1,2,3\) only). In the case that we explore, the effects of typical weak fluctuations of the random potential were studied extensively, and it was shown [18] that a non-Anderson disorder-driven metal-insulator transition takes place. Here, we aim to elaborate on the understanding of the effects of the interplay between typical weak fluctuations of the random potential and rare strong local fluctuations (the latter are sometimes called “rare regions”). We expect the effects of strong local fluctuations to be the main mechanism for the formation of small-sized localized (or rather quasi-localized, see below) states on the background of extended ones. Particularly, we explain numerical results from [25], which seem to indicate the possibility of the coexistence of localized and extended states near one of the edges of the band in the considered model. As will become clear later in this paper, no "true" coexistence is present in the investigated case, and Mott's principle [26] is not violated.
The localized band-gap states arising due to localized fluctuations in a standard Anderson model with nearest neighbor hopping and gaussian disorder in dimensions \(D\leq 3\) are well known - they form the so-called Lifshitz tail in the density of states \(\nu(E)\) within the energy gap (see [27]). For \(E\) deep enough in the gap, \(\nu(E)\) is exponentially small
\[\nu(E)\propto\exp\{-S_{\rm Lif}(E)/W^{2}\},\quad S_{\rm Lif}(E)\propto|E|^{2-D/2} \tag{3}\]
where \(W^{2}=\langle V^{2}\rangle\). Here the energy \(E\) is accounted for with respect to the band edge. The optimal fluctuation of disorder, responsible for formation of the localized state with energy \(E\) has a spatial scale \(a(E)\) and depth \(U(E)\)
where
\[a(E)\propto|E|^{-1/2},\quad U(E)\sim|E|. \tag{4}\]
The optimal fluctuation approach (that is, technically, the steepest descent method for the functional integration over configurations of the random potential) is justified if \(S_{\rm Lif}(E)/W^{2}\gg 1\).
Note that \(S_{\rm Lif}(E)\to 0\) and \(a(E)\to\infty\) as \(E\to 0\), so that the optimal fluctuation method is not applicable in the close vicinity of the band edge.
Generalization of the result (3) to the systems with the general hopping Hamiltonian (1) gives
\[S_{\rm Lif}\propto|E|^{2-D/\alpha},\quad\alpha\equiv\beta-D. \tag{5}\]
The result (5) is perfectly reasonable for \(2-D/\alpha>0\), and the estimates (4) apply to the optimal fluctuation in this case. The situation changes drastically for \(2-D/\alpha<0\): here the fluctuation with size \(a(E)\propto|E|^{-1/2}\) ceases to be optimal: the actual "non-Lifshitz" optimal fluctuation at \(2-D/\alpha<0\) has a microscopic spatial scale \(a_{0}\):
\[S_{\rm nonLif}(E)\approx S_{\rm nonLif}(0)+A|E|,\quad a(E)\sim a _{0}, \tag{6}\] \[S_{\rm nonLif}(0)\sim\varepsilon_{0}^{2},\quad A\sim 1/\varepsilon_{0} \tag{7}\]
where \(\varepsilon_{0}\) is some characteristic energy scale of the order of the electronic bandwidth. The linear expansion (6) is valid for \(|E|\ll\varepsilon_{0}\).
It is important that, in contrast with the long-range Lifshitz fluctuations, the short-range non-Lifshitz optimal fluctuations provide a valid description of the corresponding contribution to the density of states even at \(E\to 0\): the corresponding \(S_{\rm nonLif}(E)\) tends to a finite limit as \(E\to 0\). The latter observation was the origin of the idea about the existence of Lifshitz-like states not only for \(E<0\), but also for \(E>0\), at least in a certain range.
To reliably address the question of the possible existence of localized electronic states on the continuum background of the delocalized band states, one is forced to consider finite systems. As we will see, the structure of both delocalized and quasi-localized states essentially depends on the system size. Namely, we show that upon crossing the band edge \(E=0\) the true localized states that existed for \(E<0\) continuously transform into quasi-localized ones. They consist of localized parts (which are basically the same as for \(E<0\)) and delocalized parts whose amplitude vanishes continuously as \(E\) approaches \(0\) from above. The delocalized part is, however, extremely sensitive to the system size \(L\) and becomes increasingly important with increasing \(L\). As a result, the quasi-localized states behave practically as localized ones for \(L<L_{c}(E)\equiv E^{-\frac{D+1}{\alpha}+2}\) while becoming essentially delocalized for \(L>L_{c}(E)\).
## II The problem statement.
We consider a finite \(D\)-dimensional hypercubic lattice of \((2L)^{D}\) sites (\(L\gg 1\)) with periodic boundary conditions. The Hamiltonian is
\[\hat{H}=\hat{H}_{\rm hop}+\hat{H}_{\rm dis}, \tag{8}\]
where the random potential \(V_{\bf j}\) obeys the gaussian distribution:
\[{\cal P}\{V\}=\prod_{\bf j}P(V_{\bf j})\propto e^{-\frac{S\{V\}} {W^{2}}},\quad S\{V\}=\frac{1}{2}\sum_{\bf j}V_{\bf j}^{2},\] \[P(V)=\frac{1}{\sqrt{2\pi}W}\exp\left\{-\frac{V^{2}}{2W^{2}} \right\}. \tag{9}\]
In the momentum representation
\[\hat{H}=\sum_{n}\varepsilon({\bf k}_{\bf n})a_{\bf n}^{\dagger}a_{\bf n}+\sum _{{\bf nn}^{\prime}}a_{\bf n}^{\dagger}a_{\bf n^{\prime}}V_{{\bf n}^{\prime}-{ \bf n}}, \tag{10}\]
where the momenta \({\bf k}_{\bf n}\equiv\pi{\bf n}/L\), and the corresponding normalized eigenfunctions
\[\phi_{\bf n}({\bf j})=(2L)^{-D/2}\exp(i\pi({\bf j}\cdot{\bf n})/ L), \tag{11}\] \[{\bf n}\equiv(n_{1},n_{2},\ldots n_{D}),\quad n_{i}=-L,-L+1, \ldots,L, \tag{12}\]
\[a_{\bf j}=\sum_{\bf n}a_{\bf n}\phi_{\bf n}({\bf j}),\quad a_{\bf j}^{ \dagger}=\sum_{\bf n}a_{\bf n}^{\dagger}\phi_{\bf n}^{*}({\bf j}), \tag{13}\]
The kinetic energy in \(k\)-representation:
\[\varepsilon({\bf k})=\sum_{{\bf j}^{\prime}}\varepsilon_{{\bf j }-{\bf j}^{\prime}}\phi_{{\bf n}+{\bf k}}({\bf j})\phi_{\bf n}({\bf j}^{ \prime})=\varepsilon_{0}f({\bf k}), \tag{14}\] \[{\bf k}=\frac{\pi{\bf n}}{L},\quad{\bf k}\equiv(k_{1},k_{2}, \ldots k_{D})\quad-\pi<k_{i}<\pi, \tag{15}\]
where all lengths are measured in units of the lattice spacing. The characteristic energy \(\varepsilon_{0}\) is of the order of the electronic bandwidth; in what follows we measure all energies in units of \(\varepsilon_{0}\). The \(2\pi\)-periodic function \(f(k)\)
\[f({\bf k})=\left|4\sum_{\mu=1}^{D}\sin^{2}k_{\mu}/2\right|^{ \alpha/2}, \tag{16}\] \[f_{\rm max}=f(\pi,\pi,\ldots,\pi)=(4D)^{\alpha/2},\quad\alpha= \beta-D \tag{17}\]
behaves at \(k\ll 1\) as
\[f({\bf k})\approx|k|^{\alpha},\quad|k|\equiv\left(\sum_{\mu=1}^{D}k_{\mu}^{2} \right)^{1/2}. \tag{18}\]
Thus, all the energies are confined within the interval \(0<\varepsilon_{n}<W_{\rm band}\), where \(W_{\rm band}=\varepsilon_{0}f_{\rm max}\).
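For concreteness, the following minimal sketch constructs the model numerically for \(D=1\) with illustrative parameters: the dispersion (16) is evaluated on the grid \(k_{n}=\pi n/L\), Fourier-transformed to the real-space hoppings \(\varepsilon_{r}\) (which decay as \(r^{-\beta}\) with \(\beta=1+\alpha\)), and the Gaussian disorder (9) is added on the diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 0.35     # hopping exponent alpha = beta - D; here D = 1 and alpha < D/2
L = 256          # half-size: the chain has 2L sites with periodic boundary conditions
W = 0.1          # disorder strength in units of eps_0
N = 2 * L

n = np.arange(-L, L)
k = np.pi * n / L
eps_k = np.abs(4 * np.sin(k / 2)**2)**(alpha / 2)       # dispersion (16) for D = 1, eps_0 = 1

# real-space hopping amplitudes eps_r (one row of the circulant hopping matrix)
r = np.arange(N)
eps_r = np.real(np.exp(1j * np.outer(r, k)) @ eps_k) / N

idx = (np.arange(N)[:, None] - np.arange(N)[None, :]) % N
H = eps_r[idx]                                          # hopping part, eq. (1)
H[np.diag_indices(N)] += rng.normal(0.0, W, size=N)     # local disorder, eq. (2)

E = np.linalg.eigvalsh(H)
print(f"hopping decay: eps_r at r = 10, 100 -> {eps_r[10]:.2e}, {eps_r[100]:.2e}")
print(f"clean band: [0, {4**(alpha/2):.3f}];  disordered spectrum: [{E.min():.3f}, {E.max():.3f}]")
```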
## III Low energy properties of an ideal system
For small \(E\ll 1\) the spectrum \(\varepsilon({\bf k})\) is isotropic and the corresponding wave-functions can be characterized by the angular momenta. In our problem only the fully symmetric solutions are relevant, because the low-symmetric ones vanish at \(r\to 0\) and hardly feel the strongly localized potential \(V_{\bf j}\). The normalized fully-symmetric eigenfunctions are
\[\psi_{n}(r)=\sqrt{\frac{2k_{n}^{D-1}}{\sigma_{D}L}}f(k_{n}r),\;\int_{0}^{L} \sigma_{D}r^{D-1}dr|\psi_{n}(r)|^{2}=1,\]
\[f(x)=\sqrt{\frac{\pi}{2}}\frac{J_{D/2-1}(x)}{x^{D/2-1}},\quad\sigma_{D}=\frac{ D\pi^{D/2}}{\Gamma(D/2+1)}, \tag{19}\]
where \(r\equiv|{\bf r}|\), \(k\equiv|{\bf k}|\), \(k_{n}=\pi n/L\), \(\sigma_{D}\) is the surface area of the \(D\)-dimensional sphere with unit radius. The asymptotics of \(f(x)\) are
\[f(x\gg 1)\approx x^{-\frac{D-1}{2}}\cos(x+\varphi_{D}),\quad \varphi_{D}=\frac{\pi}{4}(1-D), \tag{20}\] \[f(0)=\frac{\sqrt{\pi/2}}{2^{D/2-1}\Gamma(D/2)}. \tag{21}\]
For general \(D\), the low energy (\(E\ll 1\)) density of states is
\[\nu_{0}^{(D)}(E)=\frac{\sigma_{D}}{(2\pi)^{D}}\frac{k^{D-1}dk}{dE} =\frac{\sigma_{D}/D}{(2\pi)^{D}}\frac{d(k^{D})}{dE}=\] \[\frac{\sigma_{D}/D}{(2\pi)^{D}}\frac{d(E^{D/\alpha})}{dE}=\frac{ \sigma_{D}}{(2\pi)^{D}\alpha}\frac{K^{D}}{E}. \tag{22}\]
We have introduced characteristic momentum
\[K=E^{1/\alpha}, \tag{23}\]
which, alongside the short-range scale \(K_{0}\sim\pi\), is an important momentum scale in our problem. Throughout this paper we assume that
\[1/L\ll K\ll 1 \tag{24}\]
The general level spacing, which takes into account all the states irrespective of their symmetry, is
\[\delta_{D}(E)=\left(\nu_{0}^{(D)}(E)L^{D}\right)^{-1}=\frac{(2\pi)^{D}\, \alpha}{\sigma_{D}}\frac{E}{(KL)^{D}} \tag{25}\]
Note that the dimensions of the density of states and of the level spacing are \([\nu]=(1/\mbox{volume})\times(1/\mbox{energy})\) and \([\delta(E)]=\mbox{energy}\).
In what follows we will also need the density of states and the level spacing with respect only to fully symmetric states. They coincide with \(\delta_{1}(E)\) and \(\nu_{1}(E)\), no matter what the real \(D\) is:
\[\nu_{1}(E)\approx\frac{1}{\pi\alpha}\frac{K}{E},\quad\delta_{1}(E)=[L\nu_{1}( E)]^{-1}=\pi\alpha\frac{E}{KL}. \tag{26}\]
Note that for small \(E\ll 1\) the density of states becomes very small and, therefore, the level spacing becomes relatively large.
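These small-\(E\) estimates are easy to check numerically; the sketch below (a \(D=1\) example with illustrative \(\alpha\) and \(L\)) compares the spacing of consecutive fully symmetric levels of the clean small-\(k\) dispersion (18) with the analytic expression (26).

```python
import numpy as np

alpha, L = 0.35, 2000
m = np.arange(1, L + 1)
eps = (np.pi * m / L)**alpha          # fully symmetric levels in the small-k regime (18)

for E in (0.2, 0.3, 0.5):
    i = np.searchsorted(eps, E)
    spacing = eps[i] - eps[i - 1]     # numerical level spacing near E
    K = E**(1.0 / alpha)
    print(f"E = {E:3.1f}  (KL = {K * L:6.1f}):  numerical spacing {spacing:.3e}   "
          f"analytic pi*alpha*E/(K*L) = {np.pi * alpha * E / (K * L):.3e}")
```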
## IV The localized states and the optimal fluctuations
For small disorder \(W\ll 1\) there is some (exponentially small) number of localized states with \(E<0\), associated with exponentially rare local fluctuations of the random potential. Let us look at the contribution of these localized states to the density of states. Following the standard procedure [27; 28] of finding the optimal fluctuation \(V_{\bf j}\) and the corresponding localized wave-function \(\Psi_{\bf j}\), we should minimize the functional
\[\tilde{S}(\{\Psi,V\},\lambda,\eta)=\frac{1}{2}\sum_{\bf j}V_{\bf j}^{2}-\lambda\left\{\sum_{\bf jj^{\prime}}\Psi_{\bf j}^{*}\varepsilon_{\bf j-j^{\prime}}\Psi_{\bf j^{\prime}}+\sum_{\bf j}V_{\bf j}|\Psi_{\bf j}|^{2}-E\right\}-\eta\left\{\sum_{\bf j}|\Psi_{\bf j}|^{2}-1\right\} \tag{27}\]
with respect to two functions \(\Psi_{\bf j}\), \(V_{\bf j}\) and two additional parameters \(\lambda\) and \(\eta\). Variation of (27) with respect to \(V_{\bf j}\) allows one to express \(V_{\bf j}\) through \(\Psi_{\bf j}\) and \(\lambda\):
\[V_{\bf j}=\lambda|\Psi_{\bf j}|^{2} \tag{28}\]
and we are left with the functional
\[-\frac{1}{\lambda}\tilde{S}(\{\Psi\},\lambda)=\sum_{\bf jj^{\prime}}\Psi_{\bf j}^{*}\varepsilon_{\bf j-j^{\prime}}\Psi_{\bf j^{\prime}}+\frac{\lambda}{2}\sum_{\bf j}|\Psi_{\bf j}|^{4}-E\sum_{\bf j}|\Psi_{\bf j}|^{2} \tag{29}\]
subject to minimization with respect to \(\Psi_{\bf j}\) with the normalization constraint
\[\sum_{\bf j}|\Psi_{\bf j}|^{2}=1 \tag{30}\]
Thus, we arrive at the nonlinear Schrodinger equation
\[\sum_{\bf j^{\prime}}\varepsilon_{\bf j-j^{\prime}}\Psi_{\bf j^{\prime}}+\{ \lambda|\Psi_{\bf j}|^{2}-E\}\Psi_{\bf j}=0 \tag{31}\]
The function \(\Psi_{\bf j}\) should be localized, i.e., it should vanish for large \(|{\bf j}|\). The implications of this requirement will be discussed in Section VI.
Finally, we have to ensure that the normalization condition (30) is fulfilled. To satisfy this condition we have to choose the only free parameter at our disposal - \(\lambda\).
The explicit form of the wave function \(\Psi_{\bf j}^{\rm(opt)}\) and optimal parameter \(\lambda_{\rm opt}\) can only be found by means of numerical solution of the essentially discrete nonlinear Schrodinger equation (31). The final expression for the optimal exponent in (9) reads
\[\frac{S_{\rm opt}}{W^{2}}=\frac{\sum_{\bf j}\left(V_{\bf j}^{\rm(opt)}\right)^{2 }}{2W^{2}}=\frac{\lambda_{\rm opt}^{2}}{2W^{2}}\sum_{\bf j}\left|\Psi_{\bf j}^{ \rm(opt)}\right|^{4} \tag{32}\]
We are interested in the behavior of \(S_{\rm opt}(E)\) for small energies \(|E|\ll 1\), so that \(S_{\rm opt}(E)\) can be expanded in \(E\) up to linear terms:
\[S_{\rm opt}(E)\approx S_{\rm opt}(0)+\lambda_{\rm opt}(0)E. \tag{33}\]
Note that \(S_{\rm opt}(0)\) and \(\lambda_{\rm opt}(0)\) are some numerical constants of order unity, depending on \(D\), \(\alpha\) and on the type of lattice.
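A minimal sketch of this numerical procedure for \(D=1\) is shown below (illustrative parameters; simple mixing, with no guarantee of convergence for all \(\alpha\)): for a given \(\lambda\) the nonlinear equation (31) is iterated to self-consistency, \(\lambda\) is then adjusted so that the ground-state energy hits the target \(E\), and \(S_{\rm opt}\) follows from (32).

```python
import numpy as np
from scipy.optimize import brentq

alpha, L = 0.35, 100
N = 2 * L
n = np.arange(-L, L)
k = np.pi * n / L
eps_k = np.abs(4 * np.sin(k / 2)**2)**(alpha / 2)       # dispersion (16), D = 1, eps_0 = 1
r = np.arange(N)
eps_r = np.real(np.exp(1j * np.outer(r, k)) @ eps_k) / N
T = eps_r[(np.arange(N)[:, None] - np.arange(N)[None, :]) % N]   # hopping matrix

def ground_state(lam, n_iter=100, mix=0.5):
    """Self-consistent ground state of the nonlinear equation (31) for a given lambda."""
    psi = np.zeros(N); psi[0] = 1.0                     # start fully localized on one site
    for _ in range(n_iter):
        w, v = np.linalg.eigh(T + np.diag(lam * psi**2))
        new = v[:, 0]
        new *= np.sign(new[np.argmax(np.abs(new))])     # fix the arbitrary overall sign
        psi = mix * new + (1.0 - mix) * psi
        psi /= np.linalg.norm(psi)
    return w[0], psi

E_target = -0.05                                        # energy in the gap, units of eps_0
lam = brentq(lambda l: ground_state(l)[0] - E_target, -3.0, -1e-3, xtol=1e-4)
E0, psi = ground_state(lam)
S_opt = 0.5 * lam**2 * np.sum(psi**4)                   # eq. (32), with W^2 divided out

print(f"lambda_opt = {lam:.4f}   E0 = {E0:.4f}   S_opt = {S_opt:.4f}")
print("optimal potential V_j = lambda*|psi_j|^2 on the first sites:",
      np.round(lam * psi[:4]**2, 4))
```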
## V The local character of the optimal fluctuation
The equations (29), (31) are perfectly standard - they do not differ from what we have for the conventional Lifshitz tails arising in the case \(\alpha>D/2\). Then why do we expect an anomalous behavior of the tails in our case \(\alpha<D/2\)?
Let us model an optimal fluctuation as a square potential well with depth \(U\) and width \(a\), so that we have to minimize the function of two variables
\[S(U,a)\sim U^{2}a^{D} \tag{34}\]
To have a level with energy \(E\) this well should obey the following constraints
1. The well should be deeper than \(E\): \((U>|E|)\)
2. The well should be wider than the wave-length: \(a>Q^{-1}=U^{-1/\alpha}\).
It seems plausible that the narrowest possible well is a good choice. Then, assuming \(a\sim a_{\rm min}=Q^{-1}\) we have to minimize the function
\[S(U)\sim U^{2-D/\alpha} \tag{35}\]
If \(\alpha>D/2\) (as it is for conventional Lifshitz tails with \(\alpha=2\) and \(D<4\)) then \(S\) decreases with decreasing \(U\), so that the optimal fluctuation corresponds to the minimal possible \(U_{\rm min}\sim|E|\), which leads to the standard Lifshitz result:
\[S_{\rm Lif}^{\rm(opt)}\propto|E|^{2-D/\alpha} \tag{36}\]
In our case \(\alpha<D/2\), so \(S\) decreases with increasing \(U\), and the minimum of \(S\) corresponds to the deepest possible fluctuation. Thus, within the continual approximation, the optimal fluctuation would be infinitely deep and infinitely narrow. In reality, however, the fluctuation should contain at least one site, so the minimum is attained at \(a\sim 1\), \(U\sim\max\{|E|,\varepsilon_{0}\}\). As a result, we obtain
\[S_{\rm nonLif}^{\rm(opt)}\sim 1 \tag{37}\]
### The Flat Band Approximation (FBA)
For very small \(\alpha\ll 1\) the electrons are almost dispersionless in the main part of the Brillouin Zone
\[\varepsilon({\bf k})\approx W_{\rm band},\quad E_{\rm loc}^{(0)}\approx W_{\rm band}. \tag{38}\]
The dispersion is only present in the domain of exponentially small \(k\sim e^{-1/\alpha}\). In the leading approximation both the optimal potential
\[V_{\bf j}^{\rm(opt)}=(-W_{\rm band}+E)\delta_{\bf j,0}, \tag{39}\]
and the corresponding wave-function
\[\Psi_{\bf j}^{\rm(opt)}=\delta_{\bf j,0} \tag{40}\]
are perfectly localized at the same site.
\[S^{\rm(opt)}(E)=\frac{1}{2}(W_{\rm band}-E)^{2} \tag{41}\]
### The Single Site Approximation (SSA)
If \(\alpha\) is not particularly small, the FBA does not work: the wave function is not localized on one site, so that formula (40) is not valid. However, as we conclude from numerics (see Section VII), the potential \(V_{\bf j}^{\rm(opt)}\) remains extremely short-range even for \(\alpha\) away from zero: the potential remains localized at a single site with an accuracy better than 1%! Thus, it is very interesting to explore the single-site approximation (SSA) that postulates
\[V_{\bf j}^{\rm(opt)}=V_{0}(E)\delta_{\bf j,0},\quad S^{\rm(opt)}(E)=V_{0}^{2} (E)/2, \tag{42}\]
where the dependence \(V_{0}(E)\) is yet to be found. We stress again that, strictly speaking, formula (42) is incorrect. Namely, it is inconsistent with the requirement (28), which relates the shape of the optimal potential to that of the optimal wave-function. Nevertheless, as demonstrated in Section VII, SSA works extremely well, as long as we are interested in the "integral" characteristics governed by the core of the fluctuation. What is also important, SSA allows for an analytical solution of
the arising quantum-mechanical problem. In particular, in [25] it was shown that, within SSA
\[V_{0}(E)=\left\{\fint_{BZ}\frac{d^{D}{\bf k}}{(2\pi)^{D}}\frac{1}{E-\varepsilon_{ \bf k}}\right\}^{-1} \tag{43}\]
However, we choose to postpone using the SSA, because there are many important and nice results that can be derived without appealing to this approximation.
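For reference, eq. (43) is trivial to evaluate below the band edge, where the integrand has no pole (for \(E>0\) the integral must be understood as a principal value, which the sketch below does not attempt). A \(D=1\) example with an illustrative \(\alpha\):

```python
import numpy as np

alpha = 0.35
k = np.linspace(-np.pi, np.pi, 400001)                 # dense BZ grid, D = 1
eps = np.abs(4 * np.sin(k / 2)**2)**(alpha / 2)

def V0(E):
    """Single-site optimal potential of eq. (43) for E below the band bottom (E < 0)."""
    return 1.0 / (np.trapz(1.0 / (E - eps), k) / (2 * np.pi))

for E in (-0.2, -0.1, -0.05, -0.02, -0.01, -0.005):
    v = V0(E)
    print(f"E = {E:+.3f}   V0(E) = {v:+.4f}   S_SSA = V0^2/2 = {0.5 * v**2:.4f}")
```

In this sketch \(V_{0}(E)\) tends to a finite value as \(E\to 0^{-}\), so \(S_{\rm SSA}=V_{0}^{2}/2\) is approximately linear in \(E\), in line with the non-Lifshitz behavior (6).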
## VI Localized vs delocalized wave-functions: general consideration
As long as we consider systems of finite size, the optimal fluctuation method is perfectly applicable not only to genuine localized states with \(E<0\), but to all the states, including those with \(E>0\).
In Section IV we studied only the electronic ground state in the presence of the optimal fluctuation; here we will discuss the entire spectrum of the states. We will see that, besides the standard fully delocalized states with positive energies (plane waves), there are many hybrid states - partly localized and partly delocalized.
Suppose that we have found the form of optimal fluctuation \(V_{\bf j}^{(\rm opt)}\). To find the entire set of the states \(\psi_{\bf j}^{(m)}\) and the corresponding energies \(E_{m}\), we have to solve the linear Schrodinger equation
\[\sum_{{\bf j}^{\prime}}\varepsilon_{{\bf j}-{\bf j}^{\prime}}\psi_{{\bf j}^{ \prime}}^{(m)}+\{V_{\bf j}^{(\rm opt)}-E_{m}\}\psi_{\bf j}^{(m)}=0, \tag{44}\]
to apply periodic boundary conditions to the wave-functions \(\psi_{\bf j}^{(m)}\), and obtain a discrete set of eigenenergies \(E_{m}\) and the corresponding eigenfunctions \(\psi_{m}({\bf j})\). Clearly, \(\Psi_{\bf j}^{(\rm opt)}\) will be one of these states (the ground state with energy \(E_{0}\)). A formal solution of (44) may be written as
\[\psi_{\bf j}=\sum_{{\bf j}^{\prime}}g_{E}({\bf j}-{\bf j}^{\prime})\psi_{{\bf j }^{\prime}}V_{{\bf j}^{\prime}}^{(\rm opt)}, \tag{45}\]
where
\[g_{E}({\bf j},{\bf j}^{\prime})=g_{E}({\bf r})=\sum_{\bf n}\frac{\exp[i({\bf k }_{\bf n}\cdot{\bf r})]}{E-\varepsilon_{\bf n}},\quad{\bf r}\equiv{\bf j}-{ \bf j}^{\prime}. \tag{46}\]
is the Green function of the free Schrodinger equation. Note that there is no free term in the solution (45) since we have assumed that the energy \(E\) is out of resonance with all the eigenfrequencies of the free Schrodinger equation: \(E\neq\varepsilon_{n}\) for all \(n\). Writing \(\psi_{n}({\bf j})\) in terms of the Green function (46), which uses the basis (12), ensures that the boundary conditions for the wave function are fulfilled automatically.
The sum over \({\bf j}^{\prime}\) in (45) is dominated by small \(|{\bf j}^{\prime}|\sim 1\), because we have assumed that \(V_{{\bf j}^{\prime}}^{(\rm opt)}\) is localized: it rapidly decays with \(|{\bf j}^{\prime}|\). Therefore, for \({\bf j}\gg 1\) we get
\[\psi_{\bf j}=A(E)g_{E}({\bf j}) \tag{47}\]
where \(A(E)\) is a certain \({\bf j}\)-independent coefficient. Thus, the asymptotic form of the wave function feels the presence of the optimal fluctuation only through the value of the energy \(E\). There are two different cases that we will discuss: negative energies \(E<0\) and positive energies \(E>0\).
### Negative energies: localized wave-function
When the energy of the state is negative, the Green function can be approximated by the integral instead of the discrete sum
\[g_{E}^{(\rm loc)}=\int_{\rm BZ}\frac{d^{D}{\bf k}}{(2\pi)^{D}}\frac{e^{i({\bf k }\cdot{\bf r})}}{E-\varepsilon({\bf k})}, \tag{48}\]
since for \(E<0\) the integrand is regular over the entire Brillouin zone. The large-\(r\) asymptotic behavior of \(g_{E}^{(\rm loc)}({\bf r})\) can be easily evaluated. At the smallest distances \(|{\bf r}|\lesssim r_{0}\sim 1\) the components with high momenta \(k\sim\pi\) give the principal contribution to (48), \(E\) in the denominator can be neglected compared to \(\varepsilon({\bf k})\), and we get \(g\sim 1\) in this range of distances. However, \(E\) in the denominator can still be neglected in a wider range, namely, for \(r\lesssim r_{1}\), where
\[r_{1}(E)\sim 1/K\sim E^{-1/\alpha}\gg 1 \tag{49}\]
In this range of distances (\(r_{0}\ll r\ll r_{1}\)) we have:
\[g_{E}^{(\rm loc)}({\bf r})\approx-\int_{BZ}\frac{d^{D}{\bf k}}{( 2\pi)^{D}}\frac{e^{i({\bf k}\cdot{\bf r})}}{\varepsilon({\bf k})}\approx\] \[\approx-\frac{1}{r^{D-\alpha}}\int\frac{d^{D}{\bf q}}{(2\pi)^{D}} \frac{e^{i({\bf q}\cdot{\bf m})}}{\varepsilon(q)}\propto\frac{1}{r^{D-\alpha}}, \tag{50}\]
where we have introduced \({\bf m}\equiv{\bf r}/r\) and \({\bf q}\equiv{\bf k}r\).
The main contribution to the integral (50) here comes from \(q\sim 1\), or, from relatively small \(k\sim 1/r\).
For \(r\gg r_{1}(E)\) we can expand the integrand in \(\varepsilon(k)\) and get
\[g_{E}^{(\rm loc)}({\bf r})\approx\frac{1}{E^{2}}\int_{\rm BZ} \frac{d^{D}{\bf k}}{(2\pi)^{D}}e^{i({\bf k}\cdot{\bf r})}\varepsilon(k)=\] \[=\frac{1}{E^{2}r^{D+\alpha}}\int\frac{d^{D}{\bf q}}{(2\pi)^{D}}e^{ i({\bf q}\cdot{\bf m})}\varepsilon(q)\propto\frac{1}{r^{D+\alpha}}, \tag{51}\]
It is easy to see that the results (50) and (51) match at \(r\sim r_{1}\).
Thus
\[g_{E}^{(\rm loc)}({\bf r})\sim\begin{cases}\hskip 28.452756pt1,&r\lesssim r_{0}, \\ \hskip 28.452756ptr^{\alpha-D},&r_{0}\ll r\ll r_{1},\\ r_{1}^{2\alpha}(E)r^{-\alpha-D},&r\gg r_{1},\end{cases} \tag{52}\]
and
\[\psi_{\rm opt}^{(\rm loc)}({\bf r})=\frac{1}{c}g_{E}^{(\rm loc)}({\bf r}) \tag{53}\]
where \(c\sim 1\) is the normalization constant. Note that the main contribution to the normalization integral (and, therefore, to \(c\)) comes from the range \(r\sim 1\), so that \(c\) is almost \(E\)-independent. It should be mentioned that the asymptotic formula (47), and hence formula (53), does not apply at \(r\sim 1\). So, to evaluate \(c\), one, in principle, has to use an explicit numerical solution of the initial discrete problem. In general, the localized part is not strongly sensitive to \(E\), so, for \(E\ll 1\),
\[\Psi_{\rm opt}^{\rm(loc)}({\bf r})\approx\frac{1}{c}g_{E=0}^{\rm(loc)}({\bf r}) \tag{54}\]
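The power-law regimes of (52) can be checked directly. The sketch below evaluates the \(D=1\) Green function (48) as a dense \(k\)-sum on a very long chain (illustrative \(\alpha\) and \(E\)) and compares it with the two scalings of (52); only the \(r\)-dependence is meant to match, the \(O(1)\) prefactors being omitted.

```python
import numpy as np

alpha, E = 0.35, -0.1
Lbig = 200000                                  # very long chain: the k-sum mimics (48)
n = np.arange(-Lbig, Lbig)
k = np.pi * n / Lbig
eps = np.abs(4 * np.sin(k / 2)**2)**(alpha / 2)

def g_loc(r):
    """Green function (48) of the clean chain at E < 0."""
    return np.sum(np.cos(k * r) / (E - eps)) / (2 * Lbig)

r1 = np.abs(E)**(-1.0 / alpha)                 # crossover scale (49)
print(f"crossover scale r1 = {r1:.0f} lattice spacings")
for r in (1, 10, 100, 2000, 10000):
    print(f"r = {r:6d}   |g_E(r)| = {abs(g_loc(r)):.3e}   "
          f"r^(alpha-1) = {r**(alpha - 1):.3e}   "
          f"E^-2 * r^(-alpha-1) = {E**-2 * r**(-alpha - 1):.3e}")
```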
### Positive energies: quasi-localized wave function
The vast majority of the eigenstates \(\psi_{\bf j}^{(m)}\) are not much affected by the presence of the optimal fluctuation, so that the corresponding eigenfunctions and eigenenergies are described by (12) and (15)
\[\psi_{\bf j}^{(m)}\approx\phi_{\bf n}({\bf j}),\quad E_{n}\approx\varepsilon_{ \bf n}, \tag{55}\]
These states are perfectly delocalized. There is, however, a subset of states much more sensitive to the potential (28) - the states fully symmetric with respect to rotations around the center of the optimal fluctuation. Note that the local level spacing within this subset is \(\delta_{1}(E)\propto L^{-1}\) (see (26)), which, for \(D>1\), is much larger than the total level spacing \(\delta_{D}\).
Still, as we will see soon, even these fully symmetric states are strongly delocalized, except for a bunch of \(\sim M(E_{0})\) states in a narrow interval of energies \(|E-E_{0}|\lesssim\Delta(E_{0})\) around \(E_{0}\), where the states can be effectively localized.
Under which conditions is a wave-function with positive energy effectively localized? To answer this question let us introduce an important characteristic
\[\epsilon(E)\equiv\frac{E-\varepsilon_{\rm mid}}{\delta_{1}(E)},\quad \varepsilon_{\rm mid}(E)\equiv\frac{\varepsilon_{\rm right}+\varepsilon_{\rm left}} {2} \tag{56}\]
where \(\varepsilon_{\rm left}\) is the closest neighbour of \(E\) from the left, and \(\varepsilon_{\rm right}\) - from the right in the string of eigenenergies \(\varepsilon_{n}\), corresponding to free fully symmetric states (see Fig. 1). The local level spacing is \(\delta_{1}(E)=\varepsilon_{\rm right}-\varepsilon_{\rm left}\).
Suppose that the energy \(E\) is placed in the middle of the interval \((\varepsilon_{\rm left},\varepsilon_{\rm right})\), or, in other words, \(E=E_{\rm mid}(E)\) and \(\epsilon(E)=0\). Then, obviously, for \(r\ll r_{1}\) the terms in the sum (46) with \(\varepsilon_{n}<E\) and with \(\varepsilon_{n}>E\) will cancel each other in pairs exactly in the way prescribed by the principal value integration. Hence, for \(r\ll r_{1}\) and \(\epsilon(E)=0\) the Green function is given by the following integral
\[g_{E}^{(\epsilon=0)}(r\ll r_{1})=\fint_{\rm BZ}\frac{d^{D}{\bf k}}{(2\pi)^{D}}\frac{e^{i({\bf k}\cdot{\bf r})}}{E-\varepsilon({\bf k})}, \tag{57}\]
which is evaluated in exactly the same manner as before
\[g_{E}^{(\epsilon=0)}({\bf r})\sim\begin{cases}\phantom{-}1,&r\lesssim r_{0}, \\ r^{\alpha-D},&r_{0}\ll r\ll r_{1}\end{cases} \tag{58}\]
Evaluation of the very far tails \(r\gg r_{1}\) is not so straightforward. Indeed, since \(Kr\gg 1\) one needs to account for the discreteness of the system even when \(\epsilon(E)=0\). Explicitly, the Green function reads
\[g_{E}(|{\bf r}|\gg r_{1})\propto\sum_{\bf n}\frac{e^{i{\bf k}_{\bf n}{\bf r}} }{E_{\rm mid}-\varepsilon_{\bf n}}. \tag{59}\]
The main contribution to this sum comes from \(|{\bf k}_{\bf n}|\approx K\), hence, we expand \(\varepsilon_{\bf k_{\bf n}}\) in the vicinity of \(E\). Let us introduce integer \(l\) in the following way
\[n=n_{\rm left}(E_{\rm mid})+l, k_{n}=K(E_{\rm mid})+\frac{\pi}{L}(l-1/2), \tag{60}\] \[\varepsilon_{n}=E_{\rm mid}+(l-1/2)\delta_{1}(E). \tag{61}\]
Since the spectrum is spherically symmetric, we need the asymptotic of the spherical wave
\[f_{n}(r)\approx x^{-(D-1)/2}\cos(x+\varphi_{D}), \tag{62}\] \[x\equiv(K(E_{m})r-(\pi r/L)[\epsilon-(l-1/2)])\gg 1. \tag{63}\]
Therefore, relation (59) reads
\[g_{E}(|{\bf r}|\gg r_{1})\propto{\rm Re}\left[-e^{iKr-i\frac{\pi r}{2L}}(Kr)^{-(D-1)/2}\sum_{l=-\infty}^{\infty}\frac{e^{i\frac{\pi r}{L}l}}{l-\frac{1}{2}}\right], \tag{64}\]
since it converges at small \(l\)'s. Now, we use
\[\sum_{l=-\infty}^{\infty}\frac{e^{i\pi lz}}{l-1/2}=-i\pi e^{i\frac{\pi}{2}z}, \tag{65}\]
and obtain the final expression for the wave function with positive energy in the middle of the interval \(E=E_{\rm mid}\)
\[\Psi_{E}({\bf r})\sim\begin{cases}\phantom{-}1,&r\lesssim r_{0},\\ \phantom{-}r^{\alpha-D},&r_{0}\ll r\ll r_{1},\\ \phantom{-}r_{1}^{\alpha-D}\frac{\sin{(Kr+\varphi_{D})}}{(Kr)^{\frac{D-1}{2}}},&r\gg r_{1}.\end{cases} \tag{66}\]
The oscillating tail at \(r\gg r_{1}\) prevents \(\Psi_{E}({\bf r})\) from being truly localized: even when \(\epsilon=0\) the wave function has delocalized tails. We call this state quasi-localized.
### Effective localization condition
We consider finite systems of size \((2L)^{D}\), hence, it is possible for the quasi-localized state to be effectively localized in the vicinity of the optimal fluctuation. Indeed, one can compute the norm of \(\Psi_{E}(\mathbf{r})\)
\[\int d^{D}\mathbf{r}|\Psi(\mathbf{r})|^{2}\sim\left(1+r_{1}^{2(\alpha-D)}K^{-D} \int_{1}^{Kr}dy\sin^{2}y\right)\sim\left(1+E^{-\frac{2}{\alpha}(\alpha-D)- \frac{D}{\alpha}+\frac{1}{\alpha}}L\right)\sim\left(1+E^{-2+\frac{D+1}{\alpha} }L\right). \tag{67}\]
The contribution from the oscillating tail vanishes when the energy is sufficiently low. Let us introduce \(L_{c}(E)\) in the following way
\[L_{c}\equiv E^{-\frac{D+1-2\alpha}{\alpha}}. \tag{68}\]
Tail contribution vanishes if
\[L\ll L_{c}. \tag{69}\]
Our calculations are valid only if we consider a highly excited state, i.e. \(KL\gg 1\), as given by condition (24). Conditions (24) and (69) can be satisfied simultaneously in very large systems only if \(\alpha<D/2\).
The quasi-localized states that we have just introduced exist due to the presence of strong local fluctuations which correspond to the saddle-point solution. Since typical fluctuations are always present in real systems, one needs to take them into account. Let us demonstrate for the simplest case \(D=1\) that the quasi-localized states are robust to these fluctuations. As we will show later (see Section IX), the level spacing is not very sensitive to the presence of the potential fluctuations; hence, we can assume it to coincide with the one in the clean system. Using that, we can easily find the energy at which the level spacing \(\delta_{D}(E)\) is of the order of the characteristic scale of the matrix element of the random potential \(\sqrt{\langle V^{2}\rangle}\sim WL^{-1/2}\):
\[E_{c}^{\prime}\sim W^{-\frac{\alpha}{1-\alpha}}L^{-\frac{\alpha}{2(1-\alpha)}}. \tag{70}\]
Therefore, states with energies \(E\ll E_{c}^{\prime}\) remain almost unperturbed by typical fluctuations. Some of them are "extended" over the whole system: the localization length \(l_{E}\sim W^{-2}E^{2-\frac{2}{\alpha}}\)[25] for these energies is much larger than the system size; and some of them are quasi-localized in the sense described above, since \(E_{c}\ll E_{c}^{\prime}\), where
\[E_{c}\sim L^{-\frac{\alpha}{D+1-2\alpha}}. \tag{71}\]
## VII Numerical study of the optimal fluctuation
We perform a numerical study of the optimal fluctuation, dropping the contribution from the delocalized tails. Indeed, since any state with positive energy is either extended or quasi-localized, one cannot fine-tune the energy to remove the oscillating non-decaying contribution.
Our results support the strongly localized character of the core of the optimal fluctuation. At small \(|\mathbf{j}|\) the potential \(V_{\mathbf{j}}^{(\mathrm{opt})}\) rapidly decays with \(|\mathbf{j}|\). For example, in the \(1D\) case, \(V_{\pm 1}^{(\mathrm{opt})}/V_{0}^{(\mathrm{opt})}\) varies from \(0.01\) at \(\alpha=0.15\) to \(0.04\) at \(\alpha=0.35\) (see Fig. 4).
At the same time at \(|\mathbf{j}|\gg 1\) the decay of \(V_{\mathbf{j}}^{(\mathrm{opt})}\) becomes rather slow and is well described by a power law:
\[V_{\mathbf{r}}^{(\mathrm{opt})}\propto|\Psi_{\mathbf{r}}^{(\mathrm{opt})}|^{2 }\propto|\mathbf{r}|^{2\alpha-2D} \tag{72}\]
Figure 2: Upper string: a locally equidistant spectrum of levels \(E_{n}\equiv\varepsilon_{n}\) in the absence of the fluctuation. Lower string: a spectrum in the presence of the fluctuation with (a) \(L\ll L_{c}\), (b) \(L\gg L_{c}\). The levels \(E_{n}\) within the localization energy domain are shifted with respect to \(\varepsilon_{n}\). In general, they contain both localized and delocalized components.
which is perfectly consistent with the exact relations (52) and (28). Although the validity of the latter relation attests to the accuracy of our numerics, it should be admitted that for the vast majority of the questions addressed in this study the tails of the potential \(V_{\mathbf{j}}^{(\mathrm{opt})}\) are irrelevant.
To test the accuracy of the SSP we have found \(S_{\mathrm{opt}}^{(P)}\) for a series of truncated models where all \(V_{\mathbf{j}}^{(\mathrm{opt})}\) were set to zero for \(|\mathbf{j}|>P\), while the remaining \(2DP+1\) potentials were chosen to optimize \(S\). In particular, due to (28) we set
\[V_{|\mathbf{j}|\leq P}^{(\mathrm{opt})}=\lambda|\psi_{\mathbf{j}}|^{2},\quad V _{|\mathbf{j}|>P}^{(\mathrm{opt})}=0. \tag{73}\]
After that, we add the normalization condition \(\sum_{j}|\psi_{j}|^{2}=1\) and solve the system of \(P+2\) equations (instead of \(2DP+1\), since the localized state possesses discrete rotational symmetry). During the calculations, \(g_{E}^{(\mathrm{loc})}\) is used instead of \(g_{E}\), since we are interested in the localized solution. The results of exact optimization are illustrated in Fig. 3 and Fig. 5.
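For illustration only, the truncated problem can be cast as a root-finding task for the \(P+2\) unknowns \((\psi_{0},\dots,\psi_{P},\lambda)\). The sketch below is schematic: it assumes the localized-state condition \(\psi_{j}=\sum_{j^{\prime}}g_{E}^{(\mathrm{loc})}(j-j^{\prime})V_{j^{\prime}}\psi_{j^{\prime}}\) in \(D=1\) together with (28) and the normalization constraint, and it uses a toy stand-in for \(g_{E}^{(\mathrm{loc})}\) rather than the actual Green function of the model.

```python
import numpy as np
from scipy.optimize import fsolve

alpha, P = 0.25, 4

def g_loc(r):
    # toy stand-in for g_E^{(loc)}(r); it only mimics the short-range core
    # plus slow power-law tail of Eq. (58), NOT the model's Green function
    return -1.0 / (1.0 + abs(r) ** (1.0 - alpha))

def equations(z):
    psi, lam = z[:P + 1], z[P + 1]              # psi_0..psi_P and lambda
    full = np.concatenate((psi[:0:-1], psi))    # symmetric profile psi_{-P..P}
    sites = np.arange(-P, P + 1)
    V = lam * full ** 2                          # Eq. (28): V_j = lambda |psi_j|^2
    res = [psi[j] - sum(g_loc(j - s) * V[k] * full[k]
                        for k, s in enumerate(sites)) for j in range(P + 1)]
    res.append(np.sum(full ** 2) - 1.0)          # normalization: the (P+2)-th equation
    return res

guess = np.concatenate((np.exp(-np.arange(P + 1)), [1.0]))
sol = fsolve(equations, guess)
print("psi =", sol[:P + 1], " lambda =", sol[P + 1])
```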
## VIII Away from \(E_{0}\): partly localized wave-functions
In Section VI we have found the condition for the state of energy \(E=E_{\mathrm{mid}}\) to be effectively localized and have studied the properties of the quasi-localized wave functions in detail. Now we discuss the properties of the states with energies \(E_{m}\neq E_{0}\). Slightly away from \(E_{0}\) we expect the \(\epsilon\neq 0\) part to be small:
\[\psi_{E_{m}}^{(\mathrm{m,del})}(\mathbf{r})\propto g_{E_{m}}^{(\epsilon\neq 0 )}(\mathbf{r})\propto[E_{m}-E_{\mathrm{mid}}(E_{m})]. \tag{74}\]
and, since \(E_{m}-E_{\mathrm{mid}}(E_{m})=0\) at \(E_{m}=E_{0}\), at small \(E_{m}-E_{0}\) we will have \(E_{m}-E_{\mathrm{mid}}(E_{m})\propto E_{m}-E_{0}\).
### The delocalized part of the wave-function
Let us again introduce integer \(l\), such that
\[n=n_{\mathrm{left}}(E_{m})+l,\quad\varepsilon_{n}=E_{\mathrm{mid }}(E_{m})+(l-1/2)\delta_{1},\quad k_{n}=K(E_{m})-\frac{\pi}{L}[\epsilon-(l-1/2)], \tag{75}\] \[E_{m}-\varepsilon_{n}=(E_{m}-E_{\mathrm{mid}}(E_{m}))-(l-1/2) \delta_{1},\quad E_{\mathrm{mid}}(E_{m})-\varepsilon_{n}=-(l-1/2)\delta_{1},\] (76) \[f_{n}(r)\approx x^{-(D-1)/2}\cos(x+\varphi_{D}),\quad x\equiv(K( E_{m})r-(\pi r/L)[\epsilon-(l-1/2)])\gg 1, \tag{77}\]
Figure 3: The dependence \(S^{(\mathrm{opt})}(E_{0})\) for \(D=1\) obtained numerically. Solid line (red online) shows the result of FBA.
Then for the asymptotics of the \(\epsilon\neq 0\) part of the Green function we can write
\[g_{E_{m}}^{(\epsilon\neq 0)}({\bf r})\approx\sum_{l=-\infty}^{ \infty}\phi_{n_{\rm left}(E_{m})+l}(r)\phi_{m_{\rm left}(E_{m})+l}^{*}(0)\left\{ \frac{1}{[E_{m}-E_{\rm mid}(E_{m})]-\delta_{1}(l-1/2)}-\frac{1}{-\delta_{1}(l- 1/2)}\right\}\approx\\ \approx\frac{2K^{D-1}}{\sigma_{D}L}f(0)(Kr)^{-(D-1)/2}\sum_{l=- \infty}^{\infty}\left\{\frac{1}{[E_{m}-E_{\rm mid}(E_{m})]-\delta_{1}(l-1/2)} -\frac{1}{-\delta_{1}(l-1/2)}\right\}\times\\ \times\cos\{(Kr-(\pi r/L)[\epsilon-(l-1/2)])+\varphi_{D}\}\approx\\ \approx\frac{2K^{D-1}}{\sigma_{D}L}f(0)(Kr)^{-(D-1)/2}\frac{1}{ \delta_{1}}{\rm Re}\,\left\{\sum_{l=-\infty}^{\infty}\frac{\epsilon\exp\{ikr-i( \pi r/L)[\epsilon-(l-1/2)]+i\varphi_{D}\}}{[\epsilon-(l-1/2)](l-1/2)}\right\}= \\ =\frac{2K^{D-1}}{\sigma_{D}L}f(0)(Kr)^{-(D-1)/2}\frac{1}{\delta_{1 }}{\rm Re}\,\left\{\exp(iKr+i\varphi_{D})\sum_{l=-\infty}^{\infty}\frac{ \epsilon\exp\{-i(\pi r/L)[\epsilon-(l-1/2)]\}}{[\epsilon-(l-1/2)](l-1/2)} \right\}=\\ =\frac{2f(0)}{\sigma_{D}}K^{D-1}\nu_{1}(E)(Kr)^{-(D-1)/2}{\rm Re }\,\left\{\exp(iKr+i\varphi_{D})\Phi(r/L,\epsilon)\right\} \tag{78}\]
Figure 4: (a) The shape of optimal fluctuation in \(1D\). At first coordination sphere it drops already by two orders of magnitude, while in the tail it decreases only slowly (see inset). (b) The relative accuracy \(\varkappa(P)=[S^{(P)}-S^{(\infty)}]/S^{(\infty)}\) of truncated models with \(P\) shells of nonzero potentials.
where
\[\Phi(z,\epsilon)=\sum_{l=-\infty}^{\infty}\frac{\epsilon e^{-i\pi z[\epsilon-(l-1/2 )]}}{(\epsilon-(l-1/2))(l-1/2)}\approx\begin{cases}\begin{array}{ll}-\pi\tan( \pi\epsilon)&\text{for }z\ll 1,\;\text{any }\epsilon,\\ -\pi^{2}(1-|z|)\epsilon&\text{for }\epsilon\ll 1,\;\text{any }z,\\ 1/(\epsilon\mp 1/2)&\text{for }\epsilon\to\pm 1/2,\;\text{any }z,\end{array}\end{cases} \tag{79}\]
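The limiting forms quoted in (79) can be checked directly by truncating the sum; the short script below is only a numerical sanity check (the truncation order and the test values of \(z\) and \(\epsilon\) are arbitrary).

```python
import numpy as np

def Phi(z, eps, lmax=20000):
    # direct truncated evaluation of the sum in Eq. (79)
    l = np.arange(-lmax, lmax + 1)
    m = l - 0.5
    return np.sum(eps * np.exp(-1j * np.pi * z * (eps - m)) / ((eps - m) * m))

eps = 0.17
print(Phi(1e-4, eps), -np.pi * np.tan(np.pi * eps))       # z << 1 asymptotic
print(Phi(0.3, 1e-3) / 1e-3, -np.pi ** 2 * (1 - 0.3))     # eps << 1 asymptotic
```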
The corresponding contribution to the wave function \(\psi_{\text{deloc}}(\mathbf{r})\propto g^{(\epsilon\neq 0)}(\mathbf{r})\) is delocalized. Having in mind that the preexponential coefficient in (78) is \(L\)-independent, we conclude that the normalization integral \(N_{\text{deloc}}=\int|\psi_{\text{deloc}}(r)|^{2}r^{D-1}dr\propto L\). Thus, in the case of general \(\epsilon\sim 1\), when also \(\Phi(z,\epsilon)\sim 1\), the norm \(N_{\text{deloc}}\sim L\) strongly dominates over the norm of the localized part \(N_{\text{loc}}\sim 1\).
Since we are interested in wave functions that are at least partly localized (i.e., \(N_{\text{loc}}\gtrsim N_{\text{deloc}}\)), we have to concentrate on the case \(\epsilon\ll 1\), when \(\Phi(z,\epsilon)\ll 1\). Therefore we are allowed to use the corresponding asymptotics of (79). As a result
\[g_{E}^{(\epsilon\neq 0)}(\mathbf{r})\approx-\frac{2\pi^{2}f(0)}{ \sigma_{D}}K^{D-1}\nu_{1}(E)(Kr)^{-(D-1)/2}\epsilon(1-r/L)\cos(Kr+\varphi_{D}) =C\frac{\epsilon\sqrt{K^{D+1}L}}{E}\tilde{\phi}_{\text{deloc}}(r), \tag{80}\] \[C=-\frac{\pi f(0)}{\alpha}\sqrt{\frac{2}{3\sigma_{D}}}=-\frac{2 \frac{2-D}{2}\pi}{\alpha\Gamma(D/2)}\sqrt{\frac{\pi^{\frac{2-D}{2}}\Gamma(D/2 +1)}{3D}} \tag{81}\]
where the normalized delocalized wave function \(\tilde{\phi}_{\text{deloc}}(r)\) has, for \(|\epsilon|\ll 1\), the following asymptotics at \(Kr\gg 1\):
\[\tilde{\phi}_{\text{deloc}}(r)\approx\sqrt{\frac{6K^{D-1}}{\sigma_{D}L}}(Kr)^ {-\frac{D-1}{2}}(1-r/L)\cos(Kr+\varphi_{D}), \tag{82}\]
From (80) it is clear that the wave function becomes essentially delocalized already at
\[\epsilon\gtrsim\sqrt{L_{c}(E)/L},\quad\text{where }L_{c}(E)\sim E^{2-2(D+1)/ \alpha}\gg 1. \tag{83}\]
When \(\epsilon\) further increases and, finally, reaches \(|\epsilon|\sim 1\) the shape of the localized wave function starts to change and gradually approaches the standard cosine form (see Fig. 6).
It is now necessary to find the proper expression of \(\epsilon\) as a function of the energy \(E\).
## IX Eigenenergies
Until now we did not need the exact form of the optimal fluctuation and considered it to be short-range only. It is impossible to find the spectrum in the presence of the optimal fluctuation given by the solution of the nonlinear Schrödinger equation (31) analytically. Hence, it is only now
Figure 5: The dependence \(S^{(\text{opt})}(E_{0})\) for (a) \(D=2\), (b) \(D=3\). Solid line (red online) shows the result of FBA.
that we use the SSA explicitly.
### The Dyson equation and its general solution
The Dyson equation for the Green function \(G_{E}(\mathbf{r},\mathbf{r}^{\prime})\) reads
\[G_{E}(\mathbf{r},\mathbf{r}^{\prime})=g_{E}(\mathbf{r},\mathbf{r}^{\prime})+Vg_{ E}(\mathbf{r},\mathbf{0})G_{E}(\mathbf{0},\mathbf{r}^{\prime}) \tag{84}\]
Then, for \(G_{E}(\mathbf{r},\mathbf{r}^{\prime})\) we obtain
\[G_{E}(\mathbf{r},\mathbf{r}^{\prime})=g_{E}(\mathbf{r},\mathbf{ r}^{\prime})+\frac{Vg_{E}(\mathbf{r},\mathbf{0})g_{E}(\mathbf{0},\mathbf{r}^{ \prime})}{1-g_{E}(\mathbf{0},\mathbf{0})V}=\\ =g_{E}(\mathbf{r}-\mathbf{r}^{\prime},\mathbf{0})+\frac{Vg_{E}( \mathbf{r},\mathbf{0})g_{E}(\mathbf{0},\mathbf{r}^{\prime})}{1-g_{E}(\mathbf{ 0},\mathbf{0})V}. \tag{85}\]
The eigenenergies \(E_{m}\) of the corresponding Schrodinger equation can be found as solutions of equations
\[g_{E}^{-1}(\mathbf{0},\mathbf{0})-V=0, \tag{86}\]
with respect to \(E\). As earlier, we split the Green function into two terms
\[g_{E}(0)=g_{E}^{(\varepsilon=0)}(0)+g_{E}^{(\varepsilon\neq 0) }(0), \tag{87}\] \[g_{E}^{(\varepsilon=0)}(0)=\fint_{BZ}\frac{d^{D}\mathbf{k}}{(2 \pi)^{D}}\frac{1}{E-\varepsilon(\mathbf{k})}. \tag{88}\]
From the previous section, we know that when \(E=E_{\rm mid}\) the discrete part of the Green function is zero: \(g_{E}^{(\rm deloc)}(\mathbf{0})=0\). Hence, we write
\[g_{E}^{(\varepsilon=0)}(0)=\fint_{BZ}\frac{d^{D}\mathbf{k}}{(2 \pi)^{D}}\frac{1}{E-\varepsilon(\mathbf{k})}\approx\\ \approx\sum_{n}\frac{|\psi_{n}(0)|^{2}}{E_{\rm mid}(E)-\varepsilon _{n}}. \tag{89}\]
In order to evaluate singular part, \(g_{E}^{(\varepsilon\neq 0)}(\mathbf{0})\), we, again, introduce integer \(l\)
\[n=n_{\rm left}(E)+l,\quad\varepsilon_{n}=E_{\rm mid}(E)+(l-1/2) \delta_{1} \tag{90}\] \[E-\varepsilon_{n}=(E-E_{\rm mid}(E))-(l-1/2)\delta_{1},\quad E_{ \rm mid}(E)-\varepsilon_{n}=-(l-1/2)\delta_{1},\] (91) \[\delta_{1}\equiv\delta_{1}(E). \tag{92}\]
Therefore, we find
\[g_{E}^{(\varepsilon\neq 0)}(0)\approx\frac{2K^{D-1}f(0)^{2}}{ \sigma_{D}L}\sum_{l=-\infty}^{\infty}\left\{\frac{1}{[E-E_{\rm mid}(E)]-\delta _{1}(l-1/2)}-\frac{1}{-\delta_{1}(l-1/2)}\right\}=\\ =-\frac{2K^{D-1}f(0)^{2}}{\sigma_{D}L}\frac{1}{\delta_{1}(E)}\sum _{l=-\infty}^{\infty}\frac{\epsilon}{[(1/2+l)-\epsilon][1/2+l]}=-\frac{2K^{D- 1}f(0)^{2}}{\sigma_{D}}\nu_{1}(E)\pi\tan{(\pi\epsilon)}=-\pi\nu_{D}(E)\tan{( \pi\epsilon)} \tag{93}\]
Finally, we obtain
\[g_{E}^{(\rm deloc)}(0)=-\pi\nu_{D}(E)\tan{(\pi\epsilon)}, \tag{94}\]
It is interesting that \(D\)-dimensional DOS \(\nu_{D}(E)\) enters the final result.
Now, eigenenergies can be found from the following equation
\[1/V=F_{0}(E)-\pi\nu_{D}(E)\tan(\pi\epsilon) \tag{95}\]
or
\[\epsilon=\frac{1}{\pi}\arctan{\left(\frac{F_{0}(E)-1/V}{\pi\nu_{D}(E)}\right)}, \tag{96}\]
where \(F_{0}(E)=g_{E}^{(\varepsilon=0)}(0)\). Let us denote solution of
\[F_{0}(E)-1/V=0 \tag{97}\]
as \(E=E_{0}(V)\). Hence, when the energy \(E\) is very close to \(E_{0}\):
Figure 6: The evolution of spatial shape of the delocalized part of the wave function in \(D=1\) with the change of parameter \(\epsilon\). (a): \(\epsilon\)=0.49, (b): \(\epsilon\)=0.25, (c): \(\epsilon\)=0.1, (d): \(\epsilon\)=0.01
\(|E-E_{0}(V)|\ll 1\) we find
\[\epsilon=\frac{1}{\pi}\arctan\left(\frac{E-E_{0}(V)}{\pi\nu_{D}(E)} \left.\frac{dF_{0}}{dE}\right|_{E=E_{0}(V)}\right)=\\ =\frac{1}{\pi}\arctan\left(\frac{E-E_{0}(V)}{\Delta(E)}\right), \tag{98}\]
where
\[\Delta(E_{0})=\frac{\pi\nu_{D}(E_{0})}{b(E_{0})}\sim\frac{K^{D}}{E}\ll E,\quad \text{(since $\alpha<D/2$)}, \tag{99}\]
and
\[b(E_{0})=\left.\frac{dF_{0}}{dE}\right|_{E=E_{0}(V)}\sim 1. \tag{100}\]
When \(E_{0}\ll 1\) we get
\[b(E_{0})\approx b(0)=-\int_{-\pi}^{\pi}\frac{d^{D}k}{(2\pi)^{D}}\frac{1}{ \varepsilon(k)^{2}}. \tag{101}\]
This integral safely converges at \(k\to 0\), since \(\alpha<D/2\), and \(b(E_{0})\sim 1\) (see App. A).
Finally, we are in a position to provide an explicit expression for the spectrum of eigenenergies. Every interval \((\varepsilon_{n},\varepsilon_{n+1})\) contains only one energy level \(E_{n}\)
\[E_{n}=E_{\text{mid}}(E_{n})+\epsilon\delta_{1}=E_{\text{mid}}(E_ {n})+\frac{\delta_{1}(E_{0})}{\pi}\arctan\left(\frac{E_{\text{mid}}(E_{n})-E_{ 0}}{\Delta(E_{0})}\right)\approx\\ \approx\begin{cases}\varepsilon_{n}+\frac{\delta_{1}(E_{0})\Delta (E_{0})}{\pi(E_{0}-E_{\text{mid}}(E_{n}))},&\quad E_{\text{mid}}(E_{n})<E_{0},\quad|E_{\text{mid}}(E_{n})-E_{0}|\gg\Delta(E_{0})\\ E_{\text{mid}}(E_{n})+\delta_{1}(E_{0})\left(\frac{E_{\text{mid}}(E_{n})-E_{0}}{ \pi\Delta(E_{0})}\right),&\quad|E_{\text{mid}}(E_{n})-E_{0}|\ll\Delta(E_{0}), \\ \varepsilon_{n+1}-\frac{\delta_{1}(E_{0})\Delta(E_{0})}{\pi(E_{\text{mid}}(E_{ n})-E_{0})},&\quad E_{\text{mid}}(E_{n})>E_{0},\quad|E_{\text{mid}}(E_{n})-E_{0}| \gg\Delta(E_{0})\end{cases} \tag{102}\]
When \(|E_{n}-E_{0}|\gg\Delta(E_{0})\) the energy level \(E_{n}\) almost coincides with \(\varepsilon_{n}\) or \(\varepsilon_{n+1}\) and the corresponding wave function is an almost unperturbed extended wave. When \(|E_{n}-E_{0}|\ll\Delta(E_{0})\) the energy is very close to the middle of the interval, \(E_{n}\approx E_{\text{mid}}(E_{n})\), which corresponds to the quasi-localized state.
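As an illustration, the spectrum (102) can be tabulated directly from the arctangent form of (98): every interval \((\varepsilon_{n},\varepsilon_{n+1})\) contributes exactly one level. In the sketch below the free spectrum, \(E_{0}\) and \(\Delta\) are placeholder toy values, not parameters derived from the model.

```python
import numpy as np

def perturbed_levels(eps_n, E0, Delta):
    """Levels E_n of Eq. (102): one level per interval (eps_n, eps_{n+1})."""
    eps_n = np.sort(np.asarray(eps_n))
    E_mid = 0.5 * (eps_n[:-1] + eps_n[1:])
    delta1 = np.diff(eps_n)
    return E_mid + (delta1 / np.pi) * np.arctan((E_mid - E0) / Delta)

eps_n = np.linspace(0.0, 1.0, 201)            # locally equidistant free levels
E = perturbed_levels(eps_n, E0=0.5025, Delta=5e-4)
# far from E0 the levels stick to eps_n or eps_{n+1};
# the level whose interval contains E0 sits near the middle (quasi-localized)
print(E[[0, 99, 100, 199]])
```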
### Full expression for the wave function
Let us now get back to the wave function. Since we are interested in the quasi-localized states with energies close to \(E_{\text{mid}}\), we can expand relation (98) and plug it into the expression for the wave function. Hence, we obtain
\[\psi_{E}(r)=\left[1+u_{1}^{2}L+u^{2}L\right]^{-1/2}\times\\ \times\left(\widetilde{\Psi}_{E_{0}}(r)+u_{1}\sqrt{L}\psi_{n(E)}^ {\perp}(r)+u\sqrt{L}\tilde{\phi}_{\text{deloc}}(r)\right), \tag{103}\]
\[u_{1}=\sqrt{\frac{\sigma_{D}}{2L_{c}}},\quad u=C^{\prime}E^{\frac {1-D}{2n}}(E-E_{0})\\ C^{\prime}=-\frac{b(E_{0})2^{\frac{D+2}{2}}\Gamma^{\frac{3}{2}}\left( \frac{D}{2}+1\right)}{3^{\frac{1}{2}}D^{\frac{3}{2}}\pi^{\frac{D+2}{2}}} \tag{104}\]
where \(\widetilde{\Psi}_{E_{0}}(r)\sim r^{\alpha-D}\) is the localized part of the quasi-localized wave function and \(\psi_{n(E)}^{\perp}(r)\) is its delocalized tail at \(r>r_{1}\) that exists even for \(E=E_{0}\):
\[\psi_{n(E)}^{\perp}(r)=r_{1}^{\alpha-D}\sqrt{\frac{2L_{c}}{L\sigma_{D}}}\frac{ \sin\left(Kr+\varphi_{D}\right)}{(Kr)^{\frac{D-1}{2}}}. \tag{105}\]
Each of the functions \(\widetilde{\Psi},\psi^{\perp},\tilde{\phi}_{\text{deloc}}\) is normalized to unity. We have also used the fact that the three functions are orthogonal to each other (the overlaps tend to zero as \(1/L\)).
The first two contributions to the overall normalization coefficient \(\left[1+u_{1}^{2}L+u^{2}L\right]^{-1/2}\) come from the quasi-localized part \(\Psi_{E_{0}}(r)\), while the third contribution arises due to deviation \(E-E_{0}\).
Hence, when \(|u|\sqrt{L}\ll 1\) and \(L\ll L_{c}(E)\) states (103) are effectively localized. There is at least one such state with \(E=E_{0}\) and \(u=0\). How many more of them are there? Effectively localized states should satisfy the following condition
\[|E-E_{0}|\ll\frac{E^{\frac{D-1}{2}}}{\sqrt{L}}\equiv\widetilde{\Delta}(E). \tag{106}\]
Therefore, the number of additional effectively localized states is
\[M_{\text{loc}}\equiv\frac{\widetilde{\Delta}(E)}{\delta_{1}(E)}\sim\frac{E^{ \frac{D-1}{2\alpha}}LE^{\frac{1}{\alpha}}}{\sqrt{L}E}=\sqrt{\frac{L}{L_{c}}}\ll 1. \tag{107}\]
Hence, there is only one effectively localized state in the vicinity of the optimal fluctuation.
## X Inverse participation ratio
In this Section we will separately examine cases \(L\ll L_{c}\) (69), and the opposite one \(L\gg L_{c}\).
### IPR in the near tail: \(L\ll L_{c}\)
In this case \(u_{1}\sqrt{L}\ll 1\) and one can neglect the second term in (103). Then for arbitrary \(q\) and \(D\) IPR obtains the following form
\[P_{q}=\sum_{j}|\psi_{E}(j)|^{2q}\approx\frac{1+u^{2q}L^{D(1-q)+q}}{(1+u^{2}L)^ {q}}. \tag{109}\]
If one fixes \(q\), one immediately finds critical dimension
\[D_{\rm cr}=\frac{q}{q-1}. \tag{110}\]
If \(D<D_{\rm cr}\) it is possible to introduce two distinct characteristic lengths
\[\xi_{1}(E)\sim u^{-2}, \xi_{2}(E,q,D)\sim u^{-\frac{2q}{D(1-q)+q}}, \tag{111}\] \[1\ll\xi_{1}(E)\ll\xi_{2}(E,q,D), \tag{112}\]
Hence, IPR is given by the following relation
\[P_{q}(E,D)\approx\frac{1+u^{2q}L^{D(1-q)+q}}{(1+u^{2}L)^{q}}\sim \\ \sim\begin{cases}&1,\qquad L\ll\xi_{1},\\ &\\ &\left(\frac{\xi_{1}}{L}\right)^{q},\qquad\xi_{1}\ll L\ll\xi_{2},\\ &\\ L^{-D(q-1)},\qquad\xi_{2}\ll L\ll L_{c}.\end{cases} \tag{113}\]
The case \(D>D_{\rm cr}\) is much more surprising. Here \(\xi_{2}(E)\) does not exist, and the IPR (113) reads
\[P_{q}(E,D)\sim\begin{cases}&1,\qquad L\ll\xi_{1},\\ &\\ \left(\frac{\xi_{1}}{L}\right)^{q},\qquad\xi_{1}\ll L\ll L_{c}.\end{cases} \tag{114}\]
For example, when \(D=3\) and \(q=2\) the IPR large-\(L\) behavior is \(P_{2}\propto L^{-2}\) instead of the standard three-dimensional law \(P_{2}\propto L^{-3}\) even for energies far away from \(E_{0}\), i.e. \(\xi_{1}\ll L\).
If we define fractal dimension \(D_{q}\) according to
\[P_{q}\sim L^{-D_{q}(q-1)}. \tag{115}\]
then, in our case, we obtain
\[D_{q}=\frac{q}{q-1}\quad\text{ when }\quad q>\frac{D}{D-1} \tag{116}\]
When \(D>D_{\rm cr}\) it is easy to see from (116) that the fractal dimension \(D_{q}<D\).
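The scaling regimes (113)-(116) can be read off numerically by tracking the logarithmic slope of \(P_{q}(L)\) computed from (109); in the sketch below the value of \(u\) is arbitrary and only serves to illustrate the crossover.

```python
import numpy as np

def P_q(L, u, q, D):
    """IPR of Eq. (109), valid for L << L_c (the u_1 term neglected)."""
    return (1.0 + u ** (2 * q) * L ** (D * (1 - q) + q)) / (1.0 + u ** 2 * L) ** q

q, D, u = 2.0, 3.0, 1e-3                      # D > D_cr = q/(q-1) = 2
L = np.logspace(0, 12, 400)
slope = np.gradient(np.log(P_q(L, u, q, D)), np.log(L))
# for xi_1 << L the slope tends to -q = -2, i.e. D_q = q/(q-1) = 2,
# instead of the standard -D(q-1) = -3
print(slope[-1])
```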
### IPR in the far tail: \(L\gg L_{c}\)
Here \(u_{1}^{2}L\gg 1\) and, therefore
\[P_{q}\approx\frac{1+u_{1}^{2q}L^{D(1-q)+q}+u^{2q}L^{D(1-q)+q}}{(u_{1}^{2}L+u^ {2}L)^{q}}. \tag{117}\]
In high dimensions, \(D>D_{\rm cr}\), the IPR is again fractal with the same fractal dimension
\[P_{q}(E,D)\sim\begin{cases}\left(\frac{1}{u_{1}^{2}L}\right)^{q},&\quad|E-E_ {0}|\ll E^{\frac{D-\alpha}{\alpha}},\\ \left(\frac{1}{u^{2}L}\right)^{q},&\quad|E-E_{0}|\gg E^{\frac{D-\alpha}{ \alpha}}.\end{cases} \tag{118}\]
Thus, we see that the localized part of the wave function can dominate IPR even when the state is not effectively localized (the norm is dominated by the delocalized tail).
## XI Conclusion
We have demonstrated that finite disordered systems with long-range hopping indeed exhibit unusual properties. In addition to conventional localized states with negative energies that contribute to Lifshitz tails, the fluctuations of disorder in such systems support the existence of quasi-localized states with positive energies. The structure of such states is as follows: there is a strong short-range core, localized in the vicinity of a strong local fluctuation of disorder, and a weak oscillating tail that spans the entire system. Under condition (69) the contribution from the localized part of the wave function dominates the norm. However, as the system size increases, the contribution of the tail increases as well, and sooner or later it overcomes the contribution of the core. This happens because the long-range tails, however weak, decay too slowly and cannot be normalized in an infinite system. Thus, the quasi-localized states can only exist in finite systems.
Note that the quasi-localized states can be highly excited states: there may be a lot of extended states with lower energies.
Moreover, even when condition (69) is not satisfied, and the norm of the wave function is dominated by the
Figure 7: Three different energy regimes exist in 1D systems: the lowest energies allow quasi-localized states to exist.
tail, the behavior of the IPR \(P_{q}\) may still be determined by the localized core of the wave function. Then \(P_{q}\) exhibits unusual behavior in a wide range of energies away from the energy of a quasi-localized state: for certain values of \(q\) the character of \(P_{q}\) is "fractal".
The states we have found are robust to typical fluctuations of the random potential. Keeping that in mind, in 1D, i.e. in the simplest possible case, we can distinguish three different energy domains as follows (see Fig. 7):
* \(E\ll E_{c}\): here the quasi-localized states are formed on the continuum background of extended states.
* \(E_{c}\ll E\ll E_{c}^{\prime}\): here remnants of the quasi-localized states become extended but exhibit unusual "fractal" properties.
* \(E\gg E_{c}^{\prime}\): here all the states are localized owing to standard 1D localization by typical fluctuations, i.e. \(l_{E}\ll L\).
## Acknowledgements
We are indebted to M.V.Feigel'man for valuable discussions, to I.M.Khaymovich for pointing out multiple helpful references, and to L.Levitov for useful comments on the manuscript. This work was supported by the Basic Research Program of The Higher School of Economics.
|
2309.05894 | Fast Constraint Screening for Multi-Interval Unit Commitment | Power systems Unit Commitment (UC) problem determines the generator
commitment schedule and dispatch decisions for power networks based on
forecasted electricity demand. However, with the increasing penetration of
renewables and stochastic demand behaviors, it becomes challenging to solve the
large-scale, multi-interval UC problem in an efficient manner. The main
objective of this paper is to propose a fast and reliable scheme to eliminate a
set of redundant or inactive physical constraints in the high-dimensional,
multi-interval, mixed-integer UC problem, while the reduced problem is
equivalent to the original full problem in terms of commitment decisions. Our
key insights lie on pre-screening the constraints based on the load
distribution and considering the physical feasibility regions of multi-interval
UC problem. For the multistep UC formulation, we overcome screening
conservativeness by utilizing the multi-step ramping relationships, and can
reliably screen out more constraints compared to current practice. Extensive
simulations on both specific load samples and load regions validate the
proposed technique can screen out more than 80% constraints while preserving
the feasibility of multi-interval UC problem. | Xuan He, Jiayu Tian, Yufan Zhang, Honglin Wen, Yize Chen | 2023-09-12T00:38:35Z | http://arxiv.org/abs/2309.05894v1 | # Fast Constraint Screening for Multi-Interval Unit Commitment
###### Abstract
The power system Unit Commitment (UC) problem determines the generator commitment schedule and dispatch decisions for power networks based on forecasted electricity demand. However, with the increasing penetration of renewables and stochastic demand behaviors, it becomes challenging to solve the large-scale, multi-interval UC problem in an efficient manner. The main objective of this paper is to propose a fast and reliable scheme to eliminate a set of redundant or inactive physical constraints in the high-dimensional, multi-interval, mixed-integer UC problem, such that the reduced problem is equivalent to the original full problem in terms of commitment decisions. Our key insights lie in pre-screening the constraints based on the load distribution and considering the physical feasibility regions of the multi-interval UC problem. For the multi-step UC formulation, we overcome screening conservativeness by utilizing the multi-step ramping relationships, and can reliably screen out more constraints compared to current practice. Extensive simulations on both specific load samples and load regions validate that the proposed technique can screen out more than 80% of constraints while preserving the feasibility of the multi-interval UC problem.
## I Introduction
Obtaining accurate solutions for unit commitment in an efficient manner is crucial for ensuring reliable generation dispatch [1]. For transmission grid operations, Unit commitment (UC) problems are typically formulated as mixed integer programming (MIP) problems involving both discrete variables for generator statuses and continuous variables for dispatch levels. In particular, for the multi-interval UC problems, temporal constraints, such as ramping constraints, are incorporated to regulate the capacity at which generators can adjust their output levels in response to dynamic changes of electricity demand and system conditions. However, due to the NP-hard nature of such nonconvex MIP problems, the solution process can be exceedingly time-consuming [2, 3]. Such computation complexity can be further increased by the existence of numerous network constraints such as line flow limits [4, 5].
The need to accelerate the unit commitment (UC) solution process has prompted research into developing a surrogate UC model with fewer line flow limits while maintaining solution equivalence between the resulting UC problem and the original UC problem. Such a technique is backed by the observation that only a subset of security constraints is binding in real-world systems. The process of identifying the appropriate subset of line flow limits is termed _constraint screening_[6, 7]. [6] proposes to eliminate the constraints whose line flow cannot reach the boundary given load inputs and shows that the resulting feasible region is the same as that of the original UC problem.
However, for the standard screening model [6, 8], screening strategies can be conservative, and many redundant or inactive constraints are still kept rather than screened out. [9] and [10] propose cost-driven screening models that utilize the operating cost constraint to limit the line flow value range further. Yet most of the literature focuses on the single-step formulation of UC constraint screening, while ignoring the impact of temporal constraints, such as generation ramping constraints, on the feasible region of screening problems. On the other hand, relaxed binary variables of the generator commitment schedule in the standard screening model for a load sample also enlarge the line flow value range [11]. The strong representation capability of machine learning (ML) models [12, 13] can be utilized to predict the values of the binary variables, i.e., the decisions of generator states, which shows potential for further integrating the predictions into the screening model to handle the loose line flow range issue.
Another concern is the computational cost of the standard constraint screening setup, which comes from solving the optimization-based screening problem for each network constraint and each electricity load instance. For the former, ML models [14, 15, 16, 17, 18] directly use predictions to classify whether a network constraint is redundant, but there is no guarantee that the reduced UC is equivalent to the original one. For the latter, empirical evidence shows that samples belonging to some typical load regions can share the same set of redundant constraints in their corresponding UC problems [11]. It is then sufficient to implement screening on the load region rather than working on each individual load sample [19, 20].
To address such challenges, we develop one of the first multi-interval UC constraint screening models to reliably narrow down the search space of line flow constraints, leading to a more efficient screening strategy. Our key insights lie in integrating the temporal constraints, i.e., the ramping constraints, into the standard screening model, and such an approach can be applied to either a given load region or an individual load sample. The potential of utilizing ML predictions of generator states to improve the screening efficiency is also explored. Specifically, we formulate a tractable linear programming problem for multi-interval UC, and prove that more inactive constraints can be eliminated without changing the feasible region of the original UC problem. Moreover, our method can be flexibly integrated to screen constraints for one specific load vector, or to achieve
solver warm-start (offline constraint screening beforehand to get the reduced problem, which can later be used for the real-time operation problem) for a given load region. We term these two cases _sample-aware_[6] and _sample-agnostic_[11] constraint screening, respectively. Specifically, for the _sample-aware_ case, we further propose to make use of ML predictions of the commitment schedule to better limit the search space of the constraint screening problem. In the sample-aware case on the IEEE 39-bus system, our proposed method with ground-truth generator states achieves a 95.3% screening rate, a boost compared to standard constraint screening [6] (82%). With partial predictions, the feasible rate of our method reaches 84% while the solution gap is only 0.32%. In the sample-agnostic case on the 118-bus system, our procedure can also find the most compact form of the UC problem after the more efficient screening process.
## II Multi-Interval UC Problem Formulation
In this paper, we assume the system operator needs to decide both the ON/OFF statuses (the commitment schedule) and the dispatch levels of all generators. Herein we consider the day-ahead UC problem with ramping constraints taken into consideration. For a \(T\)-timestep UC problem with \(n\) generators involved, the problem is formulated as
\[\min_{\mathbf{u},\mathbf{x},\mathbf{f}} \sum_{t=1}^{T}\sum_{i=1}^{n}c_{i}x_{i}(t)\] (1a) s.t. \[u_{i}\underline{x}_{i}\leq x_{i}(t)\leq u_{i}\bar{x}_{i},\quad \forall i,t \tag{1b}\] \[-\overline{\mathbf{f}}\leq\mathbf{K}\mathbf{f}(\mathbf{t})\leq \overline{\mathbf{f}},\quad\forall t\] (1c) \[\mathbf{x}(\mathbf{t})+\mathbf{A}\mathbf{f}(\mathbf{t})=\boldsymbol {\ell}(t),\;\forall t\] (1d) \[u_{i}(t)\in\{0,1\},\quad\forall i,t\] (1e) \[x_{i}(t)-x_{i}(t-1)\leq\mathsf{R}_{i}^{up}u_{i}(t-1)\] \[+\mathsf{R}_{i}^{su}(u_{i}(t)-u_{i}(t-1))+\bar{x}_{i}(1-u_{i}(t)) \;\forall i,t\] (1f) \[x_{i}(t-1)-x_{i}(t)\leq\mathsf{R}_{i}^{dn}u_{i}(t)\] \[+\mathsf{R}_{i}^{sd}(u_{i}(t-1)-u_{i}(t))+\bar{x}_{i}(1-u_{i}(t-1 ))\;\forall i,t. \tag{1g}\]
In the UC problem, we optimize over the generator statuses \(\mathbf{u}\), the generator dispatch \(\mathbf{x}\) and the line power flows \(\mathbf{f}\) to find the least-cost solution, with the cost defined in the objective function (1a). \(c_{i}\) denotes the cost coefficient. Constraints (1b), (1c) and (1d) denote the generation bounds, the flow bounds and the nodal power balance, respectively. Note that the power flows are modeled as a DC approximation, while the phase angles are absorbed into the fundamental flows \(\mathbf{f}\in\mathbb{R}^{n-1}\)[21, 22]; \(\mathbf{K}\) and \(\mathbf{A}\) map such fundamental flows to the flow constraints and the nodal power balance, respectively. (1e) enforces the binary constraint on generator statuses, where \(u_{i}(t)=1\) indicates the generator is on. (1f) and (1g) are the ramping constraints, which enforce limitations on the speed at which generators are able to modify their output levels in response to fluctuations in demand or system conditions. \(R_{i}^{up},R_{i}^{su},R_{i}^{dn},R_{i}^{sd}\) are the ramp-up, start-up, ramp-down and shut-down limits, respectively.
For a power system with numerous generators and network branches, (1) can be a large-scale MILP problem, which is hard to solve efficiently enough to meet the computational requirements of producing generation dispatch decisions. Meanwhile, there is a large number of inactive constraints, especially inactive network constraints, which gives the potential to simplify (1) by screening out such inactive constraints. It is then possible to solve a reduced UC problem with fewer engineering constraints.
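To make the structure of (1) concrete, one convenient way to prototype it is through an off-the-shelf modeling layer. The sketch below builds a toy instance in CVXPY; the network data, limits and costs are placeholder values rather than any test system used in this paper, the start-up/shut-down ramps are simply set to the generation upper bounds, and a mixed-integer-capable solver is needed to actually solve the problem.

```python
import cvxpy as cp
import numpy as np

n, m, T = 3, 2, 4                       # toy system: 3 generators, 2 lines, 4 steps
c = np.array([5.0, 8.0, 12.0])
x_min, x_max = np.array([10, 10, 5.0]), np.array([60, 40, 30.0])
R_up = R_dn = np.array([15, 10, 8.0]); R_su = R_sd = x_max
f_bar = np.array([35.0, 35.0])
K = np.eye(m)                            # placeholder flow-limit map
A = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])  # placeholder flow-to-bus map
load = np.array([[20, 30, 40, 35], [25, 25, 30, 30], [15, 20, 25, 20.0]])

u = cp.Variable((n, T), boolean=True)    # commitment, Eq. (1e)
x = cp.Variable((n, T)); f = cp.Variable((m, T))

cons = []
for t in range(T):
    cons += [cp.multiply(u[:, t], x_min) <= x[:, t],                  # (1b)
             x[:, t] <= cp.multiply(u[:, t], x_max),
             -f_bar <= K @ f[:, t], K @ f[:, t] <= f_bar,             # (1c)
             x[:, t] + A @ f[:, t] == load[:, t]]                     # (1d)
    if t > 0:
        cons += [x[:, t] - x[:, t - 1] <= cp.multiply(R_up, u[:, t - 1])
                 + cp.multiply(R_su, u[:, t] - u[:, t - 1])
                 + cp.multiply(x_max, 1 - u[:, t]),                   # (1f)
                 x[:, t - 1] - x[:, t] <= cp.multiply(R_dn, u[:, t])
                 + cp.multiply(R_sd, u[:, t - 1] - u[:, t])
                 + cp.multiply(x_max, 1 - u[:, t - 1])]               # (1g)

prob = cp.Problem(cp.Minimize(cp.sum(c @ x)), cons)   # objective (1a)
prob.solve()                             # requires a MILP solver, e.g. SCIP/CBC/GUROBI
print(prob.status, u.value)
```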
## III Multi-Interval Constraint Screening Model
### _Modeling of Inactive Constraints_
In this work, we follow the definition of an inactive constraint first formulated by [6]. A constraint can be treated as inactive if it has no influence on the feasible region of the multi-interval UC problem, and it can thus be eliminated as illustrated in Fig. 1. For the constraint screening problem, it is of interest to correctly screen out as many inactive constraints as possible. Ultimately, we want to tighten the feasible region and make it close to that of the original UC problem, so that constraints like those marked in red in Fig. 1 can be screened out.
To formulate the feasible region mathematically, we use \(P\) to denote the feasible region defined by a constraint set \(C^{P}\). \(C^{P}\) is a subset of the original UC problem's constraint set. Then the actual feasible region of the original problem defined by (1b)-(1g) can be naturally a subregion of \(P\).
**Definition 1**: _A network constraint \(\mathbf{K}_{j}\mathbf{f}(\mathbf{t})\leq\overline{\mathbf{f}}\) or \(\mathbf{K}_{j}\mathbf{f}(\mathbf{t})\geq-\overline{\mathbf{f}}\) is defined inactive to \(P\) if and only if,_
\[P\supseteq P^{-\{j\}}; \tag{2}\]
_where \(P^{-\{j\}}\) is the region defined by the constraint set \(C^{P}\) eliminating \(\mathbf{K}_{j}\mathbf{f}(\mathbf{t})\leq\overline{\mathbf{f}}_{j}\) or \(\mathbf{K}_{j}\mathbf{f}(\mathbf{t})\geq-\overline{\mathbf{f}}_{j}\), and we denote this set as \(C^{P/j}\)._
### _Single-Step Constraint Screening_
To identify each inactive flow constraint, the standard optimization-based screening model [6] evaluates whether line \(j\) can be binding given the load input. In (3) we describe such a formulation, which can be used to conduct
Fig. 1: Illustration of the inactive constraints corresponding to the different feasible region. By using our technique, it is possible to screen more constraints (line in red) compared to standard single timestep screening method.
the _sample-aware constraint screening_ for the \(j\)-th line at time step \(k\).
\[\begin{split}\max_{\mathbf{u}_{k},\mathbf{x}_{k},\mathbf{f}_{k}}\quad&\mathbf{K}_{j}\mathbf{f}(k)\\ \text{s.t.}\quad& u_{i}\underline{x}_{i}\leq x_{i}(k)\leq u_{i}\bar{x}_{i},\quad\forall i,\\ &-\overline{\mathbf{f}}_{\mathcal{F}/j}\leq\mathbf{K}_{\mathcal{F}/j}\mathbf{f}(k)\leq\overline{\mathbf{f}}_{\mathcal{F}/j},\\ &\mathbf{x}(k)+\mathbf{A}\mathbf{f}(k)=\boldsymbol{\ell}(k),\\ & u_{i}\in\{0,1\},\quad\forall i,\end{split}\tag{3}\]
### _Sample-Aware Screening with Generator States_
Here _sample-aware_ means we implement constraint screening for a known load instance. As the binary variables make the basic screening model a MILP problem, we first consider relaxing the value of binary variables to \(u\in[0,1]\). Then, to further lower the model complexity, we replace the nodal balance constraints (5c) with the load balance constraints (6d) and (6e), and reduce the network constraints (5b) to (6c), as described in the following formulation (6).
**Corollary 1**: _Consider the following multi-step screening model with the relaxation of the binary variables, part of nodal balance constraints and network constraints:_
\[S^{*}_{aware_{j}}(k)=\max_{\mathbf{u}_{k},\mathbf{x}_{k},\mathbf{f}_{k}}\mathbf{K}_{j}\mathbf{f}(k)\tag{6a}\]
s.t.
\[\mathbf{\underline{x}}\leq\mathbf{x}(t)\leq\mathbf{\bar{x}},\quad t\leq k,\tag{6b}\]
\[-\overline{\mathbf{f}}_{\mathcal{F}/j}\leq\mathbf{K}_{\mathcal{F}/j}\mathbf{f}(k)\leq\overline{\mathbf{f}}_{\mathcal{F}/j},\quad t=k,\tag{6c}\]
\[\mathbf{x}(k)+\mathbf{A}\mathbf{f}(t)=\boldsymbol{\ell}(t),\quad t=k,\tag{6d}\]
\[\sum x(t)=\sum\ell(t),\quad t\leq k,\tag{6e}\]
\[\text{ramping constraints (1f) and (1g) with }u_{i}(t)\in[0,1],\quad\forall i,\ t\leq k.\tag{6f}\]
For the sample-agnostic case, the load is only assumed to lie in a given region \(\mathbf{\ell}\in\mathcal{L}\), while in the screening model the load sample is treated as an additional decision variable \(\mathbf{\ell}^{r}\).
**Corollary 4**: _Consider the following multi-step screening model for a specified load region:_
\[S_{R_{j}}^{*}(k)=\max_{\mathbf{u}_{k},\mathbf{x}_{k},\mathbf{f}_{k},\boldsymbol{\ell}^{r}}\mathbf{K}_{j}\mathbf{f}(k)\tag{9a}\]
s.t.
\[\mathbf{\underline{x}}\leq\mathbf{x}(t)\leq\mathbf{\bar{x}},\quad t\leq k,\tag{9b}\]
\[-\overline{\mathbf{f}}_{\mathcal{F}/j}\leq\mathbf{K}_{\mathcal{F}/j}\mathbf{f}(t)\leq\overline{\mathbf{f}}_{\mathcal{F}/j},\quad t=k,\tag{9c}\]
\[\mathbf{x}(t)+\mathbf{A}\mathbf{f}(t)=\boldsymbol{\ell}^{r}(t),\quad t=k,\tag{9d}\]
\[\sum x(t)=\sum\ell^{r}(t),\quad t\leq k,\tag{9e}\]
\[\boldsymbol{\ell}^{r}\in\mathcal{L},\tag{9f}\]
\[\text{ramping constraints (1f) and (1g) with }u_{i}(t)\in[0,1],\quad\forall i,\ t\leq k.\tag{9g}\]
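A line-by-line screening loop in the spirit of (6) and (9) can then be prototyped as below. This is only a schematic sketch: it reuses toy placeholder data in the style of the earlier UC sketch, treats the commitment variables as relaxed, folds the start-up/shut-down ramps into the generation upper bounds, and checks only the upper flow limit (the lower limit would be screened analogously).

```python
import cvxpy as cp
import numpy as np

n, m, k = 3, 2, 2                          # screen each line at time step k
x_min, x_max = np.array([10, 10, 5.0]), np.array([60, 40, 30.0])
R_up = R_dn = np.array([15, 10, 8.0])
f_bar = np.array([35.0, 35.0]); K = np.eye(m)
A = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])
l_lo = np.full((n, k + 1), 15.0)           # box load region L (sample-agnostic case)
l_hi = np.full((n, k + 1), 35.0)
XMIN = np.tile(x_min[:, None], (1, k + 1)); XMAX = np.tile(x_max[:, None], (1, k + 1))

def screen(j):
    x = cp.Variable((n, k + 1)); u = cp.Variable((n, k + 1))
    f = cp.Variable(m); l = cp.Variable((n, k + 1))
    cons = [x >= XMIN, x <= XMAX,                                     # cf. (9b)
            u >= 0, u <= 1, l >= l_lo, l <= l_hi,                     # relaxed u, l in L
            x[:, k] + A @ f == l[:, k],                               # cf. (9d)
            cp.sum(x, axis=0) == cp.sum(l, axis=0)]                   # cf. (9e)
    for i in range(m):                                                # cf. (9c)
        if i != j:
            cons += [K[i] @ f <= f_bar[i], K[i] @ f >= -f_bar[i]]
    for t in range(1, k + 1):                          # cf. (1f)-(1g), R_su=R_sd=x_max
        cons += [x[:, t] - x[:, t - 1] <= cp.multiply(R_up, u[:, t - 1])
                 + cp.multiply(x_max, 1 - u[:, t - 1]),
                 x[:, t - 1] - x[:, t] <= cp.multiply(R_dn, u[:, t])
                 + cp.multiply(x_max, 1 - u[:, t])]
    return cp.Problem(cp.Maximize(K[j] @ f), cons).solve()

for j in range(m):
    s = screen(j)
    print(f"line {j}: max flow {s:.1f} ->", "redundant" if s < f_bar[j] else "keep")
```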
### _Simulation Results_
_Sample-aware screening tasks_: Three scenarios are considered to verify the effectiveness of the proposed method: i) without generator status predictions, ii) with partial predictions, and iii) with ground truth. When there are no generator status predictions, our multi-step method eliminates 81.16% of line limits, as shown in Table II. With partial predictions, the number of remaining constraints of the multi-step method is only 69% of that of the single-step method, and solving the UC problem is 1.103 times faster than with the single-step method according to Table IV. It can be seen that the multi-step method always eliminates more inactive constraints, and thus takes less time to solve the reduced UC problem. Besides, the infeasible rate of the original problem is only 16%, and the gap in objective value between the reduced and the original UC problem is only 0.32%. Note that the bias between the binary-variable solutions obtained by the screening model and by the UC problem does not cause infeasibility due to misidentified line limits, which verifies Corollary 3. According to Fig. 4, the case with the ground truth eliminates the most inactive constraints, while with partial predictions the number of remaining constraints can still be substantially reduced compared to the case without predictions.
_Sample-agnostic screening tasks_: In this setting, cases on both the 39-bus and 118-bus systems with 20%, 50%, and 80% load variations are considered to verify the scalability of the proposed method. As shown in Fig. 5, the number of remaining constraints decreases by 60 and 100 for the 39-bus and 118-bus systems over the three load ranges, respectively. It can be seen that with a larger load region, the number of remaining constraints increases. This may be due to the increasing number of constraint patterns with a wider load variation range, i.e., the percentage of inactive constraints decreases when the load variation is widened. The results above validate that the proposed multi-step screening models can reliably boost screening performance in both sample-aware and sample-agnostic settings. This demonstrates both the possibility and the necessity of including the practical multi-step ramping information in the screening problem.
the searching space from the explicit constraints of the UC problem; in future work we would like to seek theoretical understanding of the implicit decision spaces impacting the screening and the final UC solution results.
|
2309.14741 | Rethinking Session Variability: Leveraging Session Embeddings for
Session Robustness in Speaker Verification | In the field of speaker verification, session or channel variability poses a
significant challenge. While many contemporary methods aim to disentangle
session information from speaker embeddings, we introduce a novel approach
using an additional embedding to represent the session information. This is
achieved by training an auxiliary network appended to the speaker embedding
extractor which remains fixed in this training process. This results in two
similarity scores: one for the speakers information and one for the session
information. The latter score acts as a compensator for the former that might
be skewed due to session variations. Our extensive experiments demonstrate that
session information can be effectively compensated without retraining of the
embedding extractor. | Hee-Soo Heo, KiHyun Nam, Bong-Jin Lee, Youngki Kwon, Minjae Lee, You Jin Kim, Joon Son Chung | 2023-09-26T08:09:30Z | http://arxiv.org/abs/2309.14741v1 | # Rethinking Session Variability: Leveraging
###### Abstract
In the field of speaker verification, session or channel variability poses a significant challenge. While many contemporary methods aim to disentangle session information from speaker embeddings, we introduce a novel approach using an additional embedding to represent the session information. This is achieved by training an auxiliary network appended to the speaker embedding extractor which remains fixed in this training process. This results in two similarity scores: one for the speakers information and one for the session information. The latter score acts as a compensator for the former that might be skewed due to session variations. Our extensive experiments demonstrate that session information can be effectively compensated without retraining of the embedding extractor.
Hee-Soo Heo\({}^{1}\), Kikhyun Nam\({}^{2}\), Bong-Jin Lee\({}^{1}\), Youngki Kwon\({}^{1}\),
Minjae Lee\({}^{1}\), You Jin Kim\({}^{1}\), Joon Son Chung\({}^{2}\)\({}^{1}\)NAVER Cloud Corporation, South Korea
\({}^{2}\)Korea Advanced Institute of Science and Technology, South Korea
Speaker verification, speaker embedding, session information
## 1 Introduction
In the evolving domain of speech processing, speaker verification plays a crucial role, having various real-world applications ranging from voice-based security systems to personalised speech assistants. Central to robust speaker verification is the extraction of speaker embeddings, which encapsulate the unique characteristics of an individual's voice [1, 2, 3]. However, these embeddings are susceptible to extraneous information, largely stemming from the recording environment. Variabilities in recording devices, ambient noise, room acoustics, and other session-related factors can significantly affect the accuracy of these embeddings, creating misleading similarities even among distinct speakers in similar recording situations [4, 5].
Historically, when the i-vector approach was prevalent in the speaker embedding space, techniques such as linear discriminant analysis (LDA) and within-class covariance normalization (WCCN) were employed as countermeasures to diminish these unexpected session similarities [1, 6]. With the advance of deep learning and its application to this domain, efforts have shifted towards disentangling speaker information from session information directly within the embedding [7, 5, 8]. Various strategies have been studied in this direction: while some leverage adversarial approaches, others design novel loss functions to achieve the same goal [9]. However, a clear problem with these methods is that, while trying to separate session-related information from speaker-specific details, important characteristics of the speaker might be lost. In simpler terms, in the process of removing unwanted session information, one might also unintentionally remove features that help identify the speaker.
In light of these challenges, this paper introduces an alternative approach. Instead of disentangling session-related information from the embedding, we present a framework to compensate for it at the score level. Our methodology capitalises on the use of an auxiliary network, seamlessly appended to the original speaker embedding extractor. The auxiliary network is designed to represent session information found within speaker embeddings. A key facet of our framework ensures that the primary speaker embedding extractor remains fixed during this process. Consequently, our system yields a twofold output; a similarity score reflecting speaker characteristics and another gauging session attributes. The latter, acting as a compensator, has the potential to rectify any discrepancies in the speaker score induced by analogous or differing session conditions. Our empirical evaluations, spanning various model architectures and evaluation configurations, underscore the feasibility of session compensation without the need for retraining the original embedding extractor.
The paper is organised as follows. Section 2 introduces the proposed framework. Experiments and result analysis are presented in Section 3, followed by conclusion in Section 4.
## 2 Framework for Session Variability Compensation
In this section, we present a novel framework specifically designed to address and compensate for session variability in speaker verification tasks.
### Speaker Embedding Extraction
For this study, we leverage pre-trained embedding extractors, drawing from methods that have proven efficacy in conventional recipes. Specifically, we have evaluated three models that represent a diverse cross-section of state-of-the-art architectures. These models are ECAPA-TDNN [10], RawNet3 [11], and MFA-Conformer-based speaker embedding extractors [12, 13].
### Session Embedding Extraction
Within the domain of speaker verification, speaker embeddings efficiently capture the intrinsic attributes of a speaker's speech. However, these embeddings may also contain subtle information specific to the recording session, like background noise or recording device characteristics. Recognising the need to isolate such session-specific nuances from the core speaker features, we introduce the session network.
**Network architecture.** This network is attached to the speaker embedding network. Simplistically composed of several fully-connected layers, drop-out and GELU activation [14], the session network's primary role is to extract session information contained within the speaker embedding. Figure 1-(a) shows the detailed composition of the session network. It's designed to differentiate between the inherent speaker characteristics and the variabilities introduced by different recording sessions.
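As a rough illustration of this design, the snippet below sketches one possible pre-norm residual block stack in PyTorch (cf. Figure 1-(a)); the embedding dimensions, number of blocks and drop-out rate are illustrative choices, not the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class PreNormBlock(nn.Module):
    """Pre-norm residual block: LayerNorm -> Linear -> GELU -> Dropout -> + x."""
    def __init__(self, dim, p_drop=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fc = nn.Linear(dim, dim)
        self.act = nn.GELU()
        self.drop = nn.Dropout(p_drop)

    def forward(self, x):
        return x + self.drop(self.act(self.fc(self.norm(x))))

class SessionNetwork(nn.Module):
    """Maps a (frozen) speaker embedding to a session embedding."""
    def __init__(self, spk_dim=256, sess_dim=256, n_blocks=3, p_drop=0.1):
        super().__init__()
        self.blocks = nn.Sequential(*[PreNormBlock(spk_dim, p_drop) for _ in range(n_blocks)])
        self.out = nn.Linear(spk_dim, sess_dim)

    def forward(self, spk_emb):
        return self.out(self.blocks(spk_emb))

sess_net = SessionNetwork()
print(sess_net(torch.randn(4, 256)).shape)   # torch.Size([4, 256])
```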
**Training strategy.** For effective extraction of session information, it is paramount to train the network using a specially designed loss function. In addition, utilising datasets such as VoxCeleb [15, 16], which offer multiple sessions for individual speakers, is essential. For the session network, the training data comprises pairs, both positive and negative, drawn from the VoxCeleb datasets. These pairs are constructed by pairing two utterances. First, the utterances of a positive pair stem from the same session and the same speaker, with identical augmentation techniques applied. This setup ensures that any discrepancy in the embeddings is predominantly due to session variations. Conversely, a negative pair includes two utterances from the same speaker but from different sessions, with distinct augmentations applied. This highlights the impact of session differences manifested within speaker embeddings. To elaborate further, consider a speaker denoted as \(i\), randomly selected from our dataset. For our training, we aim to consider the speaker's utterances across two distinct sessions. Thus, for each chosen session, two random utterances are selected. This process gives us the notation \(u_{i,s,u}\), \(s\in\{0,1\}\), \(u\in\{0,1\}\), where \(i\) stands for the selected speaker, \(s\) denotes the session and \(u\) indicates the utterance. Now, for the definition of the loss function, we consider all possible combinations of sessions (\(s\)) and utterances (\(u\)). Our objective is to compute a loss value, \(\mathcal{L}\), which measures the difference or similarity between these combinations. This loss is determined as:
\[\mathcal{L}=\begin{cases}1-S(se(u_{i,s_{1},u_{1}}),se(u_{i,s_{2},u_{2}})),&\text{if }s_{1}=s_{2}\\ S(se(u_{i,s_{1},u_{1}}),se(u_{i,s_{2},u_{2}})),&\text{otherwise}\end{cases} \tag{1}\]
where \(S(\cdot,\cdot)\) is a function indicating cosine similarity between two embeddings and \(se(u)\) is session embedding from utterance \(u\). It's worth noting that we do not consider pairs from different speakers while training the session network, ensuring the focus remains strictly on session variability. The session information is directly inferred from the video ID in the VoxCeleb datasets. In our context, two utterances are considered to be from the same session if they originate from an identical video.
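A minimal sketch of this pair loss is given below; it assumes session embeddings have already been extracted for the four utterances \(u_{i,s,u}\), and the tensor shapes and averaging over combinations are illustrative only.

```python
import torch
import torch.nn.functional as F

def session_pair_loss(se, s1, u1, s2, u2):
    """Eq. (1): se[s][u] is the session embedding of utterance u_{i,s,u}."""
    sim = F.cosine_similarity(se[s1][u1], se[s2][u2], dim=-1)
    return (1.0 - sim) if s1 == s2 else sim

# toy session embeddings for one speaker: 2 sessions x 2 utterances x dim
se = torch.randn(2, 2, 256, requires_grad=True)
loss = torch.stack([session_pair_loss(se, s1, u1, s2, u2)
                    for s1 in (0, 1) for u1 in (0, 1)
                    for s2 in (0, 1) for u2 in (0, 1)]).mean()
loss.backward()   # in practice gradients flow only into the session network
print(float(loss))
```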
### Speaker Verification Using the Proposed Framework
In this section, we present our speaker verification procedure underpinned by our novel framework. In our study, we consider each verification trial to be constructed from a pair of utterances. From each of these utterances, two types of embeddings can be extracted: one that represents the characteristics of the speaker (the speaker embedding) and another that embodies the particularities of the recording session (the session embedding).
Figure 1: The session compensating framework for speaker verification. (a) Illustration of the session network. The network receives speaker embeddings and, via multiple pre-norm residual blocks, produces session embeddings. (b) Outline of the speaker verification process within the proposed framework: Upon receiving two utterances, the system extracts corresponding session and speaker embeddings. Similarities between these embeddings are then calculated. The computed similarities are subsequently input into the Q-stack classifier to determine whether the two utterances originate from a same speaker or two distinct speakers.
**Score-level compensator.** Once we have these embeddings, we can measure how similar they are. We compare the speaker embeddings from both utterances to get a "speaker similarity" score. This value essentially offers a metric that quantifies how alike the two utterances are, based on the characteristics of the speakers. On a parallel track, the session similarity is determined through the cosine similarity of the two session embeddings. This similarity shows how alike the two recordings are, based just on details from the recording session. Having obtained these similarities, the final step is to integrate them into a composite score that would be instrumental for verification. The formula we propose for this is:
\[\mathit{score}=spk-w\cdot sess, \tag{2}\]
where \(spk\) and \(sess\) indicate the speaker and session similarities, respectively, and \(w\) is a weighting factor for the session similarity. By subtracting a weighted session similarity from the speaker similarity, we aim to rectify any biases present in the speaker similarity attributed to session-related variations. Thus, the goal is to compensate for the session-induced biases, ensuring that the speaker's inherent characteristics shine through without the unexpected influence of session-specific attributes. To discern the impact of the session similarity on speaker verification, we carried out simple experiments utilising embeddings derived from the three embedding extractors. The focal point of this experiment was to adjust the weight value and, subsequently, observe how it influenced the performance of speaker verification. We conducted our tests using the VoxCeleb1 original test set, and the results are shown in Figure 2. The results reveal that the simple action of subtracting the session similarity can reduce the speaker verification error.
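In code, the score-level compensation of Eq. (2) is a one-liner once the two similarities are available; the small sketch below, with random stand-in embeddings and an arbitrary sweep over \(w\), only illustrates the mechanics.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def compensated_score(spk1, spk2, sess1, sess2, w):
    spk = cosine(spk1, spk2)          # speaker similarity
    sess = cosine(sess1, sess2)       # session similarity
    return spk - w * sess             # Eq. (2)

rng = np.random.default_rng(0)
spk1, spk2, sess1, sess2 = (rng.standard_normal(256) for _ in range(4))
for w in (0.0, 0.1, 0.2, 0.3):
    print(w, compensated_score(spk1, spk2, sess1, sess2, w))
```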
**Q-stack-based compensator.** Nonetheless, there exists a limitation to the above approach. The foundational premise of the approach is predicated on the assumption that the correlation between the speaker and session similarities is linear. However, in practical scenarios, this relationship might exhibit a more complex nature, suggesting the necessity for a sophisticated approach to accurately compensate for these interactions. To address this, we utilised an additional classifier which takes in both the speaker and session similarities and makes a binary decision. Essentially, it determines whether the two utterances originate from the same speaker or not. This new approach allows us to capture the non-linear relationship between the two similarities. The concept of this classifier is derived from a framework termed "Q-stack" [17]. The Q-stack classifier is employed to process two separate sets of similarities derived from two utterances, with the primary objective of deciding whether these utterances are from an identical speaker or not. The operation of the Q-stack-based framework is as follows. First, it takes in \(200\) similarities; half represents speaker similarities, and the other half stands for session similarities. These specific quantities originate from the well-known VoxCeleb trainer's recipe 1. This procedure extracts \(10\) embeddings from an individual utterance through a sliding window technique. Consequently, when comparing a pair of utterances, the possible combination results in \(10\!\times\!10\) similarities, leading to a combined total of \(100\) similarities for each type of embedding. For a more detailed architecture of the Q-stack, it is structured with three fully-connected layers, drop-out, and non-linear activation. These layers consist of \(400\) nodes, except the output layer with only two nodes. All hidden nodes are activated by leaky ReLU function for non-linearity. Figure 1-(b) shows the overall operation of the proposed framework, including the structure of the Q-stack classifier.
Footnote 1: [https://github.com/clovai/voxceleb_trainer](https://github.com/clovai/voxceleb_trainer)
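A minimal PyTorch sketch of a classifier with this topology is given below; the dropout probability is an assumption, since it is not reported above, and the module is only meant to illustrate the described structure.

```python
import torch
import torch.nn as nn

class QStackClassifier(nn.Module):
    """Binary classifier over concatenated speaker and session similarities."""
    def __init__(self, n_similarities=200, hidden=400, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_similarities, hidden), nn.LeakyReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.LeakyReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 2),  # same-speaker / different-speaker logits
        )

    def forward(self, sims):
        # sims: (batch, 200) = 100 speaker + 100 session similarities per trial
        return self.net(sims)

model = QStackClassifier()
logits = model(torch.randn(8, 200))  # a batch of 8 trials
print(logits.shape)                  # torch.Size([8, 2])
```

The same module can also serve as the ensemble classifier discussed later by setting `n_similarities=600` and concatenating the similarities produced by the three embedding extractors.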
## 3 Experiments
Experiments were conducted to evaluate the proposed speaker verification framework on four independent datasets. The first two subsections describe implementation details and evaluation protocols across all experiments, while subsequent subsections describe experiments on the proposed system configuration.
### Implementation details
For the evaluation of the proposed system, various datasets and models were employed. We selected multiple datasets for the training process: VoxCeleb1&2 [16, 18], VOiCES [19], CommonVoice [20] and telephone speech from the NIST SRE corpora. The ECAPA-TDNN and RawNet3 models were trained using the VoxCeleb1&2 datasets. The Conformer-based system was trained leveraging the VoxCeleb1&2, NIST SRE 2004, 2006, and 2008 [21, 22], and CommonVoice datasets. The Q-stack system, distinctively, was trained on the test set of the VOiCES dataset. For augmentation, we used reverberations and noises from the simulated RIR and MUSAN datasets [23, 24]. The augmentation configurations follow those of [25].
Figure 2: Variation in speaker verification performance on the original VoxCeleb1 test set for three distinct embedding extractors. The graph shows the influence of the weight \(w\) of the session similarity (\(sess\)) on each extractor's performance. A clear trend emerges, highlighting the role of session similarity as a compensatory factor across all models evaluated.
### Evaluation protocol
We evaluated performance using the VoxCeleb1 original test set (Vox1-O), the 10sec-10sec protocol of the NIST SRE 2010 evaluation (N-SRE) [26], and two custom combined datasets. The initial evaluation of our system was carried out using two primary datasets: Vox1-O and N-SRE. These datasets contain audio data from varied sources and were chosen because they internally include session variability. To extend the evaluation, we introduced two custom datasets, VN-Mix and VC-Mix, crafted to test the systems' performance under challenging scenarios. First, VN-Mix (VoxCeleb and NIST) was composed of trials from Vox1-O and N-SRE. A notable aspect of this combination is the intrinsic domain difference between the two datasets. Specifically, N-SRE includes telephone speech while Vox1-O contains YouTube video clips. Given this contrast in source domains, it is hypothesised that a similarity bias might arise due to these inherent differences. For VC-Mix (VoxCeleb and VoxConverse), we combined positive pairs from Vox1-O with negative pairs from the "single" protocol of VoxConverse [27], as referenced in [28]. The positive pairs from Vox1-O comprise utterances from multiple sessions. In contrast, the negative pairs from VoxConverse are restricted to a single session. This composition presents a challenging scenario with both hard positive and hard negative pairs. In simple terms, VC-Mix combines two types of pairs: one with the same speaker from different sessions and another with different speakers from a single session. The structure of VC-Mix was inspired by the dataset used in the VoxSRC 2022 challenge [29]. All telephone speech is up-sampled from 8 kHz to 16 kHz. The performance metric used to compare the models was the well-known equal error rate (EER).
### Comparison with single system
In Table 1, we present a comprehensive comparison of the baseline system against our proposed systems across varied models and evaluation datasets. A key observation is the robustness and the improvement in EER offered by the proposed systems, which use session embeddings. Focusing on the "score comp" row, the results show the positive impact of session compensation using equation (2). The value of the weighting factor \(w\) was determined using test trials from the VOiCES dataset. Furthermore, the "Q-stack" row shows a further improvement from the additional classifier. This suggests that the classifier helps model a non-linear relationship between session and speaker similarities.
### Comparison with ensemble system
Table 2 shows the impact of different ensemble techniques on model performance. A conventional ensemble averages multiple scores from various models. With our Q-stack system, however, the ensemble is more sophisticated: instead of merely averaging, it takes the scores from the different models as joint input. In particular, we increased the number of input scores from \(200\) to \(600\) when combining the three models. The experimental results highlight the superior performance of the Q-stack-based ensemble, especially on the N-SRE dataset and on VN-Mix, which contains the corresponding trials. Conventional ensemble techniques, on the other hand, exhibited a decrease in performance on the N-SRE dataset, attributed to some models' limited exposure to telephone speech during their training.
## 4 Conclusion
In the domain of speaker verification, session variability is a well-known factor that can lead to performance degradation. Traditional methods often aim to modify or enhance the speaker embedding to handle this issue. Contrary to this, we suggest a novel approach; rather than adjusting the speaker embedding, we propose that session information should be treated as a separate entity. Comprehensive experiments, spanning a variety of models and datasets, demonstrate that the proposed method not only mitigates the effects of session variability but also has valuable implications for model ensemble and score calibration.
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{EER(\%)} & \multicolumn{4}{c|}{RawNet3} & \multicolumn{4}{c|}{ECAPA-TDNN} & \multicolumn{4}{c}{Conformer} \\ & Vox1-O & N-SRE & VN-Mix & VC-Mix & Vox1-O & N-SRE & VN-Mix & VC-Mix & Vox1-O & N-SRE & VN-Mix & VC-Mix \\ \hline Baseline & 1.11 & 13.52 & 10.51 & 3.32 & 0.77 & 11.29 & 6.90 & 2.17 & 0.70 & 8.70 & 3.48 & 1.99 \\ \hline Score comp & 1.12 & 13.33 & 8.91 & 3.05 & 0.75 & 10.92 & 5.84 & 2.02 & 0.69 & 8.58 & 3.43 & 1.88 \\ Q-stack & 1.06 & 12.98 & 7.34 & 3.03 & 0.71 & 10.64 & 4.22 & 1.98 & 0.65 & 8.39 & 3.34 & 1.51 \\ \hline \hline \end{tabular}
\end{table}
Table 1: A comparison of the performances using different models and evaluation sets. “Baseline” shows results from the usual speaker embedding. “Score comp” shows the outcomes when session variability is compensated at the score level. “Q-stack” denotes results when session variability is addressed using session embedding complemented by an additional classifier.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline EER(\%) & Vox1-O & N-SRE & VN-Mix & VC-Mix \\ \hline Single best & 0.70 & 8.70 & 3.48 & 1.99 \\ Averaging scores & 0.63 & 8.88 & 5.16 & 1.97 \\ Proposed & 0.56 & 8.14 & 3.17 & 1.44 \\ \hline \hline \end{tabular}
\end{table}
Table 2: A comparison of the effect of the ensemble methods. “Single best” shows the top-performing model on its own. “Averaging scores” displays results when we combine scores from several models in the usual way. “Proposed” gives results using our new ensemble method with Q-stack. |
2309.04552 | Motion Compensated Unsupervised Deep Learning for 5D MRI | We propose an unsupervised deep learning algorithm for the motion-compensated
reconstruction of 5D cardiac MRI data from 3D radial acquisitions. Ungated
free-breathing 5D MRI simplifies the scan planning, improves patient comfort,
and offers several clinical benefits over breath-held 2D exams, including
isotropic spatial resolution and the ability to reslice the data to arbitrary
views. However, the current reconstruction algorithms for 5D MRI take very long
computational time, and their outcome is greatly dependent on the uniformity of
the binning of the acquired data into different physiological phases. The
proposed algorithm is a more data-efficient alternative to current
motion-resolved reconstructions. This motion-compensated approach models the
data in each cardiac/respiratory bin as Fourier samples of the deformed version
of a 3D image template. The deformation maps are modeled by a convolutional
neural network driven by the physiological phase information. The deformation
maps and the template are then jointly estimated from the measured data. The
cardiac and respiratory phases are estimated from 1D navigators using an
auto-encoder. The proposed algorithm is validated on 5D bSSFP datasets acquired
from two subjects. | Joseph Kettelkamp, Ludovica Romanin, Davide Piccini, Sarv Priya, Mathews Jacob | 2023-09-08T18:46:42Z | http://arxiv.org/abs/2309.04552v1 | # Motion Compensated Unsupervised Deep Learning for 5D MRI+
###### Abstract
We propose an unsupervised deep learning algorithm for the motion-compensated reconstruction of 5D cardiac MRI data from 3D radial acquisitions. Ungated free-breathing 5D MRI simplifies the scan planning, improves patient comfort, and offers several clinical benefits over breath-held 2D exams, including isotropic spatial resolution and the ability to reslice the data to arbitrary views. However, the current reconstruction algorithms for 5D MRI take very long computational time, and their outcome is greatly dependent on the uniformity of the binning of the acquired data into different physiological phases. The proposed algorithm is a more data-efficient alternative to current motion-resolved reconstructions. This motion-compensated approach models the data in each cardiac/respiratory bin as Fourier samples of the deformed version of a 3D image template. The deformation maps are modeled by a convolutional neural network driven by the physiological phase information. The deformation maps and the template are then jointly estimated from the measured data. The cardiac and respiratory phases are estimated from 1D navigators using an auto-encoder. The proposed algorithm is validated on 5D bSSFP datasets acquired from two subjects.
Keywords:Free Running MRI 5D MRI Cardiac MRI.
## 1 Introduction
Magnetic Resonance Imaging (MRI) is currently the gold standard for assessing cardiac function. It provides detailed images of the heart's anatomy and enables accurate measurements of parameters such as ventricular volumes, ejection fraction, and myocardial mass. Current clinical protocols, which rely on serial breath-held imaging of the different cardiac slices with different views, often require long scan times and are associated with reduced patient comfort. Compressed sensing [1], deep learning [10, 6], and motion-compensated
approaches [13] were introduced to reduce the breath-hold duration in cardiac CINE MRI. Unfortunately, many subject groups, including pediatric and older subjects, cannot comply with even the shorter breath-hold durations.
5D free-breathing MRI approaches that rely on 3D radial readouts [3, 11] have been introduced to overcome the above challenges. These methods resolve the respiratory and cardiac motion from either the center of k-space or Superior-Inferior (SI) k-space navigators. The k-space data is then binned into different cardiac/respiratory phases and jointly reconstructed using compressed sensing. The main benefit of this motion-resolved strategy is the ability to acquire the whole heart with isotropic spatial resolution as high as 1mm\({}^{3}\). This approach allows the images to be reformatted into different views to visualize specific anatomical regions at different cardiac and/or respiratory phases. Despite the great potential of 5D MRI, current methods have some challenges that limit their use in routine clinical applications. Firstly, the motion-resolved compressed sensing reconstruction is very computationally intensive, and it can take several hours to have a dynamic 3D volume. And secondly, compressed sensing reconstructions require fine tuning of several regularization parameters, which greatly affect the final image quality, depending on the undersampling factor and the binning uniformity.
The main focus of this work is to introduce a motion-compensated reconstruction algorithm for 5D MRI. The proposed approach models the images at every time instance as a deformed version of a static image template. Such an image model may not be a good approximation in 2D schemes [13], where the organs may move in and out of the slice. However, the proposed model is more accurate for the 3D case. We introduce an auto-encoder to estimate the cardiac and respiratory phases from the superior-inferior (SI) k-t space navigators. We disentangle the latent variables to cardiac and respiratory phases by using the prior information of the cardiac and respiratory rates. The latent variables allow us to bin the data into different cardiac and respiratory phases. We use an unsupervised deep learning algorithm to recover the image volumes from the clustered data. The algorithm models the deformation maps as points on a smooth low-dimensional manifold in high dimensions, which is a non-linear function of the low-dimensional latent vectors. We model the non-linear mapping by a Convolutional Neural Network (CNN). When fed with the corresponding latent vectors, this CNN outputs the deformation maps corresponding to a specific cardiac or respiratory phase. We learn the parameters of the CNN and the image template from the measured k-t space data. We note that several manifold based approaches that model the images in the time series by a CNN were introduced in the recent years. [15, 5, 9]. All of these methods rely on motion resolved reconstruction, which is conceptually different from the proposed motion compensated reconstruction.
We validate the proposed scheme on cardiac MRI datasets acquired from two healthy volunteers. The results show that the approach is capable of resolving the cardiac motion, while offering similar image quality for all the different
phases. In particular, the motion-compensated approach can combine the image information from all the motion states to obtain good quality images.
## 2 Methods
### Acquisition scheme
In vivo acquisitions were performed on a 1.5T clinical MRI scanner (MAGNETOM Sola, Siemens Healthcare, Erlangen, Germany). The free-running research sequence used in this work is a bSSFP sequence, in which all chemical shift-selective fat saturation pulses and ramp-up RF excitations were removed, in order to reduce the specific absorption rate (SAR) and to enable a completely uninterrupted acquisition [8]. K-space data were continuously sampled using a 3D golden angle kooshball phyllotaxis trajectory [7], interleaved with the acquisition of a readout oriented along the superior-inferior (SI) direction for cardiac and respiratory self-gating [11]. The main sequence parameters were: radio frequency excitation angle of 55\({}^{\circ}\) with an axial slab-selective sinc pulse, resolution of 1.1 mm\({}^{3}\), FOV of 220 mm\({}^{3}\), TE/TR of 1.87/3.78 ms, and readout bandwidth of 898 Hz/pixel. The total fixed scan time was 7:58 minutes.
### Forward model
We model the measured k-space data at the time instant \(t\) as the multichannel Fourier measurements of \(\boldsymbol{\rho}_{t}=\boldsymbol{\rho}(\mathbf{r},t)\), which is the image volume at the time instance \(t\):
\[\mathbf{b}_{t}=\underbrace{\mathbf{F}_{\mathbf{k}_{t}}\,\mathbf{C}\,\boldsymbol {\rho}_{t}}_{\mathcal{A}_{t}(\boldsymbol{\rho}_{t})} \tag{1}\]
Here, \(\mathbf{C}\) denotes the multiplication of the images by the multi-channel coil sensitivities, while \(\mathbf{F}_{\mathbf{k}}\) denotes the multichannel Fourier operator. \(\mathbf{k}_{t}\) denotes the k-space trajectory at the time instant \(t\). In this work, we group 22 radial spokes, corresponding to a temporal resolution of 88 ms.
An important challenge associated with the bSSFP acquisition without intermittent fat saturation pulses is the relatively high fat signal compared to the myocardium and blood pool. Traditional parallel MRI and coil combination strategies often result in significant streaking artifacts from the fat onto the myocardial regions, especially in the undersampled setting considered in this work. We used the coil combination approach introduced in [4] to obtain virtual channels that are maximally sensitive to the cardiac region. A spherical region covering the heart was manually selected as the region of interest (ROI), while its complement, multiplied by the distance function to the heart, was chosen as the noise mask. We chose the number of virtual coils that preserves 75% of the energy within the ROI. This approach minimizes the strong fat signals, which are distant from the myocardium. We used the JSENSE algorithm [14, 12] to compute the sensitivity maps of the virtual coils.
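A simplified, SVD-based stand-in for this coil-compression step is sketched below; it is not the exact method of [4], but it illustrates how a set of virtual coils can be chosen so that they retain a target fraction of the signal energy inside the cardiac ROI.

```python
import numpy as np

def roi_virtual_coils(coil_images, roi_mask, energy_frac=0.75):
    """coil_images: (n_coils, nx, ny, nz) complex coil images
    roi_mask:    (nx, ny, nz) boolean mask of the spherical ROI around the heart
    Returns an (n_coils, k) compression matrix whose k virtual coils
    retain the requested fraction of the energy inside the ROI."""
    n_coils = coil_images.shape[0]
    X = coil_images.reshape(n_coils, -1)[:, roi_mask.ravel()]  # ROI voxels only
    U, s, _ = np.linalg.svd(X, full_matrices=False)            # coil-space SVD
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(energy, energy_frac)) + 1          # virtual coils needed
    return U[:, :k]
```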
### Image and motion models
An overview of the proposed scheme is shown in Fig. 1. The recovery of \(\rho_{t}\) from only a few measurements \(\mathbf{b}_{t}\) per frame is ill-posed. To constrain the recovery, we model \(\mathbf{\rho}_{t}\) as a deformed version of a static image template \(\mathbf{\eta}(\mathbf{r})\):
\[\rho(\mathbf{r},t)=\mathcal{I}\left[\mathbf{\eta},\phi_{t}(\mathbf{r})\right] \tag{2}\]
Here, \(\mathbf{\phi}_{t}\) is the deformation map and the operator \(\mathcal{I}\) denotes the deformation of \(\mathbf{\eta}\). We implement (2) using cubic B-spline interpolation. This approach allows us to use the k-space data from all the time points to update the template, once the motion maps \(\mathbf{\phi}_{t}\) are estimated.
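The sketch below shows one possible implementation of the warping operator \(\mathcal{I}\) for a 3D template, using PyTorch's trilinear grid sampling as a stand-in for the cubic B-spline interpolation used here; the array shapes and the displacement convention are illustrative.

```python
import torch
import torch.nn.functional as F

def warp_template(template, disp):
    """template: (1, 1, D, H, W) static image template eta
    disp:     (1, 3, D, H, W) displacement field in normalized [-1, 1] units,
              with channels ordered (x, y, z) to match grid_sample."""
    _, _, D, H, W = template.shape
    zs, ys, xs = (torch.linspace(-1, 1, n) for n in (D, H, W))
    z, y, x = torch.meshgrid(zs, ys, xs, indexing="ij")
    identity = torch.stack((x, y, z), dim=-1).unsqueeze(0)  # (1, D, H, W, 3)
    grid = identity + disp.permute(0, 2, 3, 4, 1)           # deformed sampling grid
    return F.grid_sample(template, grid, mode="bilinear", align_corners=True)

# rho_t = I(eta, phi_t): warp the template with one deformation field.
rho_t = warp_template(torch.rand(1, 1, 32, 32, 32),
                      0.05 * torch.randn(1, 3, 32, 32, 32))
```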
Classical MoCo approaches use image registration to estimate the motion maps \(\mathbf{\phi}_{t}\) from approximate (e.g. low-resolution) reconstructions of the images \(\rho(\mathbf{r},t)\). However, the quality of the motion estimates depends on the quality of the reconstructed images, which is often low when we aim to recover the images at a fine temporal resolution (e.g. 88 ms).
We propose to estimate the motion maps directly from the measured \(k-t\) space data. In particular, we estimate the motion maps \(\mathbf{\phi}_{t}\) such that the multi-channel measurements of \(\rho(\mathbf{r},t)\) specified by (2) match the measurements \(\mathbf{b}_{t}\). We also estimate the template \(\mathbf{\eta}\) from the k-t space data of all the time points. To constrain the recovery of the deformation maps, we model the deformation maps as the output of a convolutional neural network
\[\phi_{t}=\mathcal{G}_{\theta}[\mathbf{z}_{t}],\]
in response to low-dimensional latent vectors \(\mathbf{z}_{t}\). Here, \(\mathcal{G}_{\theta}\) is a convolutional neural network, parameterized by the weights \(\theta\). We note that this approach constrains the deformation maps as points on a low-dimensional manifold. They are obtained as non-linear mappings of the low-dimensional latent vectors \(\mathbf{z}_{t}\), which capture the motion attributes. The non-linear mapping itself is modeled by the CNN.
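A small convolutional generator of this kind could look like the following sketch; the layer sizes and output grid are illustrative and are not the settings used in this work.

```python
import torch
import torch.nn as nn

class DeformationGenerator(nn.Module):
    """G_theta: maps a low-dimensional latent vector z_t to a dense
    3-channel deformation field on a coarse 3D grid."""
    def __init__(self, latent_dim=3, base=32):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 4 * base * 4 * 4 * 4)
        self.up = nn.Sequential(
            nn.ConvTranspose3d(4 * base, 2 * base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(2 * base, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(base, 3, 4, stride=2, padding=1),  # 3 displacement channels
        )

    def forward(self, z):
        x = self.fc(z).view(z.shape[0], -1, 4, 4, 4)
        return self.up(x)  # (batch, 3, 32, 32, 32)

phi_t = DeformationGenerator()(torch.randn(1, 3))  # one deformation map per latent code
```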
### Estimation of latent vectors from SI navigators
We propose to estimate the latent vectors \(\mathbf{z}_{t}\) from the SI navigators using an auto-encoder. In this work, we applied a low pass filter with cut-off frequency of 2.8 Hz to the SI navigators to remove high-frequency oscillations. Similarly, an eighth-degree Chebyshev polynomial is fit to each navigator voxel and is subtracted from the signal to remove drifts.
The auto-encoder involves an encoder that generates the latent vectors \(\mathbf{z}_{t}=\mathcal{E}_{\varphi}(\mathbf{y}_{t})\), where \(\mathbf{y}_{t}\) are the navigator signals. The decoder reconstructs the navigator signals as \(\mathbf{y}_{t}=\mathcal{D}_{\psi}(\mathbf{z}_{t})\). In this work, we restrict the dimension of the latent space to three: two dimensions corresponding to respiratory motion and one corresponding to cardiac motion. To encourage the disentangling of the latent vectors into respiratory and cardiac signals, we use the prior information on the range of cardiac and
respiratory frequencies as in [2]. We solve for the auto-encoder parameters from the navigator signals of each subject as
\[\{\varphi^{*},\psi^{*}\}=\arg\min_{\varphi,\psi}\left\|\mathcal{F}\left\{\mathcal{D}_{\psi}\left(\underbrace{\mathcal{E}_{\varphi}(\mathbf{Y})}_{\mathbf{Z}}\right)-\mathbf{Y}\right\}\right\|_{\ell_{1}}+\lambda\ \left\|\mathbf{Z}\bigotimes\mathcal{B}\right\|_{2}^{2} \tag{3}\]
Here, \(\mathbf{Z}\in\mathbb{R}^{3\times T}\) and \(\mathbf{Y}\) are matrices whose columns are the latent vectors and the navigator signals at different time points. \(\mathcal{F}\) is the Fourier transformation in the time domain. \(\bigotimes\) denotes the convolution of the latent vectors with band-stop filters with appropriate stop bands. In particular, the stopband of the respiratory latent vectors was chosen to be \(0.05-0.7\) Hz, while the stopband for the cardiac latent vector was chosen as the complement of the respiratory stopband. We observe that the median-seeking \(\ell_{1}\) loss in the Fourier domain offers improved performance compared to the standard \(\ell_{2}\) loss used in conventional auto-encoders.
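A sketch of the training objective (3) is given below. It replaces the explicit band-stop convolutions with an equivalent frequency-domain mask, and the navigator sampling rate is an assumed illustrative value rather than the acquisition's actual rate.

```python
import torch

def autoencoder_loss(y, y_hat, z, fs=11.4, resp_band=(0.05, 0.7), lam=1.0):
    """y, y_hat: (T, n_nav) navigator signals and their reconstructions
    z:        (T, 3) latent vectors; columns 0-1 respiratory, column 2 cardiac
    fs:       navigator sampling rate in Hz (illustrative)"""
    # l1 data-consistency term in the temporal Fourier domain.
    data_term = torch.fft.rfft(y_hat - y, dim=0).abs().sum()
    freqs = torch.fft.rfftfreq(y.shape[0], d=1.0 / fs)
    in_resp = (freqs >= resp_band[0]) & (freqs <= resp_band[1])
    Z = torch.fft.rfft(z, dim=0).abs()
    # Penalize respiratory latents outside their band and the cardiac latent inside it.
    band_pen = (Z[~in_resp, :2] ** 2).sum() + (Z[in_resp, 2] ** 2).sum()
    return data_term + lam * band_pen
```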
### Motion compensated image recovery
Once the auto-encoder parameters \(\varphi,\psi\) described in (3) are estimated from the navigator signals of the subject, we derive the latent vectors as \(\mathbf{Z}=\mathcal{E}_{\varphi^{*}}(\mathbf{Y})\). Using the latent vectors, we pose the joint recovery of the static image template \(\boldsymbol{\eta}(\mathbf{r})\) and the deformation maps as
\[\{\boldsymbol{\eta}^{*},\theta^{*}\}=\arg\min_{\boldsymbol{\eta},\theta}\sum_ {t=1}^{T}\|\mathcal{A}_{t}\left(\rho(\mathbf{r},t)\right)-\mathbf{b}_{t}\|^{2 }\ \ \text{where}\ \ \rho(\mathbf{r},t)=\mathcal{I}\left(\boldsymbol{\eta},\mathcal{G}_{\theta}[ \mathbf{z}_{t}]\right) \tag{4}\]
The above optimization scheme can be solved using stochastic gradient optimization. Following optimization, one can generate the real-time images shown in Fig. 3 and Fig. 4 as \(\mathcal{I}\left(\boldsymbol{\eta}^{*},\mathcal{G}_{\theta^{*}}[\mathbf{z}_{t}]\right)\).
The optimization scheme described in (4) requires \(T\) non-uniform fast Fourier transform steps per epoch. When the data is recovered with a high temporal resolution, this approach translates to a high computational complexity. To reduce computational complexity, we introduce a clustering scheme. In particular, we use k-means clustering to group the data to \(N<<T\) clusters. This approach allows us to pool the k-space data from multiple time points, all with similar latent codes.
\[\{\boldsymbol{\eta}^{*},\theta^{*}\}=\arg\min_{\boldsymbol{\eta},\theta}\sum_ {n=1}^{N}\|\mathcal{A}_{n}\left(\rho(\mathbf{r},n)\right)-\mathbf{b}_{n}\|^{2 }\ \ \text{where}\ \ \rho(\mathbf{r},n)=\mathcal{I}\left(\boldsymbol{\eta},\mathcal{G}_{\theta}[ \mathbf{z}_{n}]\right) \tag{5}\]
The above approach is very similar to (4). Here, \(\mathbf{b}_{n}\), \(\mathcal{A}_{n}\), and \(\mathbf{z}_{n}\) are the grouped k-space data, the corresponding forward operator, and the centroid of the cluster, respectively. Note that once \(N\to T\), both approaches become equivalent. In this work, we used \(N=30\). Once the training is done, one can still generate the real-time images as \(\mathcal{I}\left(\boldsymbol{\eta},\mathcal{G}_{\theta}[\mathbf{z}_{t}]\right)\).
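The clustering step can be sketched as follows; the grouping of k-space data is shown only schematically, since the actual forward operators \(\mathcal{A}_{n}\) involve non-uniform FFTs, and the placeholder latent trajectory is purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_latents(Z, n_clusters=30, seed=0):
    """Z: (T, 3) latent vectors from the auto-encoder.
    Returns the cluster centroids (N, 3) and a label per time frame (T,)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(Z)
    return km.cluster_centers_, km.labels_

# Frames with similar motion states share a centroid, so their k-space
# spokes can be pooled into one measurement set b_n per cluster.
Z = np.random.randn(5000, 3)           # placeholder latent trajectory
centroids, labels = cluster_latents(Z)
frames_in_cluster_0 = np.where(labels == 0)[0]
```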
### Motion resolved 5D reconstruction for comparison
We compare the proposed approach against a compressed sensing 5D reconstruction. In particular, we used the SI navigators to bin the data into 16 bins, consisting of four cardiac and four respiratory phases. We use a total variation regularization similar to [2] to constrain the reconstructions. We determined the regularization parameter manually to obtain the best reconstructions.
We note that the dataset with a 6.15-minute acquisition corresponds to a highly undersampled setting. In addition, because this dataset was not acquired with intermittent fat saturation pulses, it suffers from streaking artifacts that corrupt the reconstructions.
## 3 Results
We show the results from the two normal volunteers in Fig. 3 and 4, respectively. The images correspond to 2-D slices extracted from the 3D volume, corresponding to different cardiac and respiratory phases. We also show the time profile of the real-time reconstructions \(\rho(\mathbf{r},t)=\mathcal{I}\left(\boldsymbol{\eta},\mathcal{G}_{\theta}[ \mathbf{z}_{t}]\right)\) along the red line shown in the top row. We note that the approach can capture the cardiac and respiratory motion in the data. The different phase images shown in the figure were extracted manually from the real-time movies.
Figure 1: Overview of the proposed reconstruction algorithm. In the first step shown in (a), we estimate the latent variables that capture the motion in the data using a constrained auto-encoder, as described in Fig. 3. The auto-encoder minimizes a cost function, which is the sum of an \(\ell_{1}\) data consistency term and a prior involving the cardiac and respiratory frequency ranges. To reduce the computational complexity of the image reconstruction, we cluster the latent space using the k-means algorithm as shown in (b). The cluster centers are fed in as inputs to the CNN denoted by \(\mathcal{G}_{\theta}\), which outputs the deformation maps \(\mathcal{G}_{\theta}[\mathbf{z}_{n}]\). We jointly optimize for both the template \(\eta\) and the parameters \(\theta\) of the generator.
Figure 3: Results from the first subject. (a) The top row shows a 2-D slice of the reconstructed 3D volume at diastole and systole, obtained using the proposed motion compensated approach. The bottom row shows the motion-resolved compressed sensing recovery of the same data. (b) shows the 1D projection versus time profile of the reconstructed datasets using the motion compensated (top) and motion resolved (bottom) approaches.
Figure 2: Latent vectors estimated from the SI navigators (bottom curves)[11]. We note that the orange and the green curves estimated using the auto-encoder roughly follow the respiratory motion, while the blue curves capture the cardiac motion.
## 4 Discussion
The comparisons in Fig. 3 and 4 show that the proposed approach is able to offer improved reconstructions, where the cardiac phases are well-resolved. We note that the motion-resolved reconstructions of the different phases have different image quality, depending on the number of spokes in the specific phases. By contrast, the proposed motion-compensated reconstructions are able to combine the data from different motion states; the improved data efficiency translates to reconstructions with reduced streaking artifacts. Additionally, the auto-encoder accurately characterized the SI navigator and disentangled the cardiac and respiratory latent vectors (Fig. 2).
We note that the comparison in this work is preliminary. The main focus of this work is to introduce the proposed motion-compensated reconstruction algorithm and the auto-encoder approach to estimate the latent vectors and to demonstrate its utility in 5D MRI. In our future work, we will focus on rigorous studies, including comparisons with 2D CINE acquisitions.
Figure 4: Results from the second subject. (a) The top row shows a 2-D slice of the reconstructed 3D volume at diastole and systole, obtained using the proposed motion compensated approach. The bottom row shows the motion-resolved compressed sensing recovery of the same data. (b) shows the 1D projection versus time profile of the reconstructed datasets using the motion compensated (top) and motion resolved (bottom) approaches. |
2309.07073 | The arions generation by magnetodipole waves of pulsars and magnetars in
a constant magnetic field | The influence of the gravitational fields of pulsars and magnetars on the
arion emission during the propagation of magnetodipole waves in a constant
magnetic field has been evaluated.
The solution of the equation was obtained and the flux of arions emitted by
magnetodipole waves during their propagation in a constant magnetic field was
found. It is shown that the amplitude of the born arion wave at a distance from
the source of magnetodipole radiation of a pulsar or magnetar $(r\to\infty)$ in
the considered case tends to a constant value. The intensity of the arion
emission in the solid angle element and the amount of arion energy
$\overline{I}$, emitted in all directions per unit time grow quadratically with
increasing distance, traveled by the magnetodipole radiation of a pulsar or
magnetar in a constant magnetic field.
Such growth of the energy of the born arion wave is due to the fact that in
the considered problem constant magnetic field is defined in the whole space.
In reality, the galactic and intergalactic magnetic fields can be represented
in this form only in regions of space of finite dimensions, outside of which
the force lines of their induction vector are curved. Therefore, it is possible
to apply these results only in a region of space for which $r\leq
L_{coh}<\infty$, where $L_{coh}$ is the coherence length, the distance at which
the force lines of the induction vector can be considered as straight lines. An
estimate for the value of the coupling constant of photons with arions is
obtained. | V. I. Denisov, G. A. Dantsev, V. I. Priclonsky, I. P. Denisova, O. N. Gavrish | 2023-09-09T21:25:29Z | http://arxiv.org/abs/2309.07073v1 | # The arions generation by magnetodipole waves
###### Abstract
The influence of the gravitational fields of pulsars and magnetars on the arion emission during the propagation of magnetodipole waves in a constant magnetic field has been evaluated.
The solution of the equation was obtained and the flux of arions emitted by magnetodipole waves during their propagation in a constant magnetic field was found. It is shown that the amplitude of the born arion wave at a distance from the source of magnetodipole radiation of a pulsar or magnetar (\(r\rightarrow\infty\)) in the considered case tends to a constant value. The intensity of the arion emission in the solid angle element and the amount of arion energy \(\overline{I}\), emitted in all directions per unit time grow quadratically with increasing distance, traveled by the magnetodipole radiation of a pulsar or magnetar in a constant magnetic field.
Such growth of the energy of the born arion wave is due to the fact that in the considered problem constant magnetic field is defined in the whole space. In reality, the galactic and intergalactic magnetic fields can be represented in this form only in regions of space of finite dimensions, outside of which the force lines of their induction vector are curved. Therefore, it is possible to apply these results only in a region of space for which \(r\leq L_{coh}<\infty\), where \(L_{coh}\) is the coherence length, the distance at which the force lines of the induction vector can be considered as straight lines. An estimate for the value of the coupling constant of photons with arions is obtained.
## 1. Introduction
In the scientific literature of recent years, the processes of photoproduction of various axion-like particles beyond the Standard Model, namely arions [1,2], axions [3-6], and dilatons [7-10], are actively discussed. These processes are currently regarded as the most realistic means by which axion-like particles could be registered [11-13] under laboratory and astrophysical conditions.
The arion is a strictly massless pseudoscalar Goldstone particle \(a\), which was introduced in 1982, in the papers [14-16] of Prof. A. A. Anselm and his co-authors.
The density of the Lagrangian function for the arion field, which is interacting with the electromagnetic field, is usually written in the canonical form:
\[L=\frac{\sqrt{-g}}{2}g^{nm}\frac{\partial a}{\partial x^{n}}\frac{\partial a} {\partial x^{m}}-\frac{\sqrt{-g}}{16\pi}F_{nm}F^{nm}-\frac{g_{a\gamma}\sqrt{- g}}{4}F_{nm}\tilde{F}^{nm}a, \tag{1}\]
where \(a\) is the pseudoscalar field of the arion, \(g_{a\gamma}\) is the coupling constant of the arion with the electromagnetic field, \(F_{nm}\) is the electromagnetic field tensor, \(g\) is the determinant of the metric tensor, \(\tilde{F}^{nm}=E^{mnik}F_{ik}/2\), \(E^{nmik}=e^{nmik}/\sqrt{-g}\) is the axial absolutely antisymmetric Levi-Civita tensor, and \(e^{nmik}\) is the axial absolutely antisymmetric Levi-Civita symbol, with \(e^{0123}=+1\).
When studying the processes of arion generation under astrophysical conditions, the influence of the gravitational field cannot, in general, be neglected. Therefore, first of all, let us estimate the magnitude of this influence and the size of the region of space in which it can be significant. Since the distribution of matter in neutron stars is close to spherically symmetric, we will use the Schwarzschild solution as the metric tensor of the pseudo-Riemannian space. In the paper [18] it is shown that the Schwarzschild solution can be the external solution for a non-spherically symmetric distribution of matter.
The most convenient coordinates for writing down this solution are isotropic coordinates [17]. The nonzero components of the metric tensor in these coordinates have the form:
\[g_{00}=\frac{(4r-r_{g})^{2}}{(4r+r_{g})^{2}},\qquad g_{xx}=g_{yy}=g_{zz}=-(1+ \frac{r_{g}}{4r})^{4}, \tag{2}\]
where \(r_{g}\) is the Schwarzschild radius and the notation is used for convenience of reference: \(r=\sqrt{x^{2}+y^{2}+z^{2}}\).
Let us estimate the value of the ratio \(r_{g}/r\), which enters expressions (2), on the surface of a neutron star. The radius of a neutron star is currently taken to be \(R_{s}=10\) km [19], and its mass \(M\) lies in the interval from 0.1 to 1.0 solar masses.
Thus, in our problem, the ratio \(r_{g}/R_{s}\sim 0.01\). Since at \(r>R_{s}\) the ratio \(r_{g}/r\) takes smaller values, than the ratio \(r_{g}/R_{s}\), the influence of the gravitational field is small and limited to a small neighborhood \(r\leq 100R_{s}\) of the neutron star. Therefore, as a first approximation for the small parameter \(r_{g}/r\) in our problem as a metric tensor we will use the metric tensor of the Minkowski space: \(g_{00}=1,\ g_{11}=g_{22}=g_{33}=-1\).
The equations of the arion and electromagnetic fields derived from the Lagrangian density (1), in the Minkowski space have the form:
\[\Box a\equiv\frac{1}{c^{2}}\frac{\partial^{2}a}{\partial t^{2}}-\Delta a=g_{a\gamma}\,(\mathbf{E}\cdot\mathbf{B}),\qquad\frac{\partial F^{nm}}{\partial x^{m}}=-4\pi g_{a\gamma}\,\tilde{F}^{nm}\frac{\partial a}{\partial x^{m}}, \tag{3}\]
where \({\bf B}\) is the magnetic field induction vector, \({\bf E}\) is the electric field strength vector.
According to the first equation of (3), the sources of arions are electromagnetic fields and waves for which the invariant \(F_{nm}\tilde{F}^{nm}\) of the electromagnetic field tensor is different from zero. In nature, there exist configurations of electromagnetic fields and waves for which this invariant is different from zero in volumes of space that are large by earthly standards. These are, for example, the magnetodipole radiation of pulsars and magnetars propagating in the constant galactic or intergalactic magnetic field. Although the induction of the galactic and intergalactic magnetic fields is relatively small, \(B\sim 10^{-6}\) Gs, the volumes occupied by these fields are significant. Therefore, it is of undoubted interest to study this process. Let us consider it in detail.
## 2. Calculation of arion emission arising from the propagation of magnetodipole waves of a pulsar or a magnetar in a constant magnetic field
Suppose that at a point with radius vector \({\bf r}={\bf r}_{0}=\{x_{0},y_{0},z_{0}\}\) there is a pulsar or magnetar with a magnetic dipole moment \({\bf m}\), which rotates with frequency \(\omega\) around an axis that makes an angle \(\psi\) with the vector \({\bf m}\). Then this source emits an electromagnetic wave of frequency \(\omega\), whose components [8] have the form:
\[{\bf B}({\bf R},\tau)=\frac{3({\bf m}(\tau)\cdot{\bf R}){\bf R}-R^{2}{\bf m}( \tau)}{R^{5}}-\frac{\dot{\bf m}(\tau)}{cR^{2}}+ \tag{4}\]
\[+\frac{3(\dot{\bf m}(\tau)\cdot{\bf R}){\bf R}}{cR^{4}}+\frac{(\ddot{\bf m}( \tau)\cdot{\bf R}){\bf R}-R^{2}\ddot{\bf m}(\tau)}{c^{2}R^{3}},\]
\[{\bf E}({\bf R},\tau)=\frac{({\bf R}\times\dot{\bf m}(\tau))}{cR^{3}}+\frac{({ \bf R}\times\ddot{\bf m}(\tau))}{c^{2}R^{2}},\]
where \({\bf R}={\bf r}-{\bf r}_{0}\), the dot above a vector denotes the derivative with respect to the retarded time \(\tau=t-R/c\), and, in the problem considered, the magnetic dipole moment of the pulsar or magnetar has the components:
\[{\bf m}(\tau)=|{\bf m}|\{\cos(\omega\tau)\sin\psi,\ \sin(\omega\tau)\sin\psi,\ \cos \psi\}. \tag{5}\]
In the coordinate system whose origin is placed at the point \({\bf r}={\bf r}_{0}\), the radiation pattern of the electromagnetic radiation (4) has the form
\[\overline{\left(\frac{dI}{d\Omega}\right)}_{EMW}=\frac{[{\bf r}\times\ddot{\bf m}]^{2}}{4\pi c^{3}r^{2}}.\]
Integrating this expression over the angles \(\theta\) and \(\varphi\), we obtain the total intensity of radiation of a pulsar or a magnetar:
\[\overline{I}_{EMW}=\frac{2\ddot{\bf m}^{2}}{3c^{3}}=\frac{2ck^{4}{\bf m}^{2} \sin^{2}\psi}{3}. \tag{6}\]
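For clarity, the step leading to (6) uses the elementary angular integral and the second time derivative of the rotating dipole moment (5):

\[\int[\hat{\bf r}\times\ddot{\bf m}]^{2}\,d\Omega=\ddot{\bf m}^{2}\int\sin^{2}\Theta\,d\Omega=\frac{8\pi}{3}\,\ddot{\bf m}^{2},\qquad\ddot{\bf m}^{2}=\omega^{4}|{\bf m}|^{2}\sin^{2}\psi=c^{4}k^{4}|{\bf m}|^{2}\sin^{2}\psi,\]

where \(\Theta\) is the angle between \(\hat{\bf r}\) and \(\ddot{\bf m}\) and \(k=\omega/c\); combining the two relations reproduces (6).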
As shown in [2,8], the electromagnetic wave (4) can emit arions and dilatons with frequencies \(\omega\) and \(2\omega\).
Let there be a constant magnetic field in the considered region of space
\[{\bf B}_{0}=\{B_{0x},B_{0y},B_{0z}\}. \tag{7}\]
Substituting expressions (4) and (7) into equation (3) and considering that
\[{\bf E}({\bf r},\tau)=\frac{({\bf R}\times\dot{\bf m}(\tau))}{cR^{3}}+\frac{({\bf R }\times\ddot{\bf m}(\tau))}{c^{2}R^{2}}=rot_{{\bf r}_{0}}\frac{\dot{\bf m}(\tau) }{cR},\]
where \(rot_{{\bf r}_{0}}\) denotes the curl operator acting on the coordinates of the vector \({\bf r}_{0}=\{x_{0},y_{0},z_{0}\}\), we can rewrite the first equation of (3) in the form
\[\Box a=g_{a\gamma}\,({\bf E}\cdot{\bf B}_{0})=g_{a\gamma}\left({\bf B}_{0}\cdot rot_{{\bf r}_{0}}\frac{\dot{\bf m}(\tau)}{cR}\right). \tag{8}\]
## 3. Angular distribution of arion radiation
Following [8], we write the angular distribution of the arion emission arising from the propagation of the electromagnetic wave (4) of a pulsar or magnetar through the constant magnetic field (7) in the form:
\[\frac{dI}{d\Omega}=r({\bf r\ W}), \tag{13}\]
where \({\bf W}\) is the energy flux density vector associated with the components of the energy-momentum tensor \(T^{nk}\) by the relation \(W^{\beta}=cT^{0\beta}\). Using the expression for the energy-momentum tensor of the free arion field
\[T^{nk}=g^{np}g^{km}\frac{\partial a}{\partial x^{p}}\frac{\partial a}{\partial x^{m}}-\frac{1}{2}g^{nk}g^{pm}\frac{\partial a}{\partial x^{p}}\frac{\partial a}{\partial x^{m}},\]
from expression (13) we obtain:
\[\frac{dI}{d\Omega}=-cr^{2}\frac{\partial a}{\partial r}\frac{\partial a}{ \partial x^{0}}.\]
Substituting the expression (12) for the arion field into this relation, keeping the asymptotically leading part, and averaging over time, we reduce it to the form:
\[\frac{\overline{dI}}{d\Omega}=\frac{g_{a\gamma}^{2}c|{\bf m}|^{2}k^{4}r^{2} \sin^{2}\psi}{8}\Big{\{}(B_{0x}^{2}+B_{0y}^{2})\cos^{2}\theta+B_{0z}^{2}\sin ^{2}\theta- \tag{14}\]
\[-2B_{0z}\sin\theta\cos\theta[B_{0x}\cos\varphi+B_{0y}\sin\varphi]\Big{\}}.\]
Let us now find the amount of arion energy \(\overline{I}\) emitted in all directions per unit time:
\[\overline{I}_{AR}=\int\limits_{0}^{\pi}\sin\theta d\theta\int\limits_{0}^{2 \pi}d\varphi\frac{\overline{dI}}{d\Omega}=\frac{\pi g_{a\gamma}^{2}c|{\bf m}| ^{2}k^{4}r^{2}\sin^{2}\psi}{12}\Big{\{}B_{0x}^{2}+B_{0y}^{2}+2B_{0z}^{2}\Big{\}}. \tag{15}\]
The different dependence of this result on the components of the constant magnetic field (7) is due to the fact that the directivity diagram of the magnetodipole radiation of a pulsar or a magnetar is not spherically symmetric.
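The angular integrals behind this statement are elementary: the cross term in (14) vanishes upon the \(\varphi\)-integration, since \(\int_{0}^{2\pi}\cos\varphi\,d\varphi=\int_{0}^{2\pi}\sin\varphi\,d\varphi=0\), while

\[\int_{0}^{\pi}\!\!\int_{0}^{2\pi}\cos^{2}\theta\,\sin\theta\,d\theta\,d\varphi=\frac{4\pi}{3},\qquad\int_{0}^{\pi}\!\!\int_{0}^{2\pi}\sin^{2}\theta\,\sin\theta\,d\theta\,d\varphi=\frac{8\pi}{3},\]

so the \(\sin^{2}\theta\) term carries twice the weight of the \(\cos^{2}\theta\) term, which is why \(B_{0z}^{2}\) enters (15) with a coefficient of two relative to \(B_{0x}^{2}\) and \(B_{0y}^{2}\).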
## 4. Conclusion
It follows from expression (12) that the amplitude of the born arion wave at a distance from the magnetodipole source of the pulsar or magnetar (\(r\rightarrow\infty\)) in the considered case tends to a constant value. The intensity of the arion emission in the solid angle element and the amount of arion energy \(\overline{I}\), emitted in all directions per unit time grow quadratically with increasing distance, traveled by the magnetodipole radiation of a pulsar or magnetar in a constant magnetic field (7).
Such growth of the energy of the born arion wave is due to the fact that in our problem the constant magnetic field (7) is set in the whole space. In reality, the galactic and intergalactic magnetic fields can be represented in the form (7) only in regions of finite dimensions, outside of which the force lines of their induction vectors are curved. Therefore, one can apply the results (12), (14) and (15) only in the region of space, for which \(r\leq L_{coh}<\infty\), where \(L_{coh}\) is the coherence length - the distance at which the lines of force of the induction vector can be considered as straight lines.
Let us define the conversion factor \(\beta\) of electromagnetic radiation energy into arion energy as the ratio of the arion radiation intensity \(\overline{I}_{AR}\) to the intensity \(\overline{I}_{EMW}\) of the electromagnetic radiation of the pulsar or magnetar:
\[\beta=\frac{\overline{I}_{AR}}{\overline{I}_{EMW}}=\frac{\pi L_{coh}^{2}g_{a\gamma}^{2}}{8}\Big\{B_{0x}^{2}+B_{0y}^{2}+2B_{0z}^{2}\Big\}.\]
This coefficient reflects the properties of the converter, i.e., the electromagnetic field. It should be noted that, according to the second equation of the system (3), along with the conversion of the energy of electromagnetic waves into the energy of arions, the opposite process also takes place. If \(\beta\ll 1\), the reverse process can be ignored; if the coefficient \(\beta\) is close to unity, it is necessary to consider both processes together.
At present the value of the photon-arion coupling constant \(g_{a\gamma}\) is unknown. Let's roughly estimate the value of this constant using the example of the generation of arions by the electric field of pulsar radiation as it propagates through the magnetic field of our Galaxy.
According to modern data [20,21], the radius of the Galaxy is 16 kiloparsecs. It is customary to distinguish the large-scale component of the magnetic field of the Galaxy (with a homogeneity scale of the order of hundreds and thousands of parsecs) and a fluctuation component with a wide range of scales (from fractions of a parsec to hundreds of parsecs). The induction of the large-scale magnetic field of the Galaxy is estimated to be 2-3 \(\mu\)Gs.
Based on these data, it is reasonable to put \(L_{coh}\sim 10^{3}\) ps \(=3\cdot 10^{21}\) cm, \(B_{0}\sim 10^{-6}\) Gs.
Then from the condition \(\beta\ll 1\) we obtain: \(g_{a\gamma}^{2}<8\cdot 10^{-31}\)\(\frac{cm}{erg}\). Equation (15) was derived in the Gaussian system of units. Let us rewrite this bound in the natural system of units. Taking into account that 1 erg \(=624\) GeV and 1 cm \(=0.5\cdot 10^{14}\) GeV\({}^{-1}\), we get \(g_{a\gamma}<0.9\cdot 10^{-10}\) GeV\({}^{-1}\). This estimate coincides with the estimates of the coupling constant \(g_{a\gamma}\) obtained in [2,22-25].
## Acknowledgements
This study was conducted within the scientific program of the National Center for Physics and Mathematics, section #5 “Particle Physics and Cosmology”, Stage 2023-2025.
|
2301.13624 | A Kubernetes-Based Edge Architecture for Controlling the Trajectory of a
Resource-Constrained Aerial Robot by Enabling Model Predictive Control | In recent years, cloud and edge architectures have gained tremendous focus
for offloading computationally heavy applications. From machine learning and
Internet of Thing (IOT) to industrial procedures and robotics, cloud computing
have been used extensively for data processing and storage purposes, thanks to
its "infinite" resources. On the other hand, cloud computing is characterized
by long time delays due to the long distance between the cloud servers and the
machine requesting the resources. In contrast, edge computing provides almost
real-time services since edge servers are located significantly closer to the
source of data. This capability sets edge computing as an ideal option for
real-time applications, like high level control, for resource-constrained
platforms. In order to utilize the edge resources, several technologies, with
basic ones as containers and orchestrators like Kubernetes, have been developed
to provide an environment with many features, based on each application's
requirements. In this context, this works presents the implementation and
evaluation of a novel edge architecture based on Kubernetes orchestration for
controlling the trajectory of a resource-constrained Unmanned Aerial Vehicle
(UAV) by enabling Model Predictive Control (MPC). | Achilleas Santi Seisa, Sumeet Gajanan Satpute, George Nikolakopoulos | 2023-01-31T13:32:36Z | http://arxiv.org/abs/2301.13624v1 | A Kubernetes-Based Edge Architecture for Controlling the Trajectory of a Resource-Constrained Aerial Robot by Enabling Model Predictive Control
###### Abstract
In recent years, cloud and edge architectures have gained tremendous focus for offloading computationally heavy applications. From machine learning and Internet of Thing (IOT) to industrial procedures and robotics, cloud computing have been used extensively for data processing and storage purposes, thanks to its "infinite" resources. On the other hand, cloud computing is characterized by long time delays due to the long distance between the cloud servers and the machine requesting the resources. In contrast, edge computing provides almost real-time services since edge servers are located significantly closer to the source of data. This capability sets edge computing as an ideal option for real-time applications, like high level control, for resource-constrained platforms. In order to utilize the edge resources, several technologies, with basic ones as containers and orchestrators like Kubernetes, have been developed to provide an environment with many features, based on each application's requirements. In this context, this works presents the implementation and evaluation of a novel edge architecture based on Kubernetes orchestration for controlling the trajectory of a resource-constrained Unmanned Aerial Vehicle (UAV) by enabling Model Predictive Control (MPC).
Robotics; Edge Computing; Kubernetes; UAV; MPC.
## I Introduction
Nowadays, as technology progresses and the need for computational resources continuously increases, different computation layers have evolved. We can divide these layers into four distinct categories, cloud, fog, edge and devices, each of which has its own characteristics and uses. At the same time, all of them can be combined with each other to create an ecosystem for the utilization of external computational resources. As these technologies mature, researchers and engineers use them more and more to offload their applications, due to the capabilities and features they provide [1]. Additionally, since the mentioned computation layers have attracted tremendous focus, several state-of-the-art technologies, like containerized applications and container orchestrators, have been developed and promise to revolutionize many technological fields. In this framework, robotics can take great advantage of the external resources, and many resource-constrained platforms can make the most out of them, since they will be able to run algorithms that they cannot run on their onboard processors. In this context, edge is emerging since it can provide tremendous resources for enhancing the performance and the overall efficiency of autonomous operations, and at the same time minimize the travel time delays when transmitting data from UAVs, as in [2] and [3]. Thus, edge can be established as a promising solution for time-critical operations, like offloading the computationally costly controllers of resource-constrained platforms. In this article, we propose an architecture where we offload the Model Predictive Control method, which is a relatively heavy controller, to the edge as a containerized application, and we use Kubernetes for managing the containers.
Researchers seek to utilize the advantages of edge computing for the benefit of robots. However, they have to overcome some limitations and challenges in order to use these technologies universally. In [4] an architecture consisting of all four computation layers is used to offload the localization and mapping problem from robots. In this case, edge operates as a layer between sensor devices, gateways, and cloud servers for enhancing the quality of service, while in [5] edge is used to design a deep-learning-based search planner algorithm for UAVs. In [6] edge and cloud were utilized in terms of storage and computational resources for deep robot learning, including object recognition, grasp planning and localization.
Some works that utilized edge for robotic applications by implementing Kubernetes or container-based architectures can be summarized as follows. In [7] the authors tried to automate, by using Kubernetes orchestration, the process of deciding on the placement of the expected workload on edge, fog and cloud for robotic applications.
In the works mentioned above, the approach regarding edge computing is mainly towards non-time critical tasks. High level controllers must operate almost real-time. In [9] an architecture was proposed where the control method consists of the combination of a LQR running on the device and an MPC running the optimization both on edge and cloud, while in [10] two complimentary MPCs are running, one on a local edge and one on the cloud. In comparison to our proposed work, these articles are partly offloading the MPC method on the edge and are focused on evaluating the system in terms of related latency, and the related uncertainty for several cases.
The motivation behind this work is to fill the gap regarding edge enabled robotics. Even though edge comput
ing has proven to be a promising technology to expand the autonomous capabilities of resource-constrained robotic platforms, especially when combined with 5G networks, the research that has been done around this area is relatively limited. Despite the fact that the great advantage of edge computing is the ability of enabling almost real-time operation by offloading the computing process on the edge, most researchers have focused on utilizing edge for offline procedures. Thus, the contribution of this article is to present a novel edge architecture for enabling the time sensitive operation of controlling the trajectory of a resource-constrained UAV in real-time through MPC. Control is one of the basic components of autonomy, thus the performance and efficiency is the main criteria when choosing a controller. Model predictive controllers are widely used on UAVs due to their characteristics and optimal behavior, but they are computationally costly, thus some UAVs, deployed with light processors, like Raspberry Pi, can not handle them. By utilizing the proposed architecture, we will be able to use edge resources in order to offload the MPC and control resource-constrained platforms by closing the loop over the edge. Additionally, we are using Kubernetes orchestration that provides best practices for cloud and edge applications but inserts some challenges that we have to overcome.
The rest of the article unfolds in the following Sections. In Section II, we describe the Kubernetes-based edge architecture, while in Section III, we give a brief overview of the UAV and MPC model. In Section IV, we present the simulation results of the edge architecture in terms of time delays and performance. Finally, in Section V, we conclude the article by highlighting the main points of this work, and we propose future directions.
## II Kubernetes Edge Architecture
The proposed architecture is based on Kubernetes, a container orchestrator developed by Google. Before we start analyzing the Kubernetes-based architecture, we have to describe the containers developed for this work. Afterward, we are going to present the system's architecture and the Robot Operating System (ROS) framework that was utilized for the UAV-MPC system. Finally, we will describe the communication layer and network.
Containers are based on software that creates an operating environment and are deployed only with the necessary, chosen packages, libraries and dependencies needed to run a specific application. An application running in this form is called a containerized application. Containers are instantiated from images, which are the nominal state of containers before they get deployed. An image can be used to deploy many containers. For our system, we deployed two docker containers. One container is responsible for running the controller and all the necessary libraries and dependencies for its smooth and reliable operation, and the other is responsible for running the ROS master, which takes care of the communication between the ROS nodes. To deploy the two docker containers, we had to develop two different docker images. For both images, we used a ROS Noetic on Ubuntu 20.04 image as the entrypoint and built on top of it. For the first image, we included several ROS packages and libraries, as well as an optimization engine for the MPC containerized application, while for the second image we just needed to run the ROS master. For a more complex application, we could split it into more containers, each of which would be assigned a specific task.
Once we had developed the Docker images, we were able to deploy the Docker containers inside the Kubernetes cluster. We decided to use Kubernetes due to the features it provides for our containers. Kubernetes gives us the capability to manage our containers and automates the whole process of deploying the containers, assigning them resources, and checking their health. The services and features that Kubernetes provides can be extremely helpful for our application, since they give us the chance to manage and monitor our system in an optimal way, and they become even more helpful when we have to deploy more containers and the system grows more complex. The Kubernetes architecture is depicted in Fig. 1. The top part of the Kubernetes cluster consists of four components that make up the master node. These are the kube-apiserver, which exposes the Kubernetes Application Programming Interface (API); etcd, which is used as the backing store for all cluster data; the kube-scheduler, which watches for newly created pods with no assigned node and selects a node for them to run on; and finally the kube-controller-manager, which runs the control processes. Besides the master node, we have the worker nodes. In our case, we have only one worker node, inside which we have deployed our containers
Fig. 1: Diagram of the Kubernetes-based edge architecture for the UAV-MPC system
in the form of pods. A pod is the basic operational unit of Kubernetes and consists of a set of containers that share storage, network resources, and specifications on how to run the containers. The two pods we have deployed are related to the ROS master and the MPC, respectively. Apart from the pods, the worker node runs the kubelet, which makes sure that the containers are running in a pod, and kube-proxy, which makes sure that network rules are met.
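As a small illustration, assuming the official Kubernetes Python client and kubeconfig access from the edge machine (the namespace and pod names below are placeholders, not the actual deployment), a few lines suffice to check that both pods are running:

```python
# Minimal sketch: verify that the ROS master and MPC pods are up on the worker node.
# Assumes the "kubernetes" Python client is installed and a local kubeconfig is available.
from kubernetes import client, config

config.load_kube_config()          # load the cluster credentials of the edge machine
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)   # e.g. "ros-master Running", "mpc Running"
```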
From Fig. 1 we can describe the block diagram of the closed-loop system. Let us assume that at time step \(k\), the UAV dynamics node generates a signal \(x(k)\) that describes the states of the UAV, namely its position, velocity, and orientation. This signal will arrive at the MPC ROS node, running on the edge, delayed by the time it takes to travel from the UAV to the edge. Thus, the signal carrying the information of \(x(k)\) will arrive at the MPC ROS node as \(x(k-d_{1})\), while at the same time, another signal regarding the desired states for the UAV will arrive at the MPC ROS node as a reference signal \(r(k)\). The controller has to process this information and generate the command signal \(u(k-d_{2})\). Given that \(u(k-d_{2})\) corresponds to the signals \(x(k-d_{1})\) and \(r(k)\), the variable \(d_{2}\) is related to \(d_{1}\) as well as to the execution time of the MPC. This command signal has to travel from the edge to the UAV in order to close the loop of the system. Thus, the signal arriving at the UAV is denoted as \(u(k-d_{3})\), where \(d_{3}\) is related to \(d_{1}\), \(d_{2}\), and the time the command signal needs to travel from the edge to the UAV. Finally, the output of the system is denoted as \(y(k)\).
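As a rough illustration of how these delays compose, the following Python sketch (ours, not the system's code; it assumes the delays simply add) steps through one control cycle using the average values measured in Section IV:

```python
# Illustrative composition of the closed-loop delays (assumption: they simply add).
def one_cycle(d1, mpc_exec, d_uplink):
    """d1: UAV -> edge travel time, mpc_exec: MPC solve time,
    d_uplink: edge -> UAV travel time. Returns (d2, d3) as used in the text."""
    d2 = d1 + mpc_exec        # u(k - d2) is ready on the edge
    d3 = d2 + d_uplink        # u(k - d3) reaches the UAV and closes the loop
    return d2, d3

d2, d3 = one_cycle(d1=0.0089, mpc_exec=0.0141, d_uplink=0.0161)
print(f"total control-loop delay: {d3:.4f} s")   # about 0.039 s with the average values
```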
The communication between the UAV model simulation and the controller is handled by ROS. There should be only one ROS master, and every ROS node has to register with that ROS master to be able to run and communicate with other ROS nodes. When two ROS nodes want to exchange data by subscribing and publishing to the same ROS topic, the ROS master opens a random port and allows the two ROS nodes to communicate through that port. Since ROS assigns random ports, different every time, the nodes running on the edge and the nodes running on the robot try to communicate with each other through these ports. Since the containers are deployed on the Kubernetes cluster of the edge machine (host), we have to specify which ports the containers should expose for communication purposes. The challenge occurs because the ROS master does not assign specific ports for communication, but assigns them randomly. To overcome this issue, we used the host network option when we deployed the containers on the Kubernetes cluster, in order to expose all the host ports to the containers and vice versa. That way, the containers can access all the traffic at the host machine's ports and the host machine can access the traffic at the containers' ports. Now, the data coming from the UAV to the edge machine can be forwarded inside the containers, and the data from the containerized applications can be exposed to the edge machine and then sent to the UAV.
In this paper, both the edge machine and the UAV are on the same network, thus we were able to use Wi-Fi. Wi-Fi can be an efficient network option for the communication between the UAV and the edge machine and has been widely used, but it is not the optimal solution. 5G is a promising technology that will provide essential features for secure, robust and reliable networking, and can be the subject of future work.
## III Model Predictive Control
Model predictive control is a standard method used for high-level control of UAVs, thus there are many works describing in detail the behavior of the controller and the kinematics of the UAV, like [11], where the authors proposed a UAV model that can withstand disturbances by stabilizing its location in space. The preference for MPC over other common controllers, like PID or LQR, is explained by its predictive behavior and performance. Based on these characteristics, we were prompted to use this controller for controlling the trajectory of a UAV, and we were motivated to offload it to the edge so that resource-constrained UAVs, and robots in general, that cannot afford to run this controller onboard would be able to take advantage of the benefits of MPC. The UAV model and the implementation of the MPC for this work are based on [12].
### _UAV Model_
In order to develop the MPC methodology, the first step is to describe the UAV kinematics model, which is presented in Eq. 1.
\[\begin{split}\dot{p}(t)&=v(t)\\ \dot{v}(t)&=R_{x,y}(\theta,\phi)\begin{bmatrix}0\\ 0\\ T\end{bmatrix}+\begin{bmatrix}0\\ 0\\ -g\end{bmatrix}-\begin{bmatrix}A_{x}&0&0\\ 0&A_{y}&0\\ 0&0&A_{z}\end{bmatrix}v(t)\\ \dot{\phi}(t)&=\frac{1}{\tau_{\phi}}(K_{\phi}\phi_{d}(t)-\phi(t))\\ \dot{\theta}(t)&=\frac{1}{\tau_{\theta}}(K_{\theta}\theta_{d}(t)-\theta(t)),\end{split}\tag{1}\]
where \(p=[p_{x},p_{y},p_{z}]^{T}\) and \(v=[v_{x},v_{y},v_{z}]^{T}\) are the position and the linear velocity respectively based on the
Fig. 2: Coordinate frames, where \(\mathbb{W}\) and \(\mathbb{B}\) represent the world and body coordinate frames respectively on gazebo simulation environment
world frame (\(\mathbb{W}\)), as depicted in Fig. 2. We denote by \(R(\phi(t),\theta(t))\in SO(3)\) the rotation matrix that represents the attitude. \(\phi\) and \(\theta\in[-\pi,\pi]\) are the roll and pitch angles, while \(T\geq 0\) describes the total thrust. The acceleration depends on the magnitude and angle of the thrust vector, the gravitational acceleration \(g\), and the linear damping terms \(A_{x},A_{y},A_{z}\in\mathbb{R}\). \(\phi_{d}\) and \(\theta_{d}\in\mathbb{R}\) are the desired roll and pitch inputs with gains \(K_{\phi}\) and \(K_{\theta}\in\mathbb{R}\), and time constants \(\tau_{\phi}\) and \(\tau_{\theta}\in\mathbb{R}\).
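For illustration only, the model of Eq. 1 can be transcribed into a small Python function; the rotation convention and all numeric constants below are our own placeholder assumptions, not values from this work:

```python
import numpy as np

def rot_x(phi):
    """Elementary rotation about the x-axis (roll)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(theta):
    """Elementary rotation about the y-axis (pitch)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def uav_dynamics(v, phi, theta, T, phi_d, theta_d,
                 A=(0.1, 0.1, 0.2), g=9.81,
                 K_phi=1.0, K_theta=1.0, tau_phi=0.5, tau_theta=0.5):
    """Right-hand side of the kinematic model in Eq. 1.
    The constants are placeholders, and R_{x,y}(theta, phi) is taken here to be
    rot_y(theta) @ rot_x(phi); the convention is an assumption on our part."""
    dp = v                                                   # position derivative
    dv = rot_y(theta) @ rot_x(phi) @ np.array([0.0, 0.0, T]) \
         + np.array([0.0, 0.0, -g]) - np.diag(A) @ v          # thrust, gravity, damping
    dphi = (K_phi * phi_d - phi) / tau_phi                    # roll attitude loop
    dtheta = (K_theta * theta_d - theta) / tau_theta          # pitch attitude loop
    return dp, dv, dphi, dtheta
```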
### _Cost Function_
The next step of the MPC methodology is to define the cost function. Here \(x=[p,v,\phi,\theta]^{T}\) and \(u=[T,\phi_{d},\theta_{d}]^{T}\) represent the UAV's state vector and the control input, respectively. The sampling time of the system is \(\delta_{t}\in\mathbb{Z}^{+}\), while the forward Euler method is used for each time instance \((k+1|k)\). The predictive behavior of the MPC is based on the prediction horizon, which considers a specified number of steps into the future and is denoted by \(N\).
To minimize the cost function, an optimizer is employed to find the optimal set of control actions. The cost function assigns a cost to the configuration of states and inputs at the current time and over the prediction horizon. \(x_{k+j|k}\) represents the predicted states at time step \(k+j\), produced at time step \(k\), while \(u_{k+j|k}\) represents the corresponding control actions. Furthermore, \(x_{k}\) represents the predicted states and \(u_{k}\) the corresponding control inputs along the prediction horizon. The cost function is presented in Eq. 2.
\[J=\sum_{j=1}^{N}\underbrace{(x_{d}-x_{k+j|k})^{T}Q_{x}(x_{d}-x_{k+j|k})}_{\text{state cost}}+\underbrace{(u_{d}-u_{k+j-1|k})^{T}Q_{u}(u_{d}-u_{k+j-1|k})}_{\text{input cost}},\tag{2}\]

where \(Q_{x}\) and \(Q_{u}\) are positive (semi-)definite weight matrices penalizing the state and input tracking errors, respectively.
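A direct transcription of this quadratic cost into Python, shown as an illustrative sketch (the array shapes, references and weight matrices below are our own assumptions):

```python
import numpy as np

def mpc_cost(x_pred, u_pred, x_d, u_d, Q_x, Q_u):
    """Quadratic cost of Eq. 2 summed over the prediction horizon.
    x_pred: (N, nx) predicted states, u_pred: (N, nu) control inputs,
    x_d / u_d: state and input references, Q_x / Q_u: weight matrices."""
    J = 0.0
    for x_k, u_k in zip(x_pred, u_pred):
        e_x = x_d - x_k
        e_u = u_d - u_k
        J += e_x @ Q_x @ e_x + e_u @ Q_u @ e_u
    return J
```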
different trajectory. The blue line represents the real trajectory of the UAV, and the reference points of the desired trajectory are also shown. From these figures, we can notice that the UAV simulation model can successfully follow the desired trajectory. The time delays do not seem to have a significant effect on the performance of the controller. In the next figures, we investigate these time delays in more detail.
Fig. 4 depicts the Euclidean error between the UAV position and the reference point for each time step of the Kubernetes-based architecture, for the circular, spiral and helical trajectories. The blue line represents the error and the red line represents the error tolerance. The controller is responsible for keeping the error below the tolerance value. If the error goes above the tolerance, the controller will correct it and the UAV will continue following the desired trajectory. The tolerance was set at 0.4 meters for each axis, thus \(\sqrt{0.68}\) meters in total.
In Fig. 5, the deviations of the different types of time delays for the spiral trajectory are presented. The left figure shows the deviation of the travel time of a signal from the UAV to the edge, the middle figure the deviation of the execution time of the MPC, and the right figure the deviation of the travel time of a signal from the edge to the UAV. The average measured travel time from the UAV to the edge is 0.0089 seconds, and the maximum is 0.1700 seconds. For the execution time, the average measured time is 0.0141 seconds and the maximum is 0.2200 seconds. Finally, for the travel time from the edge to the UAV, the average measured travel time is 0.0161 seconds and the maximum is 0.2600 seconds.
To conclude the evaluation of the system, we measured the resource usage for the execution of the MPC on the edge; the data are depicted in Fig. 6. The red bars represent the time the CPU spends executing processes in user space (us). Similarly, the blue bars represent the time spent running system kernel-space (sy) processes. From the figure we can observe that the edge machine does not get overloaded: the maximum total value reached is 84.50%, which occurs when the us and sy values are 46.70% and 37.80%, respectively. The maximum values that us and sy reach independently are 54.40% and 37.80%, respectively, and their average values are 20.225% for the us and 4.582%
Fig. 4: Euclidean error between UAV position and reference point for each time step of the Kubernetes-based architectures, for A) the circular, B) spiral, and C) helical trajectory. The blue line represents the error and the red line represents the error tolerance
Fig. 5: Deviation of the different types of time delays for the spiral trajectory: A) Deviation for the travel time of a signal from the UAV to the edge. B) Deviation for the execution time of the MPC. C) Deviation for the travel time of a signal from the edge to the UAV.
Fig. 6: Edge resources usage during the spiral trajectory. The red bar represents the user space and the blue bar represents the system kernel-space.
for the sy. From these measurements, we can see that the relatively large edge resources assigned are adequate to run the computationally demanding controller, but even in this case, during the \(35^{th}\) second of the trajectory, the resource usage was at almost \(90\%\). This means that computationally light units, like UAVs' onboard processors, might not be able to execute this controller smoothly.
## V Conclusions and Future Work
In this work, we presented a novel edge architecture to control the trajectory of a UAV through the edge by enabling an MPC methodology. This architecture can be beneficial for expanding the computational capabilities of resource-constrained platforms like aerial robots, which in many cases are deployed with light microprocessors onboard, like the Raspberry Pi, and cannot afford to run computationally expensive processes onboard. By utilizing the edge, we were able to offload the controller there and control the trajectory of the UAV in real-time by closing the loop of the system through the edge. Furthermore, we evaluated the proposed architecture through a series of experiments, examining the performance of the system as well as the overall time delays.
Edge computing is a promising technology for the field of robotics. In the current article, we offloaded the computationally costly MPC, while future works can move towards offloading other time-sensitive robotic applications, like sensor fusion for online perception, or offloading applications that require many resources in order to operate in real-time, like map merging from multiple agents. The end goal would be to create an ecosystem through which multiple agents will be able not only to use edge resources to expand their autonomy capacity, but also to communicate and collaborate through the edge.
|
2309.08340 | Formalizing the $\infty$-Categorical Yoneda Lemma | Formalized $1$-category theory forms a core component of various libraries of
mathematical proofs. However, more sophisticated results in fields from
algebraic topology to theoretical physics, where objects have "higher
structure," rely on infinite-dimensional categories in place of $1$-dimensional
categories, and $\infty$-category theory has thusfar proved unamenable to
computer formalization.
Using a new proof assistant called Rzk, which is designed to support
Riehl-Shulman's simplicial extension of homotopy type theory for synthetic
$\infty$-category theory, we provide the first formalizations of results from
$\infty$-category theory. This includes in particular a formalization of the
Yoneda lemma, often regarded as the fundamental theorem of category theory, a
theorem which roughly states that an object of a given category is determined
by its relationship to all of the other objects of the category. A key feature
of our framework is that, thanks to the synthetic theory, many constructions
are automatically natural or functorial. We plan to use Rzk to formalize
further results from $\infty$-category theory, such as the theory of limits and
colimits and adjunctions. | Nikolai Kudasov, Emily Riehl, Jonathan Weinberger | 2023-09-15T11:51:40Z | http://arxiv.org/abs/2309.08340v3 | # Formalizing the \(\infty\)-categorical Yoneda lemma
###### Abstract.
The field of category theory seeks to unify and generalize concepts and constructions across different areas of mathematics, from algebra to geometry to topology and also to logic and theoretical computer science. Formalized 1-category theory forms a core component of various libraries of mathematical proofs. However, more sophisticated results in fields from algebraic topology to theoretical physics, where objects have "higher structure," rely on infinite-dimensional categories in place of 1-dimensional categories, and \(\infty\)-category theory has thusfar proved unamenable to computer formalization.
Using a new proof assistant called Rzk, which is designed to support Riehl-Shulman's simplicial extension of homotopy type theory for synthetic \(\infty\)-category theory, we provide the first formalizations of results from \(\infty\)-category theory. This includes in particular a formalization of the Yoneda lemma, often regarded as the fundamental theorem of category theory, a theorem which roughly states that an object of a given category is determined by its relationship to all of the other objects of the category. A key feature of our framework is that, thanks to the synthetic theory, many constructions are automatically natural or functorial. We plan to use Rzk to formalize further results from \(\infty\)-category theory, such as the theory of limits and colimits and adjunctions.
Using a new proof assistant called Rzx, which is designed to support Riehl-Sublman's simplicial extension of homotopy type theory for synthetic \(\infty\)-category theory, we provide the first formalizations of results from \(\infty\)-category theory. This includes in particular a formalization of the Yoneda lemma, often regarded as the fundamental theorem of category theory, a theorem which roughly states that an object of a given category is determined by its relationship to all of the other objects of the category. A key feature of our framework is that, thanks to the synthetic theory, many constructions are automatically natural or functorial. We plan to use Rzx to formalize further results from \(\infty\)-category theory, such as the theory of limits and colimits and adjunctions.
category theory, homotopy type theory, formalization, directed type theory, \(\infty\)-category theory, Yoneda lemma, 2010 Mathematics Subject Classification: (2010) 000.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.0.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.0.00.00.00.00.00.00.00.00.00.00.00.00.0.00.00.00.00.00.0.00.00.0.00.00.00.00.00.00.00.00.00.00.00.00.0.00.00.0.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.0.00.00.00.00.00.00.00.00.00.00.0.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.0.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.0.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.0.00.00.00.00.00.00.00.00.0.00.00.00.0.00.00.0.00.00.00.00.00.00.0.00.00.00.00.0.00.00.0.00.00.00.00.00.0.00.00.00.00.00.00.00.00.00.0.00.00.00.00.0.00.00.00.00.00.00.00.0.00.00.00.0.00.00.00.0.00.00.00.00.00.00.00.00.00.00.00.00.00.0.00.00.00.00.0.00.00.00.0.00.00.00.00.00.0.00.00.0.00.00.0.00.00.0.00.00.0.00.00.0.00.00.0.00.00.0.00.00.00.00.0.00.00.00.00.00.00.0.00.00.00.00.00.0.00.00.00.00.00.00.00.0.00.0.00.00.00.00.0.00.00.00.00.0.00.00.00.0.00.00.00.00.0.00.00.00.00.00.00.00.0.00.00.00.0.00.00.00.00.00.00.00.00.00.0.00.00.00.0.00.00.0.00.00.0.00.00.0.00.00.00.00.00.0.00.00.0.00.00.00.00.0.00.00.00.00.00.00.00.00.00.0.00.00.00.0.00.00.00.0.00.00.00.00.00.00.00.00.00.0.00.00.0.00.00.00.00.0.00.00.0.00.00.00.0.00.00.00.00.00.00.0.00.0.00.0.00.00.0.00.00.00.00.0.00.00.00.00.00.0.00.00.00.0.00.00.00.00.00.0.00.00.0.00.00.0.00.00.0.00.00.00.00.0.00.00.00.00.0.00.00.00.00.00.0.00.00.0.00.00.0.00.00.0.00.00.0.00.00.00.00.00.00.00.0.00.00.00.0.000.00.00.00.00.00.00.00.00.0.00.00.00.00.00.00.0.00.00.00.00.0.00.0.00.00.0.00.00.00.0.00.0.00.00.00.00.00.00.0.00.00.0.00.00.00.0.00.00.00.00.00.00.00.00.0.00.00.00.00.00.00.0.00.00.0
models -- such as _quasi-categories_[18; 48], _complete Segal spaces_[85], and _Segal categories_[46; 79] -- are used at various places in the literature, and theorems are often proven "analytically," in reference to the "coordinates" of a particular model. A computer formalizer is thus faced with an unattractive choice of either
* picking one model, which must then be used for the entire library of subsequent results, or
* formalizing multiple models and the comparisons between them [50], which significantly increases the workload.1 Footnote 1: Experts in the field often prefer to work “model-independently” [9; 58] which can be done either by using \(\infty\)-category theory itself as the ambient metatheory, or deploying the formalism of \(\infty\)_-cosmoi_ (_i.e._, categories of \(\infty\)-categories) [92], but either approach would require some initial formalization in a specific model of \(\infty\)-categories.
### Reimagining the foundations of \(\infty\)-category theory
A radical-sounding alternative, which we argue is worth taking seriously, is to change the foundation system. The article "Could \(\infty\)-category theory be taught to undergraduates?" [86] argues that it is possible to narrow the gap between co-category theory and ordinary \(1\)-category theory by replacing the traditional foundations with a directed extension of _homotopy type theory_[93; 106]. The basis for this claim is the paper [89] (and the follow-up work of [11; 25; 133]), which develops the basic theory of \(\infty\)-categories in an alternative foundational framework established there. The _simplicial type theory_ is a formal framework that permits one to make the following intuitive definitions rigorous:
* A type is a _pre-\(\infty\)-category_ (aka a _Segal type_) if every composable pair of arrows has a unique composite.
* A pre-\(\infty\)-category is an _\(\infty\)-category_ (aka a _Rezk type_) if equalities are equivalent to isomorphisms.
* A type is an _\(\infty\)-groupoid_ (aka a _discrete type_) if equalities are equivalent to arrows.
* A type family is a _left fibration_ (aka a _covariant type family_) if every arrow in the base type has a unique lift with specified domain.
The intended model of this formal system is Rezk's complete Segal spaces [85], in which pre-\(\infty\)-categories correspond to Segal spaces [85; 99], \(\infty\)-categories correspond to complete Segal spaces [85], and left fibrations correspond to left fibrations [31; 52]. This extends Shulman's model of homotopy theory, in which types are interpreted as Reedy fibrant simplicial spaces [100]. The phrases "for all...there exists...unique" are meant in the standard sense of homotopy type theory [93; 106]. In particular, following the homotopical extension of the Curry-Howard correspondence [7; 47; 108], _uniqueness_ means _contractibility_ -- which is precisely what is true semantically for the composition operation in an \(\infty\)-category.2
Footnote 2: Those familiar with the Segal space model of \(\infty\)-categories may be surprised that the definition of a pre-\(\infty\)-category refers to binary sequences of composable arrows and not also composable triples and quadruples and so on. Here the binary statement subsumes the \(n\)-ary ones for \(n\geq 0\) because it is interpreted _internally_ in the model as the assertion that the internal mapping types mapping out of the \(2\)-simplex and out of its inner horn are equivalent [89, Section 5].
More generally, Shulman has proven that homotopy type theory has semantics in any \(\infty\)-topos [88; 102] and Weinberger [115] has shown that the simplicial type theory of [89] can be interpreted in simplicial objects in any \(\infty\)-topos [64; 82; 104].
### Formalizing \(\infty\)-category theory
It is relatively standard practice in homotopy type theory to formalize results while writing the corresponding paper proofs.3 At the time of the writing of [89] it was not possible to formalize any of its results because the work is done in an _extension_ of traditional homotopy type theory, with multi-level contexts and a new type-forming operation providing _extension types_. But this is possible now thanks to the proof assistant Rzk developed by Kudasov [54]. Thus, finally one can formally test the claims made in the article [86]. This is the content of our project.
Footnote 3: While homotopy type theory cannot be formalized in Lean or Idris [22] because their kernels assume that all types are sets, contradicting Voevodsky’s _univalence axiom_, it can be done in Agda, Coq, and a growing variety of experimental proof assistants.
In §2, we describe the simplicial type theory, and in §3 we introduce synthetic \(\infty\)-category theory. In §4, we describe the Rzk proof assistant, and in §5 our formalization of the \(\infty\)-categorical Yoneda lemma. In §6, we compare this formalization with parallel formalizations of the \(1\)-categorical Yoneda lemma in the agda-unimath [94] and mathlib libraries. In §7, we offer a few takeaways from this formalization project and describe related future work.
### Related work
A roughly parallel synthetic framework for \(\infty\)-category theory has been proposed by Weaver and Licata using bicubical sets as an intended model [112]. An alternate approach to formalizing higher categories is within the framework of _two-level type theory_, using extensional type theory as a meta-theory, see e.g. [6; 53; 109].
A conceptual discussion of the approach behind simplicial type theory with comparisons is done by Buchholtz in [23]. A self-contained overview of both syntactic and semantic aspects of simplicial type theory is given in the master's thesis of Bakke [10].
Furthermore, there has been extensive work on directed type theories [55; 76; 77; 111], though most of this was not created to describe \(\infty\)-category theory. Other work includes domain-specific languages for two-dimensional categories [3; 39] and virtual equipments [73]. There also exist other type theories capturing infinite-dimensional categorical structures.
Notable developments include [34; 36; 37; 38; 4; 19; 34]. However, these systems differ from the one that we are using in two major aspects: their setup and their purposes. Our framework features a synthetic and homotopical theory of \(\infty\)-categories with the aim of developing a range of classical \(\infty\)-categorical results. The other frameworks tend to involve a specific model of either strict or weak infinite-dimensional categories.
Aside from direct applications to category theory, new kinds of type theories have been devised for the purpose of doing differential topology and stable homotopy theory synthetically, making heavy use of type-theoretic _modalities_[28; 68; 69; 70; 95; 96; 101].
## 2. The simplicial type theory
In [89], Riehl-Shulman develop a type theory to reason synthetically about \(\infty\)-categories. The key features of their theory are that \(\infty\)-categories can be described in relatively simple terms, and all the results are invariant under _homotopy equivalence_--the right notion of equivalence of \(\infty\)-categories. This is in stark contrast to the more traditional and familiar developments of \(\infty\)-category theory in set theory, cf. e.g. [49; 60]. We will give an overview of the structure and features of the simplicial type theory, with an emphasis on its use for synthetic \(\infty\)-category theory.
The theory builds on Martin-Lof intensional type theory (MLTT) [63] whose _intensional identity types_ have homotopically well-behaved _path objects_ as models [105; 7; 51; 87]. This homotopical interpretation, paired with Voevodsky's _univalence axiom_, which allows one to treat homotopy equivalent types as (intensionally) equal, goes by the name _homotopy type theory_ (HoTT) or _univalent foundations_ cf. [106; 108; 7]. Homotopy type theory may be thought of as a synthetic theory for \(\infty\)-groupoids (aka homotopy types) and thus provides a fertile basis for the simplicial type theory.
### Base theory: Martin-Lof intensional type theory Overview.
The base theory is intensional Martin-Lof type theory [63] with \(\Sigma\)-, \(\Pi\)-, and identity types. Though Rzk works with universe types to implement dependent types, this assumption is not necessary [89, Remark 2.5].4 To stay in line with the notation of [89], we also notate a dependent type \(x:A\vdash C(x)\) as a _type family_\(C:A\to\mathcal{U}\), pretending \(\mathcal{U}\) is a universe type (without being explicit about universe hierarchy or different levels of size).
Footnote 4: In particular, though convenient for certain applications, univalence is not necessary for our development.
\(\Sigma\)**-types (05-sigma).** The type formers \(\Sigma\) and \(\Pi\), resp., generalize existential and universal quantification, resp., as follows. For \(C:A\to\mathcal{U}\), the _dependent sum_\(\sum_{x:A}C(x)\) is the type consisting of dependent pairs \((a,c)\) with \(a:A\) and \(c:C(a)\). This is also referred to as the _total type_ of the family \(C\). The \(\Sigma\)-type comes with the usual set of rules for formation, introduction (by forming dependent pairs), and elimination (by projecting to the factors). We also assume the \(\beta\)- and \(\eta\)-computation rules to be satisfied, meaning that introduction and elimination are inverse to each other in the strictest possible way, _i.e., up to judgmental equality._
The family \(C:A\to\mathcal{U}\) can alternatively be encoded as a map \(p_{C}:\widetilde{C}\to A\), with the total type \(\widetilde{C}:=\sum_{x:A}C(x)\), and the projection \(p_{C}(a,c):=a\). The total type is then the "sum" of all the fibers of \(C\), canonically indexed by \(A\). If \(C\) is a constant family, _i.e._, \(C(a)\equiv B\) for all \(a:A\) and some type \(B\), the \(\Sigma\)-type becomes the _cartesian product_\(A\times B\).
\(\Pi\)**-types.** Of particular interest is the notion of _dependent function_ or _section_ of a family \(C:A\to\mathcal{U}\), which is an assignment \(\sigma\) to each element \(x:A\) of some element \(\sigma(x):C(x)\) in the corresponding fiber. This is reified as the _dependent product_ type \(\prod_{x:A}C(x)\), with introduction rule given by \(\lambda\)-abstraction and elimination rule by function application. Likewise, we require the \(\beta\)- and \(\eta\)-rules to hold judgmentally. When the type family \(C\) is constant with value some type \(B\), the dependent function type reduces to an ordinary function type, denoted by \(A\to B\) or \(B^{A}\).
**Identity types (01-paths).** The _Martin-Lof identity types_ (\(a=_{A}b\)) for a type \(A\) and elements \(a,b:A\) capture the idea that equality between terms of a type is witnessed proof-relevantly by a term \(p:a=_{A}b\). In the homotopical models, identity types get interpreted as _path objects_ in the sense of homotopical algebra [7], so elements \(p:(a=_{A}b)\) can be seen as paths from \(a\) to \(b\) in \(A\). The introduction rule is given by the canonical _reflexivity terms_\(\text{refl}_{a}:(a=_{A}a)\) witnessing self-identity. Elimination is given by the _path induction principle_. Intuitively, this says the following. First, for a type \(A\), fix \(a:A\). Then, for a family \(C:(\sum_{x:A}(a=_{A}x))\to\mathcal{U}\) the type of sections \(\prod_{(y,p):\sum_{x:A}(a=_{A}x)}C(y,p)\) is equivalent to \(C(a,\text{refl}_{a})\) via the map
\[\text{evrefl}_{C,a}:\left(\prod_{(y,p):\sum_{x:A}(a=_{A}x)}C(y,p)\right)\to C (a,\text{refl}_{a}).\]
In particular, given \(d:C(a,\text{refl}_{a})\) we obtain a section
\[\text{ind}_{=}(d):\prod_{(y,p):\sum_{x:A}(a=_{A}x)}C(y,p)\] (ind-path)
such that \(\text{ind}_{=}(d)(a,\text{refl}_{a})\equiv d\). Thus, for type families over (based) path types, to produce a section of the whole family it suffices to produce a section only at the reflexivity loop.
**The homotopy theory of types.** The following notions are due to Voevodsky [108], cf. also [7; 51; 87; 105]. According to the idea that terms \(p:(a=_{A}b)\) encode paths in a
type we want to express when a type is homotopically trivial _aka contractible_. This is witnessed by the type
\[\operatorname{isContr}(A)\coloneqq\sum_{x:A}\ \prod_{y:A}(x=_{A}y).\] (is-contr)
A contractible type \(A\) comes equipped with a canonical inhabitant, the _center of contraction_\(c_{A}:A\)(contraction-center) and a homotopy \(H_{A}:\prod_{y:A}(c_{A}=_{A}y)\)(contracting-htpy). Contractible types are equivalent to the point or _terminal type_\(\mathbf{1}\), see (contr-iff-terminal-map-is-equiv).
Traditional homotopy theory involves constructions on topological spaces that are invariant under _homotopy equivalence_, which is a pair of maps between two spaces in opposite directions whose composites are homotopic to the identity. Translating this into type theory, a map \(f:A\to B\) between types is a (homotopy) equivalence when there is a term inhabiting the type
\[\operatorname{isEquiv}(f)\coloneqq\sum_{g:B\to A}(g\circ f=_{A\to A} \operatorname{id}_{A})\times\sum_{h:B\to A}(f\circ h=_{B\to B} \operatorname{id}_{B}).\] (is-equiv)
This type is a _proposition_ in the sense that it is contractible whenever it is inhabited. By [93, 12.1.3], this notion can equivalently be captured by the type
\[\operatorname{isProp}(A)\coloneqq\prod_{x,y:A}(x=_{A}y).\] (is-prop)
When a type \(A\) is a proposition (_i.e._, \(\operatorname{isProp}(A)\) is inhabited), then it can be treated as a _mere property_ (up to homotopy, _i.e._, a contractible and thus trivial choice of data) rather than additional _structure_. The fact that \(\operatorname{isEquiv}(f)\) is always a proposition hence means that being an equivalence is, in fact, a _property_ of a map, much in line with the expected intuition. It turns out there is a further equivalent characterization of when a map is an equivalence in that sense, namely if and only if all its _fibers_\(\operatorname{fib}(f,b)\coloneqq\sum_{x:A}(f(x)=_{A}b)\) are contractible, _i.e._,
\[\operatorname{isEquiv}(f)\simeq\prod_{b:B}\ \operatorname{isContr}( \operatorname{fib}(f,b)).\]
If type families are understood as _fibrations_\(p:\sum_{x:A}C(x)\to A\), then equivalences in this sense behave like _trivial fibrations_ (\(\operatorname{10-trivial-fibrations}\)) whose fibers are all contractible. These homotopical interpretations of Martin Lof's dependent type theory open up a whole area of research doing homotopy theory synthetically, cf. (93; 106).
**Function extensionality (FunExt).** While we do not require the univalence axiom in our formalization, we do make use of _function extensionality_, which is one of its consequences (93, Theorem 17.3.2): we will postulate the map
\[\operatorname{htpy-eq}:\prod_{X:\mathcal{U}}\prod_{A:X\to\mathcal{U}}\prod_{f,g:\prod_{x:X}A(x)}(f=g)\to\prod_{x:X}(f\,x=g\,x)\]
defined via path induction by
\[\operatorname{htpy-eq}(X,A,f,f,\operatorname{refl}_{f},x)\coloneqq\operatorname{refl}_{f\,x}\] (htpy-eq)
is an equivalence, _i.e._, there exists a term
\[\operatorname{funext}:\prod_{X:\mathcal{U}}\prod_{A:X\to\mathcal{U}}\prod_{f,g:\prod_{x:X}A(x)}\operatorname{isEquiv}(\operatorname{htpy-eq}_{X,A,f,g}).\]
**The \(\infty\)-groupoid structure on a type.** By (iterated) path induction one can prove the existence of functions
\[\prod_{x,y:A}(x=_{A}y)\to(y=_{A}x),\] (rev) \[\prod_{x,y,z:A}(x=_{A}y)\to(y=_{A}z)\to(x=_{A}z)\] (concat)
serving to reverse paths as well as concatenating them. One can show that these satisfy the expected groupoid laws, but only up to propositional equality, endowing every type canonically with the structure of a (weak) \(\infty\)-groupoid, cf. (47; 107).
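For instance, path reversal can be read off from the induction principle (ind-path); a minimal sketch, instantiating the family at \(C(y,p):\equiv(y=_{A}x)\) for a fixed \(x:A\):

\[\mathrm{rev}_{x}:\equiv\mathrm{ind}_{=}(\mathrm{refl}_{x}):\prod_{(y,p):\sum_{z:A}(x=_{A}z)}(y=_{A}x),\qquad\mathrm{rev}_{x}(x,\mathrm{refl}_{x})\equiv\mathrm{refl}_{x}.\]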
While \(\infty\)-groupoids are special cases of \(\infty\)-categories, in a general \(\infty\)-category we require directed "arrows" that are not necessarily reversible. This suggests the following extensions of the underlying type theory.
### Extension 1: Cube and tope layers
Intuitively, a synthetic \(\infty\)-category is a type where directed arrows can be composed up to homotopy. To reason about directed arrows, their composites, and other shapes arising from this the idea is to introduce an appropriate shape theory to the type theory. The shapes will be part of the contexts so that type families and sections can depend on them.
Each shape is viewed as a _subshape_ embedded inside a _higher dimensional_ (_directed_) _cube_. This is reminiscent of the basic setup of _cubical type theory_(5; 16; 27; 30; 78).
For the _cube layer_, consider a new pretype \(2\), equipped with two distinct elements \(0,1:2\), and a binary relation \(\leq\) making \(2\) into a strict partial order with bottom element \(0\) and top element \(1\). The Lawvere theory generated by \(2\) constitutes the cube layer, _i.e._, the cubes are exactly the finite powers \(2^{n}\), with \(2^{0}\equiv 1\). The partial order is captured by a new judgment form, called a tope:
\[x,y:2\ \vdash\ x\leq y\ \mathrm{tope}\]
The _tope layer_ is a finitary intuitionistic logic over the cube layer. The intention is to carve out _subshapes_\(\Phi\subseteq I\) of a cube \(I\) by describing it via a formula on the cube variables. In general: if \(I\) is a cube and \(\varphi\) is a tope in context \(t:I\), written as a judgment \(t:I\vdash\varphi\) tope, then \(\Phi\coloneqq\{t:I\ |\ \ \varphi\}\) is the _shape_ corresponding to \(\varphi\). This way, one can define important shapes such as the \(n\)-simplex \(\Delta^{n}\), for \(n\in\mathbb{N}\), its boundaries \(\partial\Delta^{n}\), the _\((n,k)\)-horns_\(\Lambda^{n}_{k}\) for \(k\leq n\), and more.
E.g., we have the following formulas, cf. also Figure 1:
\[\Delta^{1} \coloneqq\{t:2\mid\top\}\subseteq 2\] \[\partial\Delta^{1} \coloneqq\{t:2\mid(t\equiv 0)\vee(t\equiv 1)\}\subseteq 2\] \[\Delta^{2} \coloneqq\{(t,s):2^{2}\mid s\leq t\}\subseteq 2^{2}\] \[\partial\Delta^{2} \coloneqq\{(t,s):2^{2}\mid(s\equiv 0)\vee(s\equiv t)\vee(t\equiv 1 )\}\subseteq 2^{2}\] \[\Lambda^{2}_{1} \coloneqq\{(t,s):2^{2}\mid(s\equiv 0)\vee(t\equiv 1)\}\subseteq 2^{2}\] (03-simplicial-type-theory)
Like in cubical type theory, we connect the standard type layer with the cube and tope layer through a three-part context, which allows type families \(A\) to depend on a cube context \(\Xi\), a tope context \(\Phi\), and a type context \(\Gamma\), written as \(\Xi\mid\Phi\mid\Gamma\vdash A\).
The directed arrows in a type are now defined using our interval shape \(\Delta^{1}\) and another feature to be introduced, the _extension types_.
### Extension 2: Extension types (04-extension-types)
Let \(\Phi\subseteq\Psi\) be an inclusion of subshapes, in cube context \(I\). An _extension type_ as introduced in (Kurz, 2017), originally due to unpublished work by Lumsdaine and Shulman, captures the strict extension of a section defined on the smaller shape \(\Phi\) to the larger shape \(\Psi\). Concretely, assume given a type family \(I\mid\Psi\mid\Gamma\vdash A\) together with a partial section \(t:I\mid\Phi\mid\Gamma\vdash a(t):A(t)\) over the subshape \(\Phi\subseteq\Psi\). Then, the corresponding _extension type_ has as elements the strict extensions \(t:I\mid\Psi\mid\Gamma\vdash b(t):A(t)\) such that \(a|_{\Phi}\equiv b\). We denote the extension type by \(\big{\langle}\prod_{t:\Psi}A(t)\big{|}_{a}^{\Phi}\big{\rangle}\). In case \(A\) is a constant type, the ensuing extension type will be written as \(\big{\langle}\Psi\to A\big{|}_{a}^{\Phi}\big{\rangle}\).
In analogy to ordinary type-to-type function types, we can emulate shape-to-type function types by instantiating extension types by the "empty tope" \(\varphi\coloneqq\bot\) and the canonical term \(\mathsf{rec}_{\bot}\), allowing us to define the functions of shape \(\Psi\) into type \(A\) as \(\Psi\to A\coloneqq\big{\langle}\Psi\to A\big{|}_{\mathsf{rec}_{\bot}}^{\bot} \big{\rangle}\), and similarly for the dependent case.
**Extension extensionality (ExtExt).** Just as in (Kurz, 2017, Section 4), to make the extension types homotopically well-behaved, we also assume a version of function extensionality for extension types. In Rzk, we postulate an axiom that allows us to extend relative homotopies between extensions of a given partial section.
Namely, let \(I\) be a cube and \(\Phi\subseteq\Psi\subseteq I\) be a shape inclusion. Consider a type family \(A:\Psi\to\mathcal{U}\) with a partial section \(a:\prod_{t:\Phi}A(t)\). As in the case of dependent functions, we may use path induction to define a map for any \(f,g:\big{\langle}\prod_{t:\Psi}A(t)\big{|}_{a}^{\Phi}\big{\rangle}\) of the form
\[\mathsf{exthtpyeq}_{A,a,f,g}:(f=g)\to\big{\langle}\prod_{t:\Psi}f(t)=g(t)\big{|}_{\mathsf{refl}}^{\Phi}\big{\rangle}.\] (ext-htpy-eq)
As we did for function extensionality, we assert an extension extensionality axiom of the following form.
**Axiom 2.1** (ExtExt).: _For any \(A\), \(a\), \(f\), and \(g\) as above, the map (ext-htpy-eq) is an equivalence, i.e., there exists a term_
\[\mathsf{extext}:\prod_{A,a,f,g}\mathsf{isEquiv}(\mathsf{exthtpyeq}_{A,a,f,g})\]
In the original paper, Axiom 2.1 is derived from another version of the extension extensionality axiom (Kurz, 2017, Axiom 4.6). This version is analogous to the version of function extensionality which states that, given a family \(B:A\to\mathcal{U}\), if every fiber \(B\,x\) is contractible, then so is the type \(\prod_{x:A}B\,x\).
In the case of function extensionality, this is known to be equivalent to the version of function extensionality (FunExt). However, it is not known whether this equivalence also holds for extension types. Therefore, Riehl-Shulman assume the version appearing as (Kurz, 2017, Axiom 4.6) since they show that the other desired versions, such as (ExtExt), can be derived from it.
The axiom (Kurz, 2017, Axiom 4.6) is called _relative function extensionality_ (or _extension extensionality_), and it reads as follows. Let \(\Phi\subseteq\Psi\subseteq I\) be a shape inclusion and let \(A:\Psi\to\mathcal{U}\) be a family such that each \(A(t)\) is contractible. Then, given \(a:\prod_{t:\Phi}A(t)\), the type \(\big{\langle}\prod_{t:\Psi}A(t)\big{|}_{a}^{\Phi}\big{\rangle}\) is contractible. Our version (ExtExt) then follows as one of the consequences established in (Kurz, 2017, Proposition 4.8).
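Spelled out schematically as a type, this relative function extensionality axiom asserts a term of

\[\Big(\prod_{t:\Psi}\operatorname{isContr}(A(t))\Big)\to\prod_{a:\prod_{t:\Phi}A(t)}\operatorname{isContr}\Big(\big{\langle}\prod_{t:\Psi}A(t)\big{|}_{a}^{\Phi}\big{\rangle}\Big).\]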
## 3. Synthetic \(\infty\)-categories
Simplicial type theory is a combination of the homotopical interpretation of Martin-Lof type theory with strict shape and extension types. As demonstrated in (Kurz, 2017, 2018, 2019, 2020), this framework is powerful enough to develop \(\infty\)-category theory synthetically, within a genuinely homotopical framework.
### Pre-\(\infty\)-categories and \(\infty\)-categories
**Hom types (hom).** Let \(A\) be a type with elements \(a,b:A\). The \(1\)-simplex serves as the shape for our directed arrows.
Thus, the type of _(directed) arrows_ or _homomorphisms_ from \(a\) to \(b\) is the extension type
\[\hom_{A}(a,b)\coloneqq\left\langle\Delta^{1}\to A\big{|}_{[a,b]}^{\partial \Delta^{1}}\right\rangle,\] (hom)
where \(t:\partial\Delta^{1}\vdash[a,b](t):A\) is the term with \([a,b](0)\equiv a\) and \([a,b](1)\equiv b\).
Figure 1. Some important shapes
**Identity arrows (id-arr).** By the introduction rule for extension types, any element \(x:A\) induces an arrow \(\mathsf{id}_{x}:\hom_{A}(x,x)\), defined by \(\mathsf{id}_{x}:\equiv\lambda s.x\).
**Segal types (05-segal-types).** We can now impose a synthetic version of the Segal condition [42, 99], witnessing that a type admits unique composition of arrows up to contractibility.
**Definition 3.1** (Segal types; is-segal).: A type is a _Segal type_ or a _pre-\(\infty\)-category_ if any composable pair of arrows has a unique composite, _i.e._, given a pair of arrows \(f:\hom_{A}(x,y)\) and \(g:\hom_{A}(y,z)\) the type of fillers
\[\sum_{h\hom_{A}(x,z)}\hom_{A}^{2}(f,g;h)\]
is contractible, where
\[\hom_{A}^{2}(f,g;h):=\left\langle\Delta^{2}\to A\,\big{|}_{[f,g;h]}^{\partial\Delta^{2}}\right\rangle\] (hom2)
is the type of 2-simplices bounded by a fixed choice of 1-simplices:
\[\mathsf{isSegal}(A):=\prod_{\begin{subarray}{c}x,y,z:A\\ f:\hom_{A}(x,y)\\ g:\hom_{A}(y,z)\end{subarray}}\mathsf{isContr}\left(\sum_{h:\hom_{A}(x,z)}\hom_{A}^{2}(f,g;h)\right)\]
This means that, up to homotopy, there exists a unique arrow \(g\circ f:\hom_{A}(x,z)\) acting as a composite of \(g\) and \(f\), together with a 2-cell \(\mathsf{comp}_{A,f,g}:\hom_{A}^{2}(f,g;g\circ f)\) that witnesses that the 2-simplex bounded by \(f\), \(g\), and \(g\circ f\) is filled, cf. Figure 2.
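In terms of the contraction data of (contraction-center), one can sketch this convention by taking the composite and its witness to be the center of contraction:

\[\big(g\circ f,\ \mathsf{comp}_{A,f,g}\big):\equiv c_{\,\sum_{h:\hom_{A}(x,z)}\hom_{A}^{2}(f,g;h)}.\]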
One can show that the Segal condition of Definition 3.1 can be reexpressed by saying that the type \(A\) is local with respect to an inner horn inclusion.
**Theorem 3.2** (is-segal-iff-is-local-horn-inclusion).: _A type \(A\) is Segal if and only if restriction along the shape inclusion \(\Lambda_{1}^{2}\subseteq\Delta^{2}\) is an equivalence_
\[\mathsf{isEquiv}\left(\mathsf{res}_{A}:A^{\Delta^{2}}\to A^{\Lambda_{1}^{2}}\right)\]
arrows \(\varphi:\hom_{A\to\mathcal{B}}(f,g)\) (nat-trans). This definition automatically yields the expected naturality squares without having to specify them, (Kurz, 2017, Proposition 6.6).
Further instances of automatic naturality appear in §3.2 and §5.
### Covariant families of discrete types
**Discrete types (07-discrete).** We are also interested in synthetic \(\infty\)-groupoids, meaning \(\infty\)-categories where every arrow is invertible.7 E.g., one can show that for any Segal type \(A\), the hom types \(\hom_{A}(x,y)\) are necessarily discrete. This matches up with the traditional theory and the intuition that \(\infty\)-categories are (weakly) enriched in _spaces_ as modeled by \(\infty\)-groupoids (Kurz, 2017).
Footnote 7: As shown in (Kurz, 2017, Section 7), one can drop the Rezkness assumption.
The groupoidal condition can be understood as a kind of _discreteness condition_. To make it precise, we need a comparison of paths with arrows, similarly to our treatment of Rezk completeness, cf. Section 3.1. Namely, for a type \(A\) we define
\[\operatorname{arr\text{-}eq}_{A}:\prod_{x,y:A}(x=_{A}y)\to\hom_{A}(x,y)\] (ar-eq)
via path induction by
\[\operatorname{arr\text{-}eq}_{A}(x,x,\operatorname{refl}_{x}):\equiv\operatorname{id}_{x}.\]
**Definition 3.4** (Discrete types; is-discrete).: A type \(A\) is _discrete_ or an \(\infty\)-_groupoid_ if
\[\operatorname{isDiscrete}(A):=\prod_{x,y:A}\operatorname{isEquiv}(\operatorname{arr\text{-}eq}_{A,x,y}).\]
This definition also yields the desired notion in the Segal object models (Kurz, 2017).
**Covariant families (08-covariant).** The \(\infty\)-categorical Yoneda lemma deals with families or _fibrations_ of discrete types indexed by a Segal type. These families \(C:A\to\mathcal{U}\) are supposed to be _functorial_ in the sense that an arrow \(f:\hom_{A}(x,y)\) in the base \(A\) should give a map \(f_{*}:C(x)\to C(y)\) between the fibers.
This is achieved by the notion of _covariant family_, corresponding to what semantically is often called _left fibration_, after (Kurz, 2017, SS8) and (Kurz, 2017, Section 2.1), see also (Kurz, 2017, 2017, 2017, 2017, 2017, 2017, 2017, 2017).
To define it, we have to introduce a _dependent_ version of the hom type, capturing arrows in the total type \(\sum_{x:A}C(x)\) that get mapped to a prescribed arrow in the base. This can, once again, conveniently be formulated using extension types.
**Definition 3.5** (Dependent hom; dhom).: Let \(C:A\to\mathcal{U}\) be a type family. For elements \(x,y:A\), let \(f:\hom_{A}(x,y)\) be an arrow. For elements in the fibers \(u:C(x)\) and \(v:C(y)\), the corresponding _dependent hom type_ from \(u\) to \(v\) is given by the extension type
\[\operatorname{dhom}_{C(f)}(u,v):=\left\langle\prod_{t:\Delta^{1}}C(f(t))\,\big{|}_{[u,v]}^{\partial\Delta^{1}}\right\rangle.\]
The defining property for a covariant family \(C:A\to\mathcal{U}\) says that we can lift an arrow \(f:\hom_{A}(x,y)\) in the base, given a point \(u:C(x)\) in the fiber over its source, to a dependent arrow
\[\operatorname{lift}_{C,f,u}:\operatorname{dhom}_{C(f)}(u,f_{*}u)\] (covariant-transport)
lying over \(f\), and more so, uniquely up to homotopy, cf. Figure 3.
**Definition 3.6** (Covariant family; is-covariant).: Let \(C:A\to\mathcal{U}\) be a type family. We say \(C\) is _covariant_ if the following proposition is inhabited:
\[\prod_{x,y:A}\prod_{f:\hom_{A}(x,y)}\prod_{u:C(x)}\operatorname{isContr}\left(\sum_{v:C(y)}\operatorname{dhom}_{C(f)}(u,v)\right)\]
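Mirroring composition in a Segal type, the transported point and the lift of (covariant-transport) can be sketched as the center of contraction of this type:

\[\big(f_{*}u,\ \operatorname{lift}_{C,f,u}\big):\equiv c_{\,\sum_{v:C(y)}\operatorname{dhom}_{C(f)}(u,v)}.\]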
As shown in (Kurz, 2017, Section 8), it turns out that, over a Segal type \(A\), covariant families \(C:A\to\mathcal{U}\) behave in the expected ways. Namely, the fibers are all discrete (Kurz, 2017, Proposition 8.18), and they are _functorial_ in the following sense: for elements \(x,y,z:A\), morphisms \(f:\hom_{A}(x,y)\), \(g:\hom_{A}(y,z)\), and an element in the fiber \(u:C(x)\), we get identifications
\[\begin{array}{l}g_{*}(f_{*}u)=(g\circ f)_{*}u\text{ and }(\operatorname{id}_{x})_{ *}u=u,\\ (\text{id-arr-covariant-transport})\end{array}\]
see (Kurz, 2017, Proposition 8.16).
A fundamental example is given by the _representable_ covariant families of the form \(\hom_{A}(x,-):A\to\mathcal{U}\), for \(x:A\), when \(A\) is Segal (is-segal-representable-is-covariant).
Furthermore, between covariant families \(C,D:A\to\mathcal{U}\), a fiberwise map \(\varphi:\prod_{x:A}C(x)\to D(x)\) is automatically _natural_: for any arrow \(f:\hom_{A}(x,y)\) and element \(u:C(x)\) we have an identification
\[f_{*}(\varphi_{x}(u))=\varphi_{y}(f_{*}u).\] (naturality-covariant-fiberwise-transformation)
## 4. The Rzk proof assistant
Kudasov has implemented Rzk, the first proof assistant to support simplicial type theory. In our work since the spring of 2023, we have been developing a library for Rzk, formalizing a range of results from Riehl-Shulman's work (Kurz, 2017), and
Figure 3. A covariant family \(C:A\to\mathcal{U}\)
in addition to that also the required results from standard homotopy type theory (Steintein, 1977; Steintein, 1978). The formalizations in this paper have been written for and checked with Rzk version 0.5.4.
Syntax of the formalized code in Rzk is very close to the underlying theory, allowing for easy correspondence between statements in the code and on paper. However, proofs in Rzk may appear too detailed sometimes, since, being experimental, Rzk has not yet evolved enough syntactic sugar or tools like implicit parameters, tactics, or type classes to simplify proof construction.
### Key features of Rzk
The kernel of Rzk provides the following primitive notions and capabilities.
**The universes.** There are three fixed universes: \(\mathtt{CUBE}\) of cubes, \(\mathtt{TOPE}\) of topes, and \(\mathtt{U}\) of types. In Rzk, \(\mathtt{U}\) contains \(\mathtt{CUBE}\), \(\mathtt{TOPE}\), and itself, implying an unsound "type in type." We consider such simplification acceptable for the time being and hope that Rzk will evolve proper universes in the future.
**Tope logic.** This includes both cubes and topes. Rzk has built-in unit cube 1 and directed interval cube 2 (with points \(\ast_{1}:\ 1\), \(\emptyset_{2}:\ 2\) and \(1_{2}:\ 2\) correspondingly), standard topes (Steintein, 1977, Figure 2), and the inequality topes \(\mathtt{s}\leq\mathtt{t}\) required for simplicial type theory. When done on paper, proofs in the tope logic are usually omitted as trivial, and we find that in our formalization project, only fairly small problems have been required for coherence checks. In fact, the most complicated checks we have are involved in the formalization of the general result for currying for extension types ((Steintein, 1977, Theorem 4.1); \(\mathtt{curry\text{-}uncurry}\)). Rzk offers full automation of the tope layer, which helps keep the Rzk syntax and proofs simpler and automatically locate coherence issues in proof terms.
**Dependent types.** Rzk offers basic support for dependent functions (x : A) \(\rightarrow\) B x, dependent pairs \(\Sigma\) (x : A), B x, and identity types x =_{A} y. While at the moment of writing there is no support for user-defined implicit arguments, identity types allow the indices to be implicit with terms x = y and refl instead of x =_{A} y and refl_{x : A}, resp. The absence of implicit arguments and full type inference in Rzk induces more explicit and verbose proof terms.
**Extension types.** Rzk offers two separate concepts that result in support for extension types. First, Rzk allows dependent functions to have a cube or a shape (a cube restricted with a tope) argument. These correspond to extension types restricted to \(\mathtt{rec}_{\bot}\) at the empty tope \(\bot\).
Second, any type is allowed to have a "refinement," specifying values for arbitrary tope constraints. For example, a type A \([\phi\mapsto\mathtt{x},\ \psi\mapsto\mathtt{y}]\) is a refinement of type A such that values of this type are _computationally_ equal to x when \(\phi\) holds and to y when \(\psi\) holds. Of course, x and y must agree when (\(\phi\wedge\psi\)) holds. Refinements of a type are its subtypes, and A is considered equivalent to A \([\bot\mapsto\mathtt{recBOT}]\). The subtyping is handled by Rzk, removing the need for explicit type coercions.
Combining functions depending on shapes with such refinements yields extension types. For instance, \(\hom_{A}(a,b):=\left\langle\Delta^{1}\to A\big{|}_{[a,b]}^{\partial\Delta^{1}}\right\rangle\) (hom) is defined as follows:
#def hom (A : U) (x y : A) : U := (t : \(\Delta^{1}\)) \(\rightarrow\) A [t \(\equiv\emptyset_{2}\mapsto\mathtt{x}\), t \(\equiv\)1\({}_{2}\mapsto\mathtt{y}\)]
**Sections and variables.** Rzk supports Coq-style sections,9 allowing for locally defined assumptions (variables) which are automatically added as parameters to definitions that use them. Importantly, Rzk features a mechanism for detecting implicitly used assumptions to avoid accidental circular reasoning in definitions. To ensure that such an implicit assumption is not accidental, Rzk has the uses syntax. For example, the Yoneda lemma (yoneda-lemma) itself is specified in a way that makes explicit the use of function extensionality (funext).
#def yoneda-lemma uses (funext)
  ( A : U)
  ( is-segal-A : is-segal A)
  ( a : A)
  ( C : A \(\rightarrow\) U)
  ( is-covariant-C : is-covariant A C)
  : is-equiv ((z : A) \(\rightarrow\) hom A a z \(\rightarrow\) C z) (C a) (evid A a C)
  :=...
Footnote 9: [https://rzk-lang.github.io/rzk/v0.5.4/reference/sections.rzk/](https://rzk-lang.github.io/rzk/v0.5.4/reference/sections.rzk/)
We find this particularly useful for readability, highlighting the use of axioms or other assumptions (e.g. that a certain type is Segal).
## 5. The \(\infty\)-categorical Yoneda lemma in Rzk
**The statement.** In 1-category theory, the Yoneda lemma says the following. Given a category \(\mathbb{A}\) and a _copresheaf_10_on \(\mathbb{A}\), _i.e._, a functor \(C:\mathbb{A}\rightarrow\mathsf{Set}\), for any \(a\in\operatorname{ob}(\mathbb{A})\) there is a bijection
Footnote 10: Our formalization considers the covariant case as well as the dual contravariant case.
\[\hom_{[\mathbb{A},\operatorname{Set}]}(\hom_{\mathbb{A}}(a,-),C)\cong C(a)\]
mapping a natural transformation \(\alpha\) to \(\alpha(a,\operatorname{id}_{a})\in C(a)\), naturally in both \(C\) and \(a\).
In the \(\infty\)-categorical setting, sets get replaced by \(\infty\)-groupoids. Copresheaves are modeled by left fibrations _aka_ covariant families. Accordingly, the synthetic \(\infty\)-categorical Yoneda lemma reads as follows.
**Theorem 5.1** (yoneda-lemma).: _Let \(C:A\to\mathcal{U}\) be a covariant family over a Segal type \(A\). Then, for any \(a:A\) the map_
\[\mathsf{evid}_{A,aC}:\left(\prod_{z:A}\hom_{A}(a,z)\to C(z)\right)\to C(a)\]
_defined by_
\[\mathsf{evid}_{A,aC}(\varphi):=\varphi(a,\mathrm{id}_{a})\] (evid)
_is an equivalence._
Note this result holds for pre-\(\infty\)-categories, not just \(\infty\)-categories. For semantical accounts of the \(\infty\)-categorical Yoneda lemma see e.g. [52, 64, 84, 91], and [92, Chapter 5].
**The proof.** An inverse map is constructed using the covariant transport of \(C\). Namely, we define
\[\mathsf{yon}_{A,a,C}:C(a)\to\left(\prod_{z:A}\hom_{A}(a,z)\to C(z)\right)\]
by
\[\mathsf{yon}_{A,a,C}(u):=\lambda x.\lambda f.f_{*}(u).\] (yon)
In the \(1\)-categorical Yoneda lemma, a crucial part of the work is to show that the terms \(\mathsf{yon}_{A,aC}(u)\) defined by the inverse map are actually morphisms of presheaves, _i.e._, natural transformations. In our setting, this is, in fact, an automatic consequence from both \(C\) and \(\hom_{A}(a,-):A\to\mathcal{U}\) being covariant. In the formalization considerable work goes into showing a type \(A\) is Segal if and only if the type families \(\hom_{A}(a,-)\) are covariant; for the implication relevant here, see (is-segal-representable-is-covariant).
In more detail, if \(A\) is a Segal type, \(a:A\), and \(C:A\to\mathcal{U}\) is a covariant family, let \(\varphi:\prod_{z:A}\hom_{A}(a,z)\to C(z)\) be a family of maps. Then for any \(x,y:A\) and arrows \(f:\hom_{A}(a,x)\) and \(g:\hom_{A}(x,y)\), we have
\[g_{*}(\varphi(x,f))=\varphi(y,g\circ f) \tag{1}\]
as a special case of
a general naturality property of fiberwise transformations between covariant families, which is established as part of the formalization. The remainder of the argument shows that \(\mathsf{yon}_{A,a,C}\) is a two-sided inverse to \(\mathsf{evid}_{A,a,C}\).

## 6. Comparison with other formalizations

The agda-unimath library contains a formalization of the classical Yoneda lemma for precategories, _i.e._, in set-level category
theory. Both proofs follow the same outline, proving that (**evid**) is an equivalence by constructing a two-sided inverse. A point of difference in the agda-unimath proof is that the data of the inverse involves both the function (**yon**) together with a proof of its naturality. As with our proof in Rzk, one of the composites is directly identifiable with the identity, while the other requires a calculation together with two instances of function extensionality.
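For concreteness, the composite that is directly identifiable with the identity is \(\mathsf{evid}_{A,a,C}\circ\mathsf{yon}_{A,a,C}\). A minimal sketch of that calculation, assuming the standard property that covariant transport along an identity arrow is (up to a path) the identity, reads

\[(\mathsf{evid}_{A,a,C}\circ\mathsf{yon}_{A,a,C})(u)=\mathsf{yon}_{A,a,C}(u)(a,\mathrm{id}_{a})=(\mathrm{id}_{a})_{*}(u)=u.\]

The other composite sends \(\varphi\) to \(\lambda x.\lambda f.f_{*}(\varphi(a,\mathrm{id}_{a}))\), which Eq. (1) identifies pointwise with \(\lambda x.\lambda f.\varphi(x,f\circ\mathrm{id}_{a})\); the two instances of function extensionality then turn these pointwise identifications into an identification of families of maps.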
Other differences arise from the varying ways that categorical data is encoded in Rzk vs agda-unimath. There, precategories are types with additional structure, while here pre-\(\infty\)-categories are types satisfying a property. There, representables are encoded as functors valued in the precategory of sets, while here representables are encoded as covariant type families. These differences have more of an effect on the syntax of the proof than on its structural content.
At our request, Sina Hazratpour wrote a Lean formalization of the 1-categorical Yoneda lemma, first as a self-contained formalization in Lean 3,14 with the proof of the Yoneda lemma later updated to Lean 4.15 Formal proofs in Lean are quite different from formal proofs in Rzk or in Agda because of the use of automation tactics in the interactive theorem proving mode, allowing the user to rewrite along known identifications or "simplify" the goal using known lemmas. In addition, Lean's use of type classes and automatic instance inference simplifies the syntax in the statement of the Yoneda lemma, as compared with the agda-unimath proof.
Footnote 14: [https://github.com/simp/CovariantYonedaLean3](https://github.com/simp/CovariantYonedaLean3)
Footnote 15: [https://github.com/simp/CovariantYonedaLean4](https://github.com/simp/CovariantYonedaLean4)
In the Lean 3 proof, the naturality of (**yon**) must again be checked explicitly via a proof that involves unfolding the definition of the representable functor and using the fact that functors preserve composition. The remainder of the proof proceeds as before. Interestingly, in the Lean 4 proof, Hazratpour proves a lemma -- (1) in the case where \(f\) is \(\operatorname{id}_{a}\) -- and then feeds it to the tactic aesop_cat,16 which then automatically verifies the naturality of (**yon**) and checks that the Yoneda maps are inverses.
Footnote 16: Aesop (Automated Extensible Search for Obvious Proofs) is a proof search tool for Lean 4; see [https://github.com/JLimpery/aesop](https://github.com/JLimpery/aesop)
Other formalizations of the 1-categorical Yoneda lemma appear in UniMath17(Lean, 2017) and mathlib.18
Footnote 17: [https://github.com/UniMath/UniMath/blob/7d7f6997dbe84b0d0107d4c963281c6efb97ff60/UniMath/](https://github.com/UniMath/UniMath/blob/7d7f6997dbe84b0d0107d4c963281c6efb97ff60/UniMath/)
## 7. Conclusions and future work
The formalization process led us to discover a mistake in the paper (Kudasov et al., 2017): the published proof of the "only if" direction of Proposition 8.13 employed circular reasoning.19 Fortunately, the stated result remains true. Our new formalized proof (is-segal-is-covariant-representable) now appears in (Kudasov et al., 2017).
Footnote 18: [https://leanprover-community.github.io/mathlib4_docs/Mathlib/CategoryTheory/Yoneda.html](https://leanprover-community.github.io/mathlib4_docs/Mathlib/CategoryTheory/Yoneda.html)

### Dependent and cocartesian Yoneda lemmas

A natural next step is to formalize the dependent Yoneda lemma, which can be read as a _directed arrow induction principle_, analogous to the well-known path induction principle for the identity types in standard Martin-Lof type theory.
Efforts in the direction of formalizing Buchholtz-Weinberger's proof of the cocartesian Yoneda lemma from (Buchholtz and Wein, 1986, Section 7) in Rzk are under way, but will require formalizing if not all then at least some of the preliminary structural properties and operations for cocartesian families from (Buchholtz and Wein, 1986, Section 5).
### Limits and colimits
In (Buchholtz and Wein, 2016), Bardomiano Martinez introduces limits and colimits of diagrams valued in Segal types and proves that right adjoints between Segal types preserve limits. We would like to formalize his results and explore further developments of the theory of limits and colimits.
### Improvements to Rzk
We note a few improvements for Rzk that would positively affect this and future formalization projects. First, supporting term inference and implicit arguments would help reduce the size of formalizations and, consequently, assist with readability. Second, the current implementation lacks incremental typechecking and proper module support, which makes the feedback on changes less immediate. Finally, while a minimal integration with an IDE exists,20 it still has to acquire proper language server support. We note also that Rzk's experimental diagram rendering feature21 (which is useful on small examples) could be extended further to assist with visualizations (or even interactive capabilities) for statements and constructions in simplicial type theory.
Footnote 21: There is a VS Code extension for Rzk at [https://github.com/rzk-lang/vscode-rzk](https://github.com/rzk-lang/vscode-rzk)
### Extensions of simplicial type theory
The simplicial type theory is not sufficiently powerful to prove all results of \(\infty\)-category theory contained for instance in (Rzx, 2016). A longer range goal would be to further extend this synthetic framework by including directed higher inductive types to freely generate \(\infty\)-categories, universes to classify covariant fibrations and cocartesian fibrations, and modalities for opposite \(\infty\)-categories and the \(\infty\)-groupoid core as outlined in (Buchholtz and Wein, 2016); see also (Buchholtz and Wein, 2016; Weinberger, 2016; Weinberger, 2016; Weinberger, 2016; Weinberger, 2016; Weinberger, 2016). If such theoretical developments were paired with experimental extensions to Rzk, that would greatly aid the process of exploring the expanded formal system.
## Acknowledgments
The authors are very grateful to Benedikt Ahrens, who first suggested the project of creating a proof assistant for the simplicial type theory. Fredrik Bakke contributed formalizations concerning the 2-category of Segal types and made invaluable improvements to the professionalization of the repository, drafting a style guide, overseeing its implementation, and suggesting improvements to our github workflow. Sina Hazratpour produced a formalized proof of the 1-categorical Yoneda lemma in Lean to provide a useful direct comparison. Abdelrahman Abounegm has contributed an Rzk plugin22 for MkDocs allowing for hyperlinks to the syntax-highlighted code used in this paper.
Footnote 22: [https://github.com/rzk-lang/mkdocs-plugin-rzk](https://github.com/rzk-lang/mkdocs-plugin-rzk)
|
2309.17260 | PlaceNav: Topological Navigation through Place Recognition | Recent results suggest that splitting topological navigation into
robot-independent and robot-specific components improves navigation performance
by enabling the robot-independent part to be trained with data collected by
robots of different types. However, the navigation methods' performance is
still limited by the scarcity of suitable training data and they suffer from
poor computational scaling.
In this work, we present PlaceNav, subdividing the robot-independent part
into navigation-specific and generic computer vision components. We utilize
visual place recognition for the subgoal selection of the topological
navigation pipeline. This makes subgoal selection more efficient and enables
leveraging large-scale datasets from non-robotics sources, increasing training
data availability. Bayesian filtering, enabled by place recognition, further
improves navigation performance by increasing the temporal consistency of
subgoals. Our experimental results verify the design and the new method obtains
a 76% higher success rate in indoor and 23% higher in outdoor navigation tasks
with higher computational efficiency. | Lauri Suomela, Jussi Kalliola, Harry Edelman, Joni-Kristian Kämäräinen | 2023-09-29T14:12:54Z | http://arxiv.org/abs/2309.17260v4 | # PlaceNav: Topological Navigation through Place Recognition
###### Abstract
Recent results suggest that splitting topological navigation into robot-independent and robot-specific components improves navigation performance by enabling the robot-independent part to be trained with data collected by different robot types. However, the navigation methods are still limited by the scarcity of suitable training data and suffer from poor computational scaling. In this work, we present PlaceNav, subdividing the robot-independent part into navigation-specific and generic computer vision components. We utilize visual place recognition for the subgoal selection of the topological navigation pipeline. This makes subgoal selection more efficient and enables leveraging large-scale datasets from non-robotics sources, increasing training data availability. Bayes filtering, enabled by place recognition, further improves navigation performance by increasing the temporal consistency of subgoals. Our experimental results verify the design and the new model obtains a 76 % higher success rate in indoor and 23 % higher in outdoor navigation tasks with higher computational efficiency.
## I Introduction
Autonomous visual navigation is a well-studied problem in the field of robotics [1, 2]. One line of research frames navigation in known environments as _topological navigation_[3, 4, 5], meaning purely vision-based navigation between nodes of a topological map that are represented by images. The advantage of this approach is that it does not require building a geometric reconstruction of the operating environment [6] or training environment-specific control policies [7].
Topological navigation systems have two parts: _subgoal selection_ and a _goal-reaching policy_. First, the subgoal selection module chooses the map node to reach next as a subgoal. Then, the goal-reaching policy produces control commands to take the robot to the selected subgoal. A popular approach to subgoal selection is _temporal distance prediction_, which means predicting the number of time steps between the robot's current camera observation and the subgoal candidates [8, 9, 10, 11, 12, 13]. These learning-based models are trained with offline datasets of robot trajectories.
Previous works have demonstrated impressive real-world navigation [10, 12, 13], but the temporal distance prediction approach has two significant shortcomings. First, it utilizes neural architectures that iteratively take the current observation and each subgoal candidate as input, resulting in computational complexity scaling at \(\mathcal{O}(n)\) with the number of candidate images. This requires heuristics to limit the candidates and constrains the methods available for ensuring subgoal temporal consistency. Second, the fact that temporal distance prediction requires training data that originates from robots, actual or simulated, introduces an unnecessary data bottleneck. High-quality robotics datasets are very scarce compared to general web-scale data, and models trained with simulated data suffer from generalization issues [14].
We claim that subgoal selection is not a unique problem but an instance of the broader concept of image retrieval. To address this, we present _PlaceNav_, which frames the selection as a _place recognition_[15] task. This design provides three advantages. First, the large-scale and high-diversity datasets available for training place recognition models enhance subgoal selection robustness against changes in viewpoint and appearance. Second, subgoal selection is performed by a fast nearest-neighbor search over image embeddings. This, as shown in Fig. 1, provides superior scalability and removes the need for heuristics. Finally, place recognition readily integrates with methods for ensuring temporal consistency.
In summary, our contributions are
* Navigation approach that decouples training of subgoal selection models from robotics datasets by treating the selection as a generic place recognition task.
* Integration of learning-based subgoal selection with a Bayesian filter that improves temporal consistency.
* Demonstrations with real robots to experimentally validate our design.
Code and videos are available on the project page\({}^{\dagger}\).
Fig. 1: Visual place recognition finds which map image \(I_{s}\) was captured closest to the robot’s observation \(I_{t}\) by efficient matching of image embeddings \(\mathbf{z}\).
## II Related work
**Vision-based topological navigation.** In a recent trend, topological maps have been used to divide long-horizon tasks into short-horizon segments suitable for learned goal-reaching policies [8, 16]. An essential part of such a hierarchical structure is choosing which subgoal to reach next. One approach is to learn to predict the reachability between two images from simulated rollouts [17]. Savinov _et al_. [8] proposed to use the _temporal distance_, or the number of time steps \(\Delta t\), between the current observation and a subgoal as a proxy for reachability. Its key strength is that it can be learned from offline data. While alternative approaches exist [18], temporal distance is popular and has been adopted in several recent works [9, 10, 11, 12, 13]. However, the diversity and size of datasets suitable for temporal distance learning are modest. RECON [19] with 25 h, SACSoN [20] with 75 h, and TartanDrive [21] with 5 h of navigation trajectories, each from a single location, are notable examples. Furthermore, because of the model architectures utilized, the computational complexity of temporal distance prediction scales at \(\mathcal{O}(n)\) with the number of subgoal candidates considered.
**Place recognition.** Place recognition involves recognizing places from images, often framed as image retrieval across images captured from different viewpoints and varying appearances [15]. It naturally integrates with topological navigation, as subgoal selection can be viewed as an image retrieval problem. Traditional methods for place recognition rely on aggregating handcrafted local features [22, 23, 24], but newer methods utilize deep learning to extract embeddings that can be compared efficiently using nearest-neighbor search [25, 26, 27]. The methods are trained to produce embeddings that are similar for images from the same place and dissimilar for images from different places, typically by classification loss [27] or ranking losses such as contrastive [28], triplet [25] or listwise [26] loss.
**Temporal consistency.** While subgoal temporal consistency has been studied in non-learned topological navigation literature [4], it has received limited attention in the context of learning-based methods. A robot moves along a route continually, so the transitions between the subgoals should be smooth. As a heuristic solution, SPTM [8] and GNM [12] adopted the approach of only considering subgoals within a sliding window centered on the previous subgoal. Meng _et al_. [17] utilize a similar approach but resort to global search when the window deviates from the robot's actual location.
In place recognition literature, the topic has received more attention. Early approaches [29, 30, 31, 32] utilized feature similarity matrices to find the best-matching image sequences. A newer line of work [33, 34, 35] considers descriptors that represent sequences instead of individual images. As an alternative, Xu _et al_. [36, 37] added a Bayesian filter to the matching process. In this work, we show that learning-based topological navigation also benefits from such methods.
## III System overview
In this section, we describe _PlaceNav_, our proposed navigation pipeline. First, we discuss the basic components and definitions of topological navigation. Then, we elaborate on our contributions related to subgoal selection via place recognition and subgoal temporal consistency.
### _Topological navigation fundamentals_
Autonomous navigation using topological maps generally consists of two stages. Before navigation, an operator has to perform a manual 'reference run' to capture the desired route. The robot saves images along the route that compose the topological map \(\mathcal{M}\) for navigation.
During navigation, the robot-agnostic topological navigation algorithm is combined with a robot-specific controller. At each inference step \(t\), the current robot observation \(I_{t}\) is compared to the different subgoal candidate images \(I_{s}\in\mathcal{M}\) at nodes \(s=[0,1,\ldots,S]\) of the topological map. One of the nodes is selected as the next subgoal, and an image-based goal-reaching policy produces the motion plan to reach it. In this work, we experiment with the different subgoal selection methods and adopt the waypoint estimation approach proposed by Shah _et al_. [12] as the goal-reaching policy. This approach defines the motion plan as a sequence of \(\tau\) waypoints \(\{p_{i},\psi_{i}\}_{i},\ i=[0,1,\ldots,\tau]\) that guide the robot to the subgoal. The waypoints, defined as metric coordinates \(p_{i}\) and heading angle \(\psi_{i}\) in the robot's local coordinate frame, are tracked by a robot-specific controller.
### _Subgoal selection via place recognition_
PlaceNav introduces the following modifications to the subgoal selection procedure. Instead of computing temporal distances \(\Delta t\) between each observation and subgoal candidate pair, we use a place recognition model \(f_{enc}\) to process the observation and map images separately. The model produces image embeddings \(\mathbf{z}_{t}\) and \(\mathbf{z}_{s}\) that can be compared by Euclidean distance, enabling efficient subgoal search. Figure 1 visualizes the concept.
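As a minimal illustration of this step (with hypothetical function and variable names, not the actual PlaceNav code), map descriptors are computed once offline and each query reduces to a nearest-neighbor search in embedding space:

```python
import numpy as np

def build_map_embeddings(map_images, f_enc):
    # Offline: encode every map node image I_s into an embedding z_s.
    return np.stack([f_enc(img) for img in map_images])    # shape (S, D)

def select_subgoal(obs_image, map_embeddings, f_enc):
    # Online: encode the observation I_t and match it against all nodes at once.
    z_t = f_enc(obs_image)                                  # shape (D,)
    dists = np.linalg.norm(map_embeddings - z_t, axis=1)    # Euclidean distances
    best_node = int(np.argmin(dists))
    # The node following the best match along the route becomes the subgoal.
    subgoal_node = min(best_node + 1, len(map_embeddings) - 1)
    return subgoal_node, dists
```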
**Training data availability.** The temporal distance prediction models are trained to predict the \(\Delta t\) between two image frames sampled from a trajectory driven by a robot. This limits the amount and diversity of potential training data. Place recognition methods can be trained with data from more generic sources: images of different places, preferably captured at various points in time, from different viewpoints, and under different environmental conditions. The images' rough position and orientation information provides the annotations. Google StreetView images, for example, are well-suited as training data. The sizes of place recognition datasets are on the order of millions of images [38, 39, 40], the SF-XL [27] alone consisting of 41M images.
**Computational complexity.** The computational complexity of temporal distance prediction scales linearly with the number of subgoal candidates considered at each inference step. Because of this, the number of subgoal candidates must be limited, which is commonly implemented as a _sliding window_ over the map nodes. The window is centered on the subgoal from the previous step, and only the nodes within the window are considered potential subgoals.
Place recognition enables computation and storage of the descriptors for the map images offline before robot operation. Thus, the inference computational complexity of subgoal selection is decoupled from the number of subgoal candidates being considered. The descriptors can be matched in milliseconds by nearest neighbor search [41]. Consequently, heuristics to limit the number of subgoal candidates are not needed from the perspective of computational budget.
### _Temporal consistency_
Limiting the number of subgoal candidates also enhances temporal coherence between inference steps, preventing erratic subgoal selection _e.g_. due to visually similar content. A sliding window over the map achieves this to some extent. However, the window may drift away from the robot's location, making correct subgoal selection impossible. Bayesian filtering, enabled by efficient matching of image embeddings, is an alternative strategy for enforcing temporal consistency.
Xu _et al_. [36] propose one such approach for use with place recognition methods, which we adapt for our problem. The idea, illustrated in Fig. 2, is to formulate place recognition as the measurement step of a discrete Bayesian state estimator. This filter maintains a belief of the robot's location over the map nodes by recursively updating its posterior distribution. We present the key equations here for completeness but refer the reader to the original paper for details.
Given an initial posterior belief distribution, a motion model propagates the belief into the future in a prediction step. If the robot's local movement (_i.e_. odometry) is not being tracked, as is the case with our system, the motion model is simply
\[p(s_{i}|s_{j})\propto\begin{cases}1&w_{l}\leq i-j\leq w_{u}\\ 0&\text{otherwise}\end{cases}\enspace. \tag{1}\]
From each node, the robot has an equal probability of moving up to \(w_{u}\) steps toward the goal, staying put, or moving \(w_{l}\) steps back. Other transitions have zero probability.
The prediction step is followed by a measurement step, where a place recognition query between the observation and the map nodes produces a measurement belief by the measurement function
\[p(\mathbf{z}_{t}|s,\mathcal{M})\propto g(\mathbf{z}_{t},s,\mathcal{M})=\exp(- \lambda_{1}\|\mathbf{z}_{t}-\mathbf{z}_{s}\|_{2})\enspace, \tag{2}\]
where \(\mathbf{z}_{t}\) is the observation embedding, \(\mathbf{z}_{s}\) is a map node embedding at the state \(s\) being considered, and \(\mathcal{M}\) is the map. \(\lambda_{1}\) scales the effect of each measurement on the posterior. Its value is automatically determined at the beginning of each navigation session as proposed by the authors [36].
The measurement belief is multiplied by the belief from the prediction step to acquire the posterior belief distribution \(p(s)\). The map node with the highest posterior probability is considered the closest to the latest observation. This filter significantly improves the stability of subgoal selection. Unlike the sliding window approach, the filter maintains full posterior belief over all map nodes, so it cannot get lost. It can solve the 'kidnapped robot' problem [42], whereas the sliding window requires the start node of the robot to be specified manually.
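A sketch of one possible implementation of this filter is given below; the variable names are ours, and \(w_{l}\), \(w_{u}\), and \(\lambda_{1}\) correspond to Eqs. (1) and (2). It illustrates the predict-update cycle rather than reproducing the authors' code:

```python
import numpy as np

class DiscreteBayesFilter:
    def __init__(self, num_nodes, w_l=-1, w_u=2, lambda1=1.0):
        self.belief = np.full(num_nodes, 1.0 / num_nodes)   # initial belief
        self.w_l, self.w_u, self.lambda1 = w_l, w_u, lambda1

    def predict(self):
        # Motion model (Eq. 1): from node j, equal probability of ending up at
        # any node i with w_l <= i - j <= w_u; zero probability elsewhere.
        n = len(self.belief)
        predicted = np.zeros(n)
        for j, b in enumerate(self.belief):
            lo, hi = max(0, j + self.w_l), min(n - 1, j + self.w_u)
            predicted[lo:hi + 1] += b / (hi - lo + 1)
        self.belief = predicted

    def update(self, z_t, map_embeddings):
        # Measurement model (Eq. 2): likelihood decays exponentially with the
        # distance between the observation embedding and each node embedding.
        dists = np.linalg.norm(map_embeddings - z_t, axis=1)
        self.belief *= np.exp(-self.lambda1 * dists)
        self.belief /= self.belief.sum()
        return int(np.argmax(self.belief))                  # most likely node
```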
### _Implementation details_
**Architecture & Training.** For the place recognition part of the PlaceNav, we use a CosPlace network [27] because of its high performance and simple training process. The model architecture comprises a convolutional encoder and a generalized mean pooling (GeM) layer. The model is trained via classification loss, enabled by dividing the training data into distinct spatial groups. As the original CosPlace model was trained with high-resolution images (\(512\times 512\)), the training checkpoints provided by the authors do not work well with the \(85\times 64\) images we need to use for comparison with the baseline GNM model. For our experiments, we train a CosPlace model from scratch using an EfficientNet-B0 [43] backbone and the 41.2M images of the San Francisco eXtra Large (SF-XL) dataset [27], resized to \(85\times 85\) during training. The model was configured to extract 512-dimensional descriptors. Otherwise, we followed the training procedure outlined in [27]. We will refer to this low-resolution model as _CosPlace-LR_.
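A rough sketch of such a descriptor network (EfficientNet-B0 features, GeM pooling, and a projection to 512 dimensions) is shown below. The head and layer choices are our assumptions for illustration and do not reproduce the trained CosPlace-LR checkpoint:

```python
import torch
import torch.nn as nn
import torchvision

class GeM(nn.Module):
    # Generalized mean pooling over the spatial dimensions.
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x):                              # x: (B, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)
        return x.mean(dim=(-2, -1)).pow(1.0 / self.p)  # (B, C)

class DescriptorNet(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        backbone = torchvision.models.efficientnet_b0(weights=None)
        self.features = backbone.features              # conv trunk, 1280 channels
        self.pool = GeM()
        self.proj = nn.Linear(1280, dim)

    def forward(self, images):                         # images: (B, 3, H, W)
        z = self.proj(self.pool(self.features(images)))
        return nn.functional.normalize(z, dim=-1)      # L2-normalized descriptors
```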
The Shah _et al_. [12] waypoint estimation was chosen as the goal-reaching policy. We do not retrain the model and use the pre-trained weights provided by the authors.
**Deployment.** During inference, the robot uses place recognition to identify the map node that best matches the current observation. The next node along the route after the best-matching node is selected as the subgoal. We implemented two distinct temporal consistency methods in the subgoal selection. The first is the sliding window used in prior works [8, 12, 17], and the second is our implementation of the discrete Bayesian filter proposed by Xu _et al_. [36]. At the beginning of a route, the sliding window is initialized to the first node. The initial belief distribution of the discrete filter is acquired from the first place recognition query, meaning that it operates in a 'kidnapped robot' mode. We set the discrete filter motion model transition boundaries to \(w_{u}=2\) and \(w_{l}=-1\) based on calibration experiments.
Fig. 2: The discrete Bayesian filter alternates place recognition measurement and motion model prediction. \(p(\mathbf{s})\), the posterior belief, determines the best matching node.
After choosing the subgoal to reach next, the goal-reaching policy predicts a series of 5 waypoints \(\{p_{i},\psi_{i}\}\). The robot follows these waypoints in an open-loop fashion until the next prediction using the robot's low-level controller. The prediction loop of subgoal selection and waypoint prediction runs at 5 Hz. The place recognition and waypoint estimation models receive \(85\times 64\) resolution images as input.
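For illustration, the deployment loop described above could be organized roughly as follows; the camera, goal-reaching policy, and low-level controller interfaces are hypothetical placeholders rather than the actual system components:

```python
import time

def navigation_loop(camera, f_enc, map_images, map_embeddings, bayes_filter,
                    goal_policy, controller, rate_hz=5):
    # Hypothetical glue code: subgoal selection and waypoint prediction at 5 Hz,
    # with open-loop waypoint tracking in between predictions.
    period = 1.0 / rate_hz
    goal_node = len(map_images) - 1
    while True:
        start = time.time()
        obs = camera.read()                             # current observation I_t
        z_t = f_enc(obs)
        bayes_filter.predict()
        best_node = bayes_filter.update(z_t, map_embeddings)
        if best_node == goal_node:
            controller.stop()                           # 'reached goal' signal
            break
        subgoal_image = map_images[min(best_node + 1, goal_node)]
        waypoints = goal_policy(obs, subgoal_image)     # 5 waypoints {p_i, psi_i}
        controller.track(waypoints)                     # robot-specific tracking
        time.sleep(max(0.0, period - (time.time() - start)))
```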
## IV Experimental setup
We performed navigation experiments with real robots in diverse environments to enable an informed comparison of PlaceNav and the prior work. We conducted 360 repetitions of different routes with different robots, subgoal selection methods, and temporal consistency approaches, adding up to a total of 19 km of navigation testing. We also evaluated the subgoal selection methods offline using a place recognition benchmark. With our experiments, we aim to answer the following research questions:
* **Q1.** Does subgoal selection require models trained with data originating from robots, or could more efficient place recognition models replace them?
* **Q2.** How robust are temporal distance prediction models to viewpoint and appearance changes?
* **Q3.** How does subgoal temporal consistency affect navigation performance?
### _Robot navigation experiments_
**The robots.** We experimented with two different robots, a Turtlebot2, and a Robotnik Summit XL Steel, shown in Fig. 3. Turtlebot is a small research platform intended for indoor use. Summit is a 4-wheeled skid-steer ground vehicle with a weight of 90 kg, load capacity of 130 kg, and a maximum velocity of 3 m/s. Both robots carry a front-facing 175\({}^{\circ}\) field-of-view fish-eye camera, which is used for navigation, and a laptop computer that runs the navigation algorithms. The laptop has a Nvidia Geforce GTX 1070 GPU and an Intel i7-7700 CPU. The deep learning components of the navigation stack run on the GPU.
**Baseline.** We use the temporal distance prediction from GNM [12] by Shah _et al_. as the baseline. We utilize the GNM for waypoint prediction in all experiments and only experiment with the subgoal selection. The model takes the observation and subgoal images concatenated along the channel dimension as inputs. It was trained with \(85\times 64\) images from 54 hours of navigation trajectories. The inference loop runs at 5 Hz. At each step, the node with the smallest temporal distance \(\Delta t\) above 3 is selected as the subgoal.
As the discrete filter requires the distance between the current observation and all the map nodes at each step, using it with the GNM is not computationally feasible. Thus, with GNM, we only utilize the sliding window.
**Indoor navigation.** The indoor experiments were conducted with the Turtlebot robot. We tested the methods along 20 different routes, performing 3 repetitions of each route with each method. Fig. 4 shows examples along the routes. Routes where all the methods fail were excluded from the quantitative analysis. The experiments took place at a university campus, with buildings from various eras with diverse appearances.
The lengths of the test routes were uniformly distributed in the range from 15 m to 35 m, and they contained various amounts of difficult content, such as turning maneuvers and passing narrow gaps. We chose routes that do not contain content that causes navigation failures because of errors in waypoint estimation. Such content, _e.g_. very narrow gaps, and turning maneuvers in open areas with few salient features can cause the robot to veer off course even though the subgoal selection algorithm is not failing.
**Outdoor navigation.** The outdoor experiments were conducted with the Summit XL robot. We experimented on 20 test routes in an urban environment, ranging from 50 m to 150 m in length. Like the indoor experiments, each test route was repeated 3 times with each method.
In indoor calibration tests before the actual experiments, correct subgoal selection led to accurate waypoint estimation. In outdoor tests, we observed more variability. This is likely due to increased environmental noise and larger appearance changes between the map images and robot operation; _e.g._, changes in ambient illumination led to complete waypoint estimation failure despite a good subgoal choice. Therefore, in the actual experiments, we maintained test run illuminance within a 10 000 lx range of the reference run. A Konica Minolta CL-70F CRI illuminance meter was used for this purpose.
**Evaluation criteria.** We follow the evaluation guidelines for image-goal navigation in [44] and measure the navigation performance by the _success rate_ (SR). Success rate describes the ratio of successful and failed repetitions of a test route.
Fig. 4: Examples along the test routes. The top row is from outdoor tests, bottom row is from indoors.
Fig. 3: The robots: A Turtlebot2 (left) and a Robotnik Summit XL Steel (right)
Repetition is considered successful when the robot reaches a pose where its camera view corresponds to the final node of the topological map. The subgoal selection must also localize to this node, triggering a navigation stop signal. We do not use distance-based success criteria, which are more common but less aligned with the robot's goal determination. While last-mile geometric navigation could enhance goal-reaching precision, as suggested by Wasserman _et al_. [45], it is unnecessary for the scope of this work. A repetition is considered a failure when the robot takes actions that prevent it from reaching the goal, _i.e_. sending the 'reached goal' signal without the goal view in sight, risking immediate collisions, or deviating significantly from the test route without recovery prospects.
### _Offline evaluation_
To enable a reproducible comparison of temporal distance prediction and place recognition, we tested GNM on standard place recognition benchmarks. The VPR-Benchmark by Berton _et al_. [46] was used to facilitate the experiments. We modified the benchmark to enable the evaluation of GNM, which simultaneously takes the query and reference images as inputs. A subset of the VPR-Benchmark datasets where the size of the test database is small enough that GNM can process it in a reasonable time was picked for evaluation. We compared the performance of our low-resolution CosPlace-LR model and the GNM model. The test images were resized to \(85\times 64\). For reference, we additionally evaluate the standard CosPlace model with full-resolution images.
Place recognition performance is assessed using the Recall@N score, which measures how often one of the top N images retrieved is within 25 m of the query image location. In the case of temporal distance prediction, the top N images are those with the smallest predicted temporal distances.
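For reference, Recall@N over precomputed embeddings and image positions can be computed along the following lines (a sketch with assumed array shapes; the benchmark code of [46] is the authoritative implementation):

```python
import numpy as np

def recall_at_n(query_emb, db_emb, query_pos, db_pos,
                n_values=(1, 5, 10), radius_m=25.0):
    # Fraction of queries for which at least one of the top-N retrieved database
    # images lies within radius_m meters of the query position.
    hits = {n: 0 for n in n_values}
    for z_q, p_q in zip(query_emb, query_pos):
        order = np.argsort(np.linalg.norm(db_emb - z_q, axis=1))
        for n in n_values:
            top_n_pos = db_pos[order[:n]]
            if np.any(np.linalg.norm(top_n_pos - p_q, axis=1) <= radius_m):
                hits[n] += 1
    return {n: hits[n] / len(query_emb) for n in n_values}
```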
## V Results
### _Robot navigation experiments_
Tables I and II display the results of the navigation experiments with the Turtlebot and Summit XL.
**Indoors.** The GNM baseline has a notably lower success rate than the proposed place recognition based approaches. PlaceNav shows a 56% SR increase, which rises to 77% with the Bayesian filter. This observation is interesting given that GNM training includes indoor images from the GS4 [47] dataset, while CosPlace models are trained on outdoor Google Streetview images. Place recognition models exhibit broad generalization capabilities, learning features that generalize across domains.
Similar to Shah _et al_. [12], we split the test routes into 'Easy' and 'Hard' categories in posterior analysis. The categories are based on the number of narrow turns and tight passages along the routes. We chose a threshold that splits the routes into evenly sized groups. For indoor routes, this threshold was set to 4. 'Easy' routes had fewer than 4 such features, while 'Hard' routes had 4 or more. The categorization provides further insight into the method performances. While the differences in SR are minimal for 'Easy' routes, the advantage of place recognition is evident in the 'Hard' category. GNM's SR decreases by 50% from 'Easy' to 'Hard,' but PlaceNav maintains the same SR, even improving when the Bayesian filter is employed.
Introducing the Bayesian filter yields clear advantages over the sliding window method. Visual inspection confirms improved stability and performance during turns, especially in narrow gaps. The filter mitigates errors such as mid-turn direction changes that can arise because of the erratic behavior of the sliding window approach.
**Outdoors.** Outdoors, the success rate gap between methods is narrower compared to indoors, despite CosPlace-LR's outdoor training data. One possible explanation is the higher variation in waypoint estimation performance observed in the calibration tests. Consequently, the waypoint estimation module contributes more significantly to the final SR's, diminishing the effect of subgoal selection performance. The magnitude of the performance increase brought by the Bayesian filter is consistent with the indoor experiments, the improvement being around 10 percentage points in both cases. The differences in SR's between the 'Easy' and 'Hard' categories are similar to the indoor experiments. GNM's SR drops by half from 'Easy' to 'Hard,' whereas PlaceNav exhibits minimal to no performance decrease, emphasizing place recognition's effectiveness in maneuvers where having a correct subgoal is crucial.
In the analysis of outdoor experiments, an even distribution of routes between the 'Easy' and 'Hard' categories was achieved by a threshold of 3 turns or narrow passages per route. The occurrence of visually bursty content [49], characterized by repetitive geometric patterns that heavily influence image embeddings, poses a challenge for place recognition methods [15, p.12] on certain test routes (see Fig. 5). This issue, which causes the relatively low SR of PlaceNav with the sliding window in the 'Easy' category, can lead
TABLE II: **Outdoors experiment** success rates over 60 repetitions, driven with the Summit XL.

| Method | Type | Temporal filter | Easy (n=24) | Hard (n=36) | Total (n=60) |
| --- | --- | --- | --- | --- | --- |
| GNM [12] | T | Window | **0.67** | 0.33 | 0.47 |
| PlaceNav | P | Window | 0.46 | 0.44 | 0.45 |
| PlaceNav | P | Bayesian | **0.67** | **0.53** | **0.58** |
TABLE I: **Indoors experiment** success rates over 60 repetitions, driven with the Turtlebot.

| Method | Type | Temporal filter | Easy (n=33) | Hard (n=27) | Total (n=60) |
| --- | --- | --- | --- | --- | --- |
| GNM [12] | T | Window | 0.52 | 0.26 | 0.39 |
| PlaceNav | P | Window | 0.62 | 0.60 | 0.61 |
| PlaceNav | P | Bayesian | **0.65** | **0.77** | **0.69** |
to the sliding window method becoming trapped within the bursty map region, resulting in navigation failure. In contrast, the Bayesian filter handles bursty content more effectively by maintaining a full posterior belief across all map nodes and avoiding such entrapment. If the robot traverses the bursty segment successfully, the filter accurately localizes and completes the test route without issues.
### _Offline evaluation_
Here, we present the results of evaluating GNM and CosPlace on the VPR-Benchmark. We also discuss the impact of input resolution and domain shift on recall rates.
**Performance Comparison.** Table III shows a comparison of the retrieval performances of the GNM and CosPlace models across several benchmark datasets. Notably, temporal distance prediction performs worse than place recognition across all datasets. GNM's recall follows a similar trend as CosPlace-LR, with GNM achieving higher recall wherever CosPlace-LR excels. However, GNM's recall values are consistently an order of magnitude lower. For instance, on Tokyo24/7 and St. Lucia, where CosPlace-LR attains over 70% recall, GNM only reaches approximately 9%. On other datasets, GNM's performance is significantly lower.
**Impact of Input Resolution.** Decreasing image resolution has a substantial impact on CosPlace's performance. Reducing the resolution to \(85\times 64\) pixels decreases recall rates up to 58 percentage points on datasets like Tokyo24/7, MSLS, and SVOX. Interestingly, GNM's temporal distance prediction performs best on datasets where the performance differences between full and low-resolution CosPlace models are minimal, namely Pitts30k and St. Lucia. This suggests that GNM performance, too, would be improved by training the model with higher-resolution images.
**Viewpoint and Appearance Change.** Pittsburgh30k and Tokyo24/7 datasets capture images from various angles, while the rest feature images from front-facing vehicle cameras. Despite the similarity in viewpoint variation between GNM's training data and front-facing datasets, this is not reflected in recall rates. GNM performs well on Pittsburgh30k but poorly on SVOX. This discrepancy may stem from other factors contributing to domain shift between query and reference images. Besides viewpoint changes, Pittsburgh30k and St. Lucia exhibit limited variation. The other datasets contain shifts in illumination, weather, and camera, which GNM struggles to handle, not having been explicitly trained for such invariance.
### _Runtime analysis_
Table IV shows average runtimes for PlaceNav and the baseline. Replacing temporal distance prediction with place recognition significantly reduces runtime, even without optimization. The runtime of PlaceNav is not affected by the window size. Introducing the discrete Bayesian filter aligns the PlaceNav's runtime with the original temporal distance baseline, allowing resource-performance trade-offs based on computational capacity and navigation needs.
## VI Conclusion
In conclusion, our findings show that place recognition enables more accurate subgoal selection than the temporal distance prediction methods at a lower computational cost. The offline evaluation suggests that the presence of appearance change between the reference run and robot operation would further amplify the difference. Our future work will focus on appearance invariant subgoal selection models and goal-reaching policies.
## Acknowledgment
The authors thank Jani Kiypyla, Olli Suominen, and Jussi Rantala from CIVIT for access to the Summit XL. We would also like to thank Maria Andrea Cruz Blandon and German F. Torres for their valuable comments on an earlier version of the manuscript.
|
2309.11611 | Hate speech detection in algerian dialect using deep learning | With the proliferation of hate speech on social networks under different
formats, such as abusive language, cyberbullying, and violence, etc., people
have experienced a significant increase in violence, putting them in
uncomfortable situations and threats. Plenty of efforts have been dedicated in
the last few years to overcome this phenomenon to detect hate speech in
different structured languages like English, French, Arabic, and others.
However, a reduced number of works deal with Arabic dialects like Tunisian,
Egyptian, and Gulf, mainly the Algerian ones. To fill in the gap, we propose in
this work a complete approach for detecting hate speech on online Algerian
messages. Many deep learning architectures have been evaluated on the corpus we
created from some Algerian social networks (Facebook, YouTube, and Twitter).
This corpus contains more than 13.5K documents in Algerian dialect written in
Arabic, labeled as hateful or non-hateful. Promising results are obtained,
which show the efficiency of our approach. | Dihia Lanasri, Juan Olano, Sifal Klioui, Sin Liang Lee, Lamia Sekkai | 2023-09-20T19:54:48Z | http://arxiv.org/abs/2309.11611v1 | # Hate speech detection in Algerian dialect using deep learning
###### Abstract
With the proliferation of hate speech on social networks under different formats, such as abusive language, cyberbullying, and violence, etc., people have experienced a significant increase in violence, putting them in uncomfortable situations and threats. Plenty of efforts have been dedicated in the last few years to overcome this phenomenon to detect hate speech in different structured languages like English, French, Arabic, and others. However, a reduced number of works deal with Arabic dialects like Tunisian, Egyptian, and Gulf, mainly the Algerian ones. To fill in the gap, we propose in this work a complete approach for detecting hate speech on online Algerian messages. Many deep learning architectures have been evaluated on the corpus we created from some Algerian social networks (Facebook, YouTube, and Twitter). This corpus contains more than 13.5K documents in Algerian dialect written in Arabic, labeled as hateful or non-hateful. Promising results are obtained, which show the efficiency of our approach.
Hate Speech Algerian dialect Deep Learning DziriBERT FastText
## 1 Introduction
Hate speech detection, or detection of offensive messages in social networks, communication forums, and websites, is an exciting and hot research topic. Many hate crimes and attacks in our current life started from social network posts and comments MacAvaney et al. (2019). Studying this phenomenon is imperative for online communities to keep a safe environment for their users. It also has a significant benefit for security authorities and states to ensure the safety of citizens and prevent crimes and attacks.
A universally accepted definition of hate speech is currently unavailable Bogdani et al. (2021) because of the variation of cultures, societies, and local languages. Other difficulties include the diversity of national laws, the variety of online communities, and forms of online hate speech. Various definitions are proposed.
According to the Encyclopedia of the American Constitution: "Hate speech is speech that attacks a person or group based on attributes such as race, religion, ethnic origin, national origin, sex, disability, sexual orientation, or gender identity." Nockleby (2000). Today, many authors largely used this definition Guellili et al. (2022). Facebook considers hate speech as "a direct attack on people based on protected characteristics--race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some
protections for immigration status." 1. Davidson et al., who define hate speech as "language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group", propose one of the most widely accepted definitions Davidson et al. (2017). Another widely cited definition is the one proposed by Fortuna et al.: "Hate speech is a language that attacks or diminishes, that incites violence or hate against groups, based on specific characteristics such as physical appearance, religion, descent, national or ethnic origin, sexual orientation, gender identity or other, and it can occur with different linguistic styles, even in subtle forms or when humor is used." Fortuna and Nunes (2018).
Footnote 1: Community Standards; Available on:[https://www.facebook.com/communitystandards/objectionable_content](https://www.facebook.com/communitystandards/objectionable_content)
The literature review shows that the term _Hate speech_ (the most commonly used) has various near-synonyms such as abusive speech, offensive language, cyberbullying, or sexism Schmidt and Wiegand (2017). Many works have been published on hate speech detection for standard, structured languages like French Battistelli et al. (2020), English Alkomah and Ma (2022), Spanish Plaza-del Arco et al. (2021), and Arabic Albadi et al. (2018). These languages are standardized, with well-known grammar and structure, which makes their processing well mastered. However, detecting hate speech in dialects, mainly Arabic ones such as Libyan, Egyptian, and Iraqi, is still a challenging and complex task Mulki et al. (2019). Even though they are derived from literal Arabic, each country adds or defines its own specific vocabulary and semantics.
In this work, we are interested in detecting hate speech in the Algerian dialect. The latter is a complex dialect Mezzoudj et al. (2019), characterized by the variety of its sub-dialects across the regions of the country. Algeria is a country with 58 regions, each with its own specificities in the spoken language, with different words and meanings. The same word may have different meanings from one region to another; for example, '_Flouka_' means '_earrings_' in the east, while in the north it means '_small boat_'.
Moreover, new '_odd_' words are continually added to the Algerian vocabulary. The Algerian dialect is known for its morphological and orthographic richness. Given this situation, processing and understanding the Algerian dialect for hate speech detection is a complex task. The importance of this problem for the Algerian context encourages us to work on it.
To the best of our knowledge, only a few works have been proposed for hate speech detection in the Algerian dialect Boucherit and Abainia (2022), Menifi et al. (2022). Some related topics have also been treated, like sentiment analysis Abdelli et al. (2019) and sexism detection Guelli et al. (2021), which may be exploited to analyze hate speech.
In this paper, we proposed a complete end-to-end natural language processing (NLP) approach for hate speech detection in the Algerian dialect. Our approach covers the main steps of an NLP project, including data collection, data annotation, feature extraction, and then model development based on machine and deep learning, model evaluation, and inference.
Moreover, we have evaluated various machine and deep learning architectures on our corpus, built from diverse social networks (YouTube, Twitter, and Facebook) over several years (between 2017 and 2023). This corpus contains more than 13.5K annotated documents in Algerian dialect written in Arabic characters. Two classes are used for annotation (hateful, non-hateful). This work essentially allows us to provide a rich evaluation of many deep learning architectures, which is of real value to both academic and industrial communities. The obtained results are promising, and further experiments are ongoing.
This paper is structured as follows: Section 2 presents a necessary background, Section 3 reviews the most important related works, Section 4 details our proposed approach and evaluated models, Section 5 discusses the obtained results, and Section 6 concludes the paper.
## 2 Background
Hate speech is commonly defined as any communication that disparages a target group of people based on some characteristic such as race, color, ethnicity, gender, sexual orientation, nationality, religion, or other characteristic De Gibert et al. (2018).
### Hate speech
According to Al-Hassan and Al-Dossari (2019) hate speech is categorized into five categories: (1) gendered hate speech, including any form of misogyny and sexism; (2) religious hate speech including any religious discrimination, such as Islamic sects, anti-Christian, etc.; (3) racist hate speech including any racial offense or tribalism, and xenophobia; (4) disability including any sort of offense to an individual suffering from health problems; and (5) political hate speech can refer to any abuse and offense against politicians Guellil et al. (2022).
### Algerian Dialect and Arabic Languages
Arabic is the official language of 25 countries2. More than 400 million people around the world speak this language. Arabic is also recognized as the 4th most-used language on the Internet Boudad et al. (2018). Arabic is classified into three categories Habash (2022): (1) Classical Arabic (CA), which is the form of the Arabic language used in literary texts. The Quran is considered the highest form of CA text Sharaf and Atwell (2012). (2) Modern Standard Arabic (MSA) is used for writing and formal conversations. (3) Dialectal Arabic is used in daily life communication, informal exchanges, etc. Boudad et al. (2018) like the Algerian dialect, Tunisian dialect, etc.
The Algerian dialect on social networks can be written either with Arabic characters or with Latin characters (Arabizi).
Guellili et al. (2020) proposed a system for detecting hateful speech in Arabic political debates. The approach was evaluated against a hateful corpus concerning Algerian political debates. It contains 5K YouTube comments in MSA and Algerian dialects, written in both Arabic and Latin characters. Both classical algorithms of classification (Gaussian NB, Logistic Regression, Random Forest, SGD Classifier, and Linear SVC(LSVC)) and deep learning algorithms (CNN, multilayer perceptron (MLP), LSTM, and BiLSTM) are tested. For extracting features, the authors use Word2vec and FastText with their two implementations, namely, Skip Gram and CBOW. Simulation results demonstrate the best performance of LSVC, BiLSTM and MLP.
Mohdeb et al. (2022) proposed an approach for analysis and the detection of dialectal Arabic hate speech that targeted African refugees and illegal migrants on the YouTube Algerian space. The corpus contains more than 4K comments annotated as Incitement, Hate, Refusing with non-hateful words, Sympathetic, and Comment. The transfer learning approach has been exploited for classification. The experiments show that the AraBERT monolingual transformer outperforms the mono-dialectal transformer DziriBERT and the cross-lingual transformers mBERT and XLM-R.
### Hate speech detection in other Arabic dialects
Various datasets or corpora were published in different dialects, which can be used for different purposes like hate speech, racism, violence, etc. detection.
ALBayari and Abdallah (2022) is the first work to propose a corpus built from Instagram comments. This corpus contains 198K comments, written in MSA and three different dialects: Egyptian, Gulf, and Levantine. The comments were annotated as neutral, toxic, or bullying. The datasets of Al-Ajlan and Ykhler (2018) and Haidar et al. (2019) were collected from Twitter and contain, respectively, 20K and 34K multi-dialectal Arabic tweets annotated with bullying and non-bullying labels. These tweets came from various dialects (Lebanon, Egypt, and the Gulf area). Moreover, two other datasets were proposed by Mubarak et al. (2017): the first with 1.1K tweets in different dialects, and the second with 32K inappropriate comments collected from a famous Arabic news site and annotated as obscene, offensive, or clean. Albadi et al. (2018) addressed religious hate speech detection, introducing a multi-dialectal dataset of 6.6K tweets that also identifies the religious groups targeted by hate speech. Alakrot et al. (2018) also provided a dataset of 16K Egyptian, Iraqi, and Libyan comments collected from YouTube, annotated as offensive, inoffensive, or neutral.
T-HSAB Haddad et al. (2019) and L-HSAB Mulki et al. (2019) are two publicly available corpora for abusive hate speech detection. The first one is in the Tunisian dialect, combining 6K comments. The second one is in Levantine dialect (Syrian, Lebanese, Palestinian, and Jordanian dialects) containing around 6K tweets. These documents are labeled as Abusive, Hate, or Normal.
Mubarak et al. (2020) looked at MSA and four major dialects (Egyptian, Levantine, Maghrebi, and Gulf). They presented a systematic method for building an Arabic offensive language tweet dataset of 10K tweets that does not favor specific dialects, topics, or genres. For tweet labeling, they used the count of positive and negative terms based on a polarity lexicon. As features, they used static embeddings (FastText and Skip-Gram variants: AraVec skip-gram, Mazajak skip-gram) and deep contextual embeddings, namely multilingual BERT-base and AraBERT. They evaluated different models: SVM, AdaBoost, and Logistic Regression.
Mulki and Ghanem (2021) introduced the first Arabic Levantine Twitter dataset for Misogynistic language (LeT-Mi) to be a benchmark dataset for automatic detection of online misogyny written in the Arabic and Levantine dialect. The proposed dataset consists of 6.5K tweets annotated either as neutral (misogynistic-free) or as one of seven misogyny categories: discredit, dominance, cursing/damning, sexual harassment, stereotyping and objectification, derailing, and threat of violence. They used BOW + TF-IDF, SOTA, LSTM, BERT, and Majority class as classifiers.
Duwairi et al. (2021) investigated the ability of CNN, CNN-LSTM, and BiLSTM-CNN deep learning networks to classify or discover hateful content posted on social media. These deep networks were trained and tested on the ArHS dataset, which consists of around 10K tweets annotated for hateful speech detection in Arabic. Three types of experiments are reported: binary classification of tweets into (Hate, Normal); ternary classification into (Hate, Abusive, Normal); and multi-class classification into (Misogyny, Racism, Religious Discrimination, Abusive, Normal).
Aldjanabi et al. (2021) have built an offensive and hate speech detection system using a multi-task learning (MTL) model built on top of a pre-trained Arabic language model. The Arabic MTL model was experimented with two different language models to cover MSA and dialect Arabic. They evaluated a new pre-trained model 'MarBERT' to classify both dialect and MSA tweets. They propose a model to explore multi-corpus-based learning using Arabic LMs and MTL to improve the classification performance.
Haidar et al. (2017) presented a solution for the issue of cyberbullying in both Arabic and English languages. The proposed solution is based on machine learning algorithms using a dataset from Lebanon, Syria, the Gulf Area, and Egypt. That dataset contained 35K Arabic texts. In this research, Naive Bayes and SVM models were chosen to classify the text. The SVM model achieved greater precision.
Abdelali et al. (2016) built a large dataset that consists of offensive Arabic words from different dialects and topics. The tweets were labeled into one of these categories: offensive, vulgar, hate speech, or clean. Since offensive tweets involve implicit insults, the hate speech category covered tweets containing racist, religious, and ethnic insults. Different classifiers were employed in this study: an SVM model with a radial basis function kernel was mainly used with lexical features and pre-trained static embeddings, while Adaptive Boosting and Logistic Regression classifiers were employed with Mazajak embeddings. SVM gave the best precision.
_According to this literature analysis, we observe that the topic of hate speech detection in the Algerian dialect is not widely considered, and only a few works deal with this problem. Furthermore, there is a lack of Algerian datasets prepared for hate speech. All these findings motivate our proposal._
## 4 Our Methodology
To identify hate speech in messages written in Algerian dialects--whether in Arabic or Latin script-- we outline a comprehensive methodology encompassing (1) data gathering, (2) data annotation, (3) feature extraction, (4) model development, and (5) model evaluation and inference. We'll delve into each of these stages in the subsequent sections.
### Data Collection
Data collection serves as the foundational step in our approach. To effectively train our models, we require a robust dataset in the Algerian Arabic dialect. To achieve this, we sourced our data from three distinct social networks spanning the years 2017 to 2023:
**1. YouTube**: Numerous Algerian channels have emerged on YouTube, dedicated to discussing various topics, including politics, religion, social issues, youth concerns, education, and more. We identified and focused on the most influential ones, with a significant following and engagement. We employed the YouTube Data API through a Python script to gather comments from various videos.
**2. Twitter**: Even though Twitter is not widely used by Algerian citizens, we targeted it to collect tweets, using a list of keywords to search for them. Many hashtags were launched between 2017 and 2023 around particular situations and crises in Algeria, which boosted activity on Twitter; their meanings include "do not buy the oil of Rebrab," "remove them all," "mafia," and "no to a fifth presidential term." During this activity, we used these hashtags to collect an important number of tweets. Two techniques were used for this purpose: (1) Using the Twitter API: until February 2023, we were able to use this API for free to gather tweets. (2) Since February 2023, this API has become paid; consequently, we switched to solutions based on scraping with the SNScrape library, as sketched at the end of this subsection.
**3. Facebook**: To gather data from Facebook, we selected public pages talking and sharing content about politics, Algerian products, pages of some influencers, mobile operators, etc. We collected the posts, comments, and replies from these various pages. To collect data, we used different solutions: (1) Between 2017 and 2018, we were able to collect data from any public page using Graph API. (2) Since 2019, we have used either FacePager free application to collect data from public pages or (3) Facebook-scraper library for scraping.
From these sources, we have collected more than 2 million documents (messages) in different languages: Arabic, French, English, dialect, etc. The next step consists of filtering only documents written in Algerian dialects, either in Arabic or Latin characters. This work was done manually by a group of collaborators. At the end, we obtained around 900K documents.
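As an illustration of the collection steps described above, the sketch below (not the authors' original scripts) shows how comments and tweets can be pulled with the YouTube Data API v3 and the SNScrape library; the API key, video IDs, and search query are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' original scripts) of the API-based
# collection steps described above. API_KEY, VIDEO_IDS and QUERY are placeholders.
from googleapiclient.discovery import build   # pip install google-api-python-client
import snscrape.modules.twitter as sntwitter  # pip install snscrape

API_KEY = "YOUR_YOUTUBE_API_KEY"                                # hypothetical credential
VIDEO_IDS = ["video_id_1", "video_id_2"]                        # hypothetical video IDs
QUERY = "#hashtag lang:ar since:2017-01-01 until:2023-03-01"    # hypothetical search query


def youtube_comments(video_id, api_key=API_KEY):
    """Collect the top-level comments of one video with the YouTube Data API v3."""
    youtube = build("youtube", "v3", developerKey=api_key)
    comments, token = [], None
    while True:
        resp = youtube.commentThreads().list(
            part="snippet", videoId=video_id, maxResults=100,
            textFormat="plainText", pageToken=token,
        ).execute()
        comments += [item["snippet"]["topLevelComment"]["snippet"]["textDisplay"]
                     for item in resp["items"]]
        token = resp.get("nextPageToken")
        if token is None:
            return comments


def scrape_tweets(query=QUERY, limit=5000):
    """Scrape tweets matching a hashtag/keyword query with snscrape."""
    out = []
    for i, tweet in enumerate(sntwitter.TwitterSearchScraper(query).get_items()):
        if i >= limit:
            break
        out.append(tweet.content)  # attribute is named rawContent in newer snscrape versions
    return out


documents = [c for vid in VIDEO_IDS for c in youtube_comments(vid)] + scrape_tweets()
```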
### Data Annotation (Data Labeling)
To annotate data, we followed two approaches: automatic and manual. We have decided to annotate only the dialect written in Arabic characters. Our approach consists of building one model that detects hate speech only for Algerian dialects written in Arabic characters. Then, a transliteration function is developed to transliterate any Algerian document
written in Latin characters into Arabic ones, and the built model is then used to classify it. For example, "ma tech-rich zit el ailha" is transliterated into Arabic script and means "Don't buy the oil of the gods," which expresses how expensive this oil is.
We used a binary annotation: 0 for NON-HATE, representing a document that does not contain any hateful or offensive content; 1 for HATE, representing a message that contains a hateful word or whose meaning and semantics express hate.
**1- Automatic annotation**: For automatic annotation, we prepared a set of hateful keywords in the Algerian dialect discovered from our corpus. These words express hate and violence in Algerian speech. The list contains 1,298 words. It was used in a Python script to automatically tag a document with 1 if it contains at least one hateful keyword, and 0 otherwise (this lexicon lookup is sketched at the end of this subsection). The automatically annotated corpus contains 200K Algerian documents written in Arabic characters.
**2- Manual annotation**: The automatically annotated documents were then validated manually. A group of annotators checked the annotated corpus and corrected the wrongly labeled documents. The manual step validated 5,644 documents, which were kept for the next step.
**3- Dataset Augmentation for Enhanced Balance**: To bolster our dataset and enhance its equilibrium, we employed a strategy involving the incorporation of positively labeled subsets sourced from sentiment analysis datasets. In doing so, we reclassified these subsets as non-hateful, under the reasonable assumption that expressions of positive sentiment inherently exclude hate speech. Specifically, we leveraged the dataset available at [https://www.kaggle.com/datasets/djoughimehdi/algerian-dialect-review-for-sentiment-analysis](https://www.kaggle.com/datasets/djoughimehdi/algerian-dialect-review-for-sentiment-analysis), selecting solely the instances characterized by positive sentiment and relabeling them as 'normal.' However, due to preprocessing constraints, this process yielded a reduced set of just 500 documents.
Moreover, we used the corpus shared by Boucherit and Abainia (2022), containing 8.7K documents in the Algerian dialect. This dataset is labeled manually as Offensive (3,227), Abusive (1,334), and Normal (4,188). We remapped the labels of this corpus to Hateful (1), by merging the Offensive and Abusive classes, and Non-Hateful (0) for the Normal class. This corpus was filtered and processed to keep 7,345 labeled documents.
At the end of this step, we obtained an annotated balanced corpus of 13.5K documents in Algerian dialect written in Arabic characters, which will be used later to build classifiers.
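The automatic annotation step described above amounts to a simple lexicon lookup. A minimal sketch is given below; the keyword set and documents are placeholders standing in for the 1,298-word lexicon and the real corpus.

```python
# Minimal sketch of the automatic lexicon-based labelling; HATE_KEYWORDS and the
# corpus below are placeholders for the 1,298-word lexicon and the real documents.
HATE_KEYWORDS = {"keyword_1", "keyword_2"}   # placeholder lexicon entries

def auto_label(document: str) -> int:
    """Return 1 (hateful) if any lexicon word occurs in the document, else 0."""
    return int(any(token in HATE_KEYWORDS for token in document.split()))

corpus = ["example document one", "keyword_1 example document two"]   # placeholders
labels = [auto_label(doc) for doc in corpus]   # -> [0, 1]
```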
### Data Preprocessing
Before using any dataset, a cleaning or preprocessing task should be performed. We have defined a set of functions orchestrated in a pipeline, as illustrated in Figure 1.
* Remove URL: All URLs in a document are deleted.
* Remove stop words: The list of Arabic stop words provided by NLTK is used to clean meaningless words. This list has been enriched by a set of stop words detected in the Algerian dialect.
* Replace special punctuation: some punctuation concatenation can represent meaning and have an added value for the model, like: :) Means happy, :( Means upset, etc. This kind of punctuation is transformed into the corresponding emoji.
Figure 1: Preprocessing pipeline
* Normalize Arabic numbers: Arabic-Indic digits are converted into Western digits in order to standardize the writing, e.g., ١ = 1, ٢ = 2, etc.
* Normalize Arabic: some letters have special symbols or several variant written forms, which need treatment; these variants are normalized to a single canonical form.
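Taken together, these steps can be orchestrated in a single preprocessing function. The sketch below is illustrative only: it covers URL removal, stop-word removal, and digit normalization, while the emoji replacement and letter normalization steps are omitted, and the dialect-specific stop-word additions are placeholders.

```python
# Illustrative sketch of part of the preprocessing pipeline (URL removal,
# stop-word removal, digit normalization); the dialect-specific stop-word
# additions are placeholders, and the emoji/letter normalization steps are omitted.
import re
from nltk.corpus import stopwords   # requires nltk.download("stopwords")

ARABIC_STOPWORDS = set(stopwords.words("arabic")) | {"placeholder_dialect_stopword"}
EASTERN_TO_WESTERN = str.maketrans("٠١٢٣٤٥٦٧٨٩", "0123456789")

def preprocess(text: str) -> str:
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # remove URLs
    text = text.translate(EASTERN_TO_WESTERN)            # normalize Arabic-Indic digits
    tokens = [t for t in text.split() if t not in ARABIC_STOPWORDS]
    return " ".join(tokens)
```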
### Model Development

Several machine learning and deep learning models were evaluated on the annotated corpus; they are described below.

**1. Linear Support Vector Classifier (LinearSVC):** a classical machine learning classifier used as a baseline for the deep learning models.

**2. gzip + KNN:** a compression-based baseline in which documents are compared through their gzip-compressed representations and classified by a k-nearest-neighbors vote.
**3. LSTM & BiLSTM with Dziri FastText:** LSTM and BiLSTM are one of the deep learning models that are suitable for NLP problems, mainly in text classification like sentiment analysis and even for hate speech detection. In this paper, we have tested these two models against our corpus. To learn the semantics and context of messages, we used FastText as a word embedding model. In our case, we fine tuned a Dziri FastText model. This later was trained on a huge dataset of Algerian messages in Arabic characters based on the Skip-gram model. The obtained model (Dziri FastText) is used to generate an embedding matrix for our built corpus of hate speech. The sequential architecture is composed of: (i) Embedding layer which is the input layer representing the embedding matrix; (ii) Dropout layer with a rate of 0.2 to prevent over-fitting; (iii) LSTM or Bidirectional LSTM layer with units=100, dropout=0.4, recurrent_dropout=0.2; (iv) Dropout layer with a rate of 0.2 to prevent over-fitting; (v) Output dense layer, using sigmoid as an activation function. As optimizer we used Adam, and we used binary crossentropy as a loss function, batch_size = 64 and epochs= 100.
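In Keras terms, the sequential architecture described above corresponds roughly to the sketch below; dropping the Bidirectional wrapper gives the plain LSTM variant, and the vocabulary size, embedding dimension, and sequence length are illustrative values rather than the ones actually used.

```python
# Sketch of the (Bi)LSTM architecture described above (Keras / TensorFlow).
# vocab_size, embedding_dim, max_len and embedding_matrix are illustrative
# placeholders; embedding_matrix stands for the fine-tuned Dziri FastText vectors.
import numpy as np
from tensorflow.keras import layers, models

vocab_size, embedding_dim, max_len = 50000, 300, 100
embedding_matrix = np.zeros((vocab_size, embedding_dim))   # placeholder FastText weights

model = models.Sequential([
    layers.Embedding(vocab_size, embedding_dim,
                     weights=[embedding_matrix], input_length=max_len),
    layers.Dropout(0.2),
    layers.Bidirectional(layers.LSTM(100, dropout=0.4, recurrent_dropout=0.2)),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, batch_size=64, epochs=100, validation_split=0.1)
```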
**4. Dziribert-FT-HEAD:** Pre-trained transformers, like BERT, have become the standard in Natural Language Processing due to their exceptional performance in various tasks and languages. The authors in Abdaoui et al. (2021) collected over one million Algerian tweets and developed DziriBERT, the first Algerian language model, outperforming existing models, especially for the Latin script (Arabizi). This demonstrates that a specialized model trained on a relatively small dataset can outshine models trained on much larger datasets. The authors have made DziriBERT3 publicly available to the community.
Footnote 3: [https://huggingface.co/alger-ia/dziribert](https://huggingface.co/alger-ia/dziribert)
In these experiments, we fine-tuned DziriBERT by incorporating a classification head while keeping the rest of the DziriBERT parameters frozen. The classification head consists of several key components: a fully connected layer with 128 units, followed by batch normalization for stability, a dropout layer to mitigate overfitting, and a final fully connected layer that produces a single output value. We apply a sigmoid activation function to ensure the output falls between 0 and 1, which suits our binary classification task. Training employed the binary cross-entropy loss function and the Adam optimizer with a fixed learning rate of 1e-3. Additionally, a learning rate scheduler was employed to dynamically adjust the learning rate during training for improved convergence.
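A minimal PyTorch sketch of this frozen-backbone setup is shown below; the 0.3 dropout rate and the use of the [CLS] token as the pooled representation are assumptions, since they are not specified above.

```python
# Minimal PyTorch sketch of the frozen-DziriBERT + classification-head setup.
# The 0.3 dropout rate and [CLS]-token pooling are assumptions not stated above.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("alger-ia/dziribert")
bert = AutoModel.from_pretrained("alger-ia/dziribert")
bert.eval()                                  # backbone stays frozen
for p in bert.parameters():
    p.requires_grad = False

head = nn.Sequential(
    nn.Linear(bert.config.hidden_size, 128),
    nn.BatchNorm1d(128),
    nn.Dropout(0.3),
    nn.Linear(128, 1),
    nn.Sigmoid(),
)

def predict(batch_texts):
    enc = tokenizer(batch_texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        cls = bert(**enc).last_hidden_state[:, 0]   # [CLS] representation (assumed pooling)
    return head(cls).squeeze(-1)

criterion = nn.BCELoss()
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
# a torch.optim.lr_scheduler (e.g. ReduceLROnPlateau) can adjust the rate during training
```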
**5. DZiriBert with Peft+LoRA:** In our experiment, we fine-tuned the pre-trained model "DZiriBERT" using techniques called Peft (Parameter-Efficient Fine-Tuning) Mangrulkar et al. (2022) + LoRA Hu et al. (2021). These methodologies allowed us to tailor the model specifically for the Algerian dialect, making it sensitive to the unique nuances of this language. The Peft configuration is established using the LoRa technique. Parameters such as the reduction factor, scaling factor, dropout rate, and bias are defined according to the task requirements.
_Peft and LoRa Configuration:_ PEFT method has recently emerged as a powerful approach for adapting large-scale pre-trained language models (PLMs) to various downstream applications without fine-tuning all the model's parameters. Given that fine-tuning such models can be prohibitively costly, PEFT offers a viable alternative by only fine-tuning a small number of (extra) model parameters. This greatly decreases the computational and storage costs without compromising performance.
LoRA is a technique specifically designed to make the fine-tuning of large models more efficient and memory-friendly. The essential idea behind LoRA is to represent weight updates using two smaller matrices (referred to as update matrices) through a low-rank decomposition. While the original weight matrix remains frozen, these new matrices are trained to adapt to the new data, keeping the overall number of changes minimal. LoRA has many advantages, mainly the: (1) Efficiency: by significantly reducing the number of trainable parameters, LoRA makes fine-tuning more manageable. (2) Portability: Since the original pre-trained weights are kept frozen, multiple lightweight LoRA models can be created for various downstream tasks. (3) Performance: LoRA achieves performance comparable to fully fine-tuned models without adding any inference latency. (4) Versatility: Though typically applied to attention blocks in Transformer models, LoRA's principles can, in theory, be applied to any subset of weight matrices in a neural network.
_Model Initialization:_ DZiriBERT is loaded and configured with Peft using the defined parameters. The model is then fine-tuned using the tokenized datasets. We configure our model using the LoraConfig class, which includes the following hyperparameters:
- Task Type: We set the task type to Sequence Classification (SEQ_CLS), where the model is trained to map an entire sequence of tokens to a single label.
- Target Modules: The target modules are set to "query" and "value".
- Rank (r): We employ a low-rank approximation with a rank =16 for the LoRA matrices.
- Scaling Factor (\(\alpha\)): The LoRA layer utilizes a scaling factor=32, which serves as a regularization term.
- Dropout Rate: We introduce a dropout rate of 0.35 in the LoRA matrices to improve generalization.
- Bias: The bias term is set to "none," reducing the model complexity.
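These settings map onto a configuration along the following lines with the Hugging Face peft library; attaching the head through AutoModelForSequenceClassification with two labels is an assumption about how the classifier was set up.

```python
# Sketch of the PEFT/LoRA configuration with the hyperparameters listed above,
# built with the Hugging Face peft library; using a 2-label sequence
# classification head on top of DziriBERT is an assumption.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained(
    "alger-ia/dziribert", num_labels=2
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    target_modules=["query", "value"],
    r=16,               # rank of the low-rank update matrices
    lora_alpha=32,      # scaling factor
    lora_dropout=0.35,  # dropout inside the LoRA layers
    bias="none",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()   # only the LoRA (and head) parameters are trainable
```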
_Training Process:_ The model is trained using custom training arguments, including learning rate, batch sizes, epochs, and evaluation strategies. The training process leverages the Hugging Face Trainer class, providing a streamlined
approach to model fine-tuning. We train our model with the following parameters:
- learning_rate=1e-3: Specifies the learning rate as 1e-3. The learning rate controls how quickly or slowly a model learns during the training process.
- per_device_train_batch_size=16: This indicates that each device used for training (usually a GPU) will handle a batch of 16 samples during each training iteration.
- per_device_eval_batch_size=32: Similar to the above, but for evaluation, each device will process batches of 32 samples.
- num_train_epochs=5: The training process will go through the entire training dataset 5 times. An epoch is one complete forward and backward pass of all the training examples.
- weight_decay=0.01: This is a regularization technique that helps prevent the model from fitting the training data too closely (overfitting). A weight decay of 0.01 will be applied.
- evaluation_strategy="epoch": Evaluation will be performed at the end of each epoch. This allows you to check the performance of your model more frequently and make adjustments if needed.
- save_strategy="epoch": The model will be saved at the end of each epoch, allowing you to revert to the model's state at the end of any given epoch if necessary.
- load_best_model_at_end=True: Once all training and evaluation are completed, the best-performing model will be loaded back into memory. This ensures that you always have access to the best model when your training is complete.
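Put together, the arguments listed above correspond roughly to the following Trainer setup; the output directory and the tokenized dataset objects are placeholders.

```python
# Sketch of the Trainer setup with the arguments listed above; output_dir and the
# tokenized train/eval dataset objects are placeholders.
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="dziribert-lora-hate",      # hypothetical output directory
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,                   # the PEFT-wrapped DziriBERT from the sketch above
    args=training_args,
    train_dataset=train_dataset,   # tokenized training split (placeholder)
    eval_dataset=eval_dataset,     # tokenized validation split (placeholder)
)
trainer.train()
```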
**6. Dzarashield:** We built the Dzarabert4 which is a modification of the original Dziribert model that involves pruning the embedding layer, specifically removing tokens that contain non-Arabic characters. This pruning significantly reduces the number of trainable parameters, resulting in faster training times and improved inference speed for the model. This approach is aimed at optimizing the model's performance for tasks involving Arabic-based text while minimizing unnecessary complexity and computational overhead. Dzarashield5 is built upon the Dzarabert base model by incorporating a classification head. This classification head consists of sequential architecture including: a linear layer (input: 768, output: 768), followed by a Rectified Linear Unit (ReLU) activation function; a dropout layer (dropout rate: 0.1); and another linear layer (input: 768, output: 2) for binary classification. The model's hyperparameters were determined through experimentation: a learning rate (lr) of 1.3e-05, a batch size of 16, and training for 4 epochs. The Adam optimizer was used with its default parameters for optimization during training. Experimentation resulted in a better score when updating all the weights of the model rather than freezing the base BERT model and updating the classification head.
Footnote 4: [https://huggingface.co/Sifal/dzarabert](https://huggingface.co/Sifal/dzarabert)
Footnote 5: [https://huggingface.co/Sifal/dzarashield](https://huggingface.co/Sifal/dzarashield)
**7. Multilingual E5 Model:** We conducted a fine-tuning process on a pre-existing model, specifically the Multilingual E5 base model Wang et al. (2022). Our primary objective was to ascertain the efficacy of a multilingual model within the context of the Algerian dialect. In adherence to the training methodology, the prefix "query:" was systematically introduced to each data row. This precautionary measure was deemed necessary Wang et al. (2022) to avert potential indications of performance deterioration that might arise in the absence of such preprocessing. The foundation of our investigation rested upon the initialization of the pre-trained base model using the xlm-roberta-base6 architecture, which was trained on a mixture of multilingual datasets. The model is fine-tuned with an additional Dense layer followed by a Dropout Layer. The model is trained with custom hyperparameters for fine-tuning (Warmup Steps: 100; Weight Decay: 0.01 ; Epoch: 5 ; Probability of Dropout: 0.1; Train batch size: 16 ; Evaluation batch size: 64)
Footnote 6: [https://huggingface.co/xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
**8. sbert-distill-multilingual Fine Tuned:** Similar to the Multilingual E5 Model, we fine-tuned a pre-trained model known as sbert-distil-multilingual model from sentence transformer to investigate how well a multilingual model performs in Algerian Dialect. The pre-trained model is based on a fixed (monolingual) teacher model that produces sentence embeddings with our desired properties in one language. The student model is supposed to mimic the teacher model, i.e., the same English sentence should be mapped to the same vector by the teacher and by the student model. The model is fine-tuned with an additional Dropout layer and a GeLU layer via K-Fold cross validation. The model is trained with custom hyperparameters for fine-tuning (Warmup Steps: 100; Weight Decay: 0.01; Probability of Dropout: 0.1 ; Epoch: 10 ; K-Fold: 4 ; Train batch size: 16 ; Evaluation batch size: 64)
**9 AraT5v2-HateDetect** AraT5-base is the result of testing the T5 model (mT5)7 on Arabic. For comparison, three robust Arabic T5-style models are pre-trained and evaluated on ARGEN dataset Nagoudi et al. (2021). Surprisingly, despite being trained on approximately 49% less data, these models outperformed mT5 in the majority of
ARGEN tasks, achieving several new state-of-the-art results. The AraT5v2-base-1024 model 8 introduces several improvements compared to its predecessor, AraT5-base :
- More Data: AraT5v2-base-1024 is trained on a larger and more diverse Arabic dataset. This means it has been exposed to a wider range of Arabic text, enhancing its language understanding capabilities.
- Larger Sequence Length: This version increases the maximum sequence length from 512 to 1024. This extended sequence length allows the model to handle longer texts, making it more versatile in various NLP tasks.
- Faster Convergence: During the fine-tuning process, AraT5v2-base-1024 converges approximately 10 times faster than the previous version (AraT5-base). This can significantly speed up the training and fine-tuning processes, making it more efficient.
- Extra IDs: AraT5v2-base-1024 supports 100 sentinel tokens, also known as unique mask tokens. This allows for more flexibility and customization when using the model for specific tasks.
Overall, these enhancements make AraT5v2-base-1024 a more powerful and efficient choice for Arabic natural language processing tasks compared to its predecessor, and it is recommended for use in place of AraT5-base. AraT5v2-HateDetect9 is a fine-tuned model based on AraT5v2-base-1024, specifically tailored for the hate detection task. The fine-tuning process involves conditioning the decoder's labels, which include target input IDs and target attention masks, based on the encoder's source documents, which consist of source input IDs and source attention masks. After experimentation, the following hyperparameters were chosen for training AraT5v2-HateDetect (Training Batch Size: 16; Learning Rate: 3e-5; Number of Training Epochs: 4). These hyperparameters were determined to optimize the model's performance on the hate detection task. The chosen batch size, learning rate, and training epochs collectively contribute to the model's ability to learn and generalize effectively for this specific NLP task.
Footnote 8: [https://huggingface.co/UBC-NLP/AraT5v2-base-1024](https://huggingface.co/UBC-NLP/AraT5v2-base-1024)
Footnote 9: [https://huggingface.co/Sifal/AraT5v2-HateDetect](https://huggingface.co/Sifal/AraT5v2-HateDetect)
Footnote 10: [https://pypi.org/project/lang-trans/](https://pypi.org/project/lang-trans/)
### Evaluation and Inference
To evaluate the different models, we used four main metrics: Accuracy, Precision, F1-Score, and Recall. To classify a message when it is written in Arabizi (Algerian dialect written with Latin characters), a transliteration process was implemented to convert the text into Arabic characters based on the lang-trans10 library.
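A minimal sketch of how these four metrics can be computed with scikit-learn is given below; the macro averaging and the toy labels are illustrative assumptions rather than the exact settings behind Table 1.

```python
# Sketch of computing the four reported metrics with scikit-learn; the toy labels
# and macro averaging are illustrative, not necessarily the settings behind Table 1.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 1, 0, 1]   # placeholder gold labels
y_pred = [0, 1, 0, 0, 1]   # placeholder model predictions

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"Accuracy={accuracy:.2f} Precision={precision:.2f} Recall={recall:.2f} F1={f1:.2f}")
```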
## 5 Experiments and Results
To train and evaluate our models, we used the TensorFlow or PyTorch deep learning frameworks, depending on the model. We used Google Colab and Kaggle GPUs to accelerate the experiments. Table 1 provides the detailed results we obtained.
_Linear Support Vector Classifier (LinearSVC)_: The LinearSVC model offered a competitive accuracy but struggled with the recall for the hate speech class. The precision and recall trade-off indicates possible challenges in differentiating between the subtle nuances of hate and non-hate speech in the dialect. The model exhibited high precision and recall for class 0 but showed room for improvement for class 1, particularly in terms of recall. This suggests that while the model is quite good at identifying class 0, it could be improved for identifying class 1.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Model Name & Accuracy & Precision & Recall & F1 Score \\ \hline LinearSVC & 0.83 & 0.84(Class0); 0.72(Class1) & 0.96(Class0); 0.36 (Class1) & 0.9(Class0); 0.48 (Class1) \\ gzip + KNN & 0.67 & 0.63 & 0.56 & 0.60 \\ Dziribert-FT-HEAD & 0.83 & 0.81 & 0.81 & 0.81 \\ LSTM & 0.70 & 0.61 & 0.75 & 0.67 \\ Bidirect LSTM & 0.68 & 0.59 & 0.81 & 0.68 \\ DZiriBERT FT PEFT+LoRA & 0.86 & 0.83 & 0.85 & 0.84 \\ Multilingual-E5-base FT & 0.84 & 0.8 & 0.81 & 0.80 \\ sbert-distill-multilingual FT & 0.80 & 0.74 & 0.81 & 0.77 \\ Dzarashield & 0.87 & 0.87 & 0.87 & 0.87 \\ AraT5v2-HateDetect & 0.84 & 0.83 & 0.84 & 0.83 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The results of each model (FT: Fine Tuned)
_gzip + KNN_: One of the weakest models in terms of capability. Although it departs from the baseline, it is unclear whether these results would hold in out-of-distribution cases, especially since the model has no underlying mechanism that captures semantic representations of the documents.
_Dziribert-FT-HEAD:_ the model exhibits a noteworthy precision score, signifying its accuracy in correctly classifying instances as hate speech or not. However, the relatively lower recall score suggests that it missed identifying some hate speech instances. This discrepancy might be attributed to the model's lack of specialized handling for the nuances of the Algerian dialect, potentially causing it to overlook certain hate speech patterns unique to that context.
Despite this, the model's overall accuracy remains commendably high, indicating its robust performance in making accurate predictions. Additionally, the balanced precision and recall values underline its ability to strike a reasonable trade-off between minimizing false positives and false negatives, a crucial aspect in hate speech detection.
The F1 Score, being the harmonic mean of precision and recall, further validates the model's capacity to effectively identify positive samples while avoiding misclassification of negative ones. The model consistently demonstrates strong performance across multiple evaluation metrics, especially in terms of accuracy and F1 score. These results reaffirm the practicality and effectiveness of employing deep learning techniques for the challenging task of hate speech detection.
_LSTM and BiLSTM with FastText-DZ:_ Unfortunately, the results of these models are among the worst. The literature shows the strength of LSTM and BiLSTM in this kind of NLP project, but this is not the case here. The low precision is due to the model's inability to correctly classify the hate class. FastText is a good word embedding model that captures the context and semantics of a document; however, in this case it does not perform well, likely because of the way the fine-tuning was done: we took an Arabic FastText model and fine-tuned it on an Algerian dataset written in Arabic characters.
_DZiriBERT with PEFT+LoRA:_ We utilize both PEFT and LoRA to fine-tune DziriBERT, a model specifically adapted to the Algerian dialect. By employing these techniques, we were able to create a highly effective and efficient model for hate speech detection in the Algerian dialect while keeping computational costs at a minimum.
_Multilingual-E5-base Fine Tuned and sbert-distill-multilingual Fine Tuned_: The outcomes obtained from these models are noteworthy; nonetheless, their performance pales in comparison with the parameter-efficient fine-tuning of the DziriBERT model.
_DzaraShield_: The results returned by this model are satisfying considering the relatively small quantity of data it was fine-tuned on. This further shows that pretraining plays the major role in downstream tasks such as classification in our case, especially as the base model is an encoder-only architecture that captures contextual information from the input data, making it useful for a wide range of text classification tasks.
_AraT5v2-HateDetect_: The results are slightly inferior to Dzarashield. One possible explanation is the increased complexity of the architecture when compared to the Dzarabert base model. Consequently, fine-tuning becomes a more intricate task due to the larger hyperparameter search space and the limited resources in terms of computing power and data availability. As a result, it is reasonable to expect that these models would perform similarly in real-world scenarios.
### Results Discussion
The DzaraShield model has demonstrated remarkable capability in detecting hate speech in the Algerian dialect. Its outstanding precision score highlights its reliability in accurately identifying instances of hate speech. Additionally, it maintains a balanced precision and recall, indicating that it does not excessively sacrifice precision to achieve its higher recall. Such a balanced model holds considerable advantages, particularly when both false positives and false negatives carry significant consequences.
For the other models, mainly LSTM or BiLSTM with Dziri FastText, more fine-tuning should be performed to enhance the results. Moreover, future work may include hyperparameter tuning, class balancing techniques, or the integration of more complex models to improve performance across both classes.
The disparity between precision and recall in certain models warrants further investigation. Delving deeper into this issue could yield valuable insights into specific aspects of the dialect that might be contributing to this imbalance. Future experiments should prioritize understanding and addressing these discrepancies, with the goal of enhancing recall without compromising precision.
The results from various experimental models underscore the intricacies involved in hate speech detection in the Algerian dialect. While traditional machine learning and deep learning approaches provided some valuable insights, they fell short in capturing the dialect's nuanced characteristics. In contrast, the DzaraShield model emerged as the most successful approach, emphasizing the pivotal role of Encoder-only models in the realm of projects of this nature.
These findings offer valuable insights for future work in this area and underscore the potential of leveraging domain-specific knowledge, advanced fine-tuning techniques, and sophisticated architectures for the effective detection of hate speech in under-studied and complex dialects such as Algerian.
## 6 Conclusion
The importance of hate speech detection on social networks has encouraged many researchers to build solutions (corpora and classifiers) to detect suspicious messages. The literature review shows that most works are interested in text in structured languages like English, French, Arabic, etc. However, few works deal with dialects, mainly the Algerian one, which is known for its complexity and variety. To fill in the gap, we propose in this paper a complete NLP approach to detect hate speech in the Algerian dialect. We built an annotated corpus of more than 13.5K documents, which is used to evaluate various deep learning architectures. The obtained results are very promising, with DzaraShield being the most accurate.
Looking ahead, there is significant potential to enhance inference speed, particularly for the Dziribert-based and multilingual models. While this project primarily focused on Arabic characters, our next step will be to address the dialect when written in Latin characters. Embracing both Arabic and Latin characters will more accurately capture the nuances of the written Algerian dialect. Finally, we plan to expand our corpus size and explore alternative deep-learning architectures.
## 7 Acknowledgments
We would like to thank every person who has contributed to this project: Micha Freidin, Viktor Ivanenko, Piyush Aaryan, Yassine Elboustani, Tasneem Elyamany, Cephars Bonacci, Nolan Wang and Lydia Khelifa Chibout. We would also like to thank Omdena organization for giving us this valuable opportunity.
|
2309.05664 | Superconductivity-induced improper orders | The study of improper phases in the context of multiferroic materials has a
long history, but superconductivity has yet to be connected to the network of
ferroic orders. In this work, we highlight an overlooked mechanism that couples
superconducting order parameters to odd-parity orders in the charge or spin
sectors such that the latter emerge as improper orders. For that, we explore a
novel perspective of nonsymmorphic symmetries based on extended symmetry groups
in real space. We highlight how nonsymmorphic symmetries can generate rather
nonintuitive couplings between order parameters. In particular, we find that a
bilinear in the superconducting order parameter can couple linearly to
odd-parity orders in centrosymmetric systems. Our findings can account for the
unusual phenomenology of CeRh$_2$As$_2$, a recently discovered heavy fermion
superconductor, and open the door for exploring nonsymmorphic symmetries in the
broader context of improper orders with potential applications to functional
materials. | Andras Szabo, Aline Ramires | 2023-09-11T17:59:16Z | http://arxiv.org/abs/2309.05664v1 | # Superconductivity-induced improper orders
###### Abstract
The study of improper phases in the context of multiferroic materials has a long history, but superconductivity has yet to be connected to the network of ferroic orders. In this work, we highlight an overlooked mechanism that couples superconducting order parameters to odd-parity orders in the charge or spin sectors such that the latter emerge as improper orders. For that, we explore a novel perspective of nonsymmorphic symmetries based on extended symmetry groups in real space. We highlight how nonsymmorphic symmetries can generate rather nonintuitive couplings between order parameters. In particular, we find that a bilinear in the superconducting order parameter can couple linearly to odd-parity orders in centrosymmetric systems. Our findings can account for the unusual phenomenology of CeRh\({}_{2}\)As\({}_{2}\), a recently discovered heavy fermion superconductor, and open the door for exploring nonsymmorphic symmetries in the broader context of improper orders with potential applications to functional materials.
The Landau theory of phase transitions has been a leading framework for understanding ordered phases of matter. It has been successfully applied to describe magnetic and electric ordering and their non-trivial interplay in multiferroic systems, which are central in the pursuit of exotic functional materials [1; 2]. Within multiferroics, improper ferroelectrics develop electric polarization controlled by the development of a leading distortive or magnetic order parameter [3; 4; 5; 6]. More generally, improper phases are associated with order parameters that develop as a secondary effect of the development of a leading order. The interplay of superconductivity with magnetic and charge orders is empirically known in multiple families of materials and extensively discussed in the framework of intertwined [7] or vestigial orders [8]. Nevertheless, their relation in the context of improper orders has not yet been fully investigated, as the complexity of superconducting order parameters has only recently started to be acknowledged. In this work, we explore the untapped realm of superconducting-induced improper orders, highlighting the role of nonsymmorphic symmetry for the development of unexpected couplings between order parameters, which can potentially explain the unusual phenomenology recently reported in a material in the family of heavy fermion systems.
The development of ordered phases of matter can be understood on the phenomenological level based on the notion of symmetry breaking associated with the onset of an order parameter. In crystalline solids, the primary symmetries involved are spatial, generally accompanied by time-reversal symmetry. Spatial symmetries include translations, rotations, and reflections, as well as combinations of these. Particularly notable are nonsymmorphic systems, as these feature symmetry transformations that are necessarily accompanied by a fractional primitive lattice vector (PLV) translation. These systems have been extensively explored in the context of topological band structures [9; 10; 11; 12; 13], and such symmetries are key to protecting band degeneracies of Bloch's states with opposite parity at specific points in
momentum space [14; 15]. However, the effects of nonsymmorphic symmetries in the context of ordered states of matter are less explored, and most efforts have been focused on their topological classification [16; 17; 18]. Similarly, much of the research in the context of superconductivity relies on the analysis of point group symmetries, the crystalline symmetries that are left once one factors out translations, a trivial procedure in symmorphic systems. Nonsymmorphic symmetries complicate the process of factoring out translations, making the group theoretical analysis and the classification of superconducting states more cumbersome, particularly if one works in a momentum space representation [19; 15]. Here, we take a complementary view of nonsymmorphic symmetries in the context of ordered phases of matter. We classify the order parameters in the superconducting, charge, and spin sectors based on their textures directly in real space [20], taking explicit account of the nonsymmorphic nature of the crystal [21]. From this analysis, we find new types of coupling between superconducting order parameters and orders in the spin or charge sectors, which can induce odd-parity improper orders by the development of a primary order in the superconducting channel, see Fig. 1. Our results could lead to new functionalities and technological applications by highlighting an untapped mechanism for the development of improper orders and bringing new connectivities between superconductivity and other functional phases of matter.
The recently discovered heavy-fermion compound CeRh\({}_{2}\)As\({}_{2}\), depicted in Fig. 2 (a), realizes an exotic phase diagram with two superconducting phases as a function of a \(c\)-axis magnetic field [22]. While the pairing symmetry of the two phases is to date unclear, nuclear magnetic resonance (NMR) [23] and nuclear quadrupole resonance (NQR) [24] measurements on the As sites reveal antiferromagnetic order coexisting with the low-, but not with the high-field superconducting phase. Most intriguingly, specific heat and thermal expansion measurements indicate a single phase transition [25], suggesting that the onset of superconductivity coincides with that of magnetism in the entire low-field phase, in a range of magnetic fields spanning 4 T. Both NMR and NQR measurements observed site-selective broadening of the signal within the magnetic phase: of the two inequivalent As sites in the unit cell, only one of them experiences a change in the local magnetic field with the onset of magnetic ordering. These observations constrain the possible textures for magnetic moments localized at the Ce sites. Within homogeneous phases, the only consistent
Figure 1: Order parameter coupling within Landau theory. Assuming a primary superconducting order parameter \(\mathbf{\Psi}\) (potentially multicomponent), and a secondary order \(\phi\), this figure highlights the distinct phenomenology that emerges from two different types of coupling between them. Left: For a biquadratic coupling, \(f_{\lambda}=\lambda|\phi|^{2}|\mathbf{\Psi}|^{2}\), there are two transition temperatures and both order parameters onset with the characteristic \(\sqrt{T}\) temperature dependence. Right: For a linear-quadratic coupling, \(f_{\lambda}\sim\lambda\phi(\Psi_{1}^{*}\Psi_{2}\pm\Psi_{1}\Psi_{2}^{*})\), there is a single transition temperature and the subleading order parameter \(\phi\) onsets with a slower, linear temperature dependence. Here \(\lambda\) is a generic coupling constant. More details are given in the SI.
order is a layered antiferromagnet, with magnetic moments along the \(c\)-axis, changing sign across the sublattices [see Figs. 2 (b) and 3 (b)]. Two further families of solutions with in-plane magnetic moments can be obtained by doubling the unit cell [see Figs. 2 (c) and 3 (c)]. Below, we discuss how these constraints on the magnetic order impose strong restrictions on the nature of the superconducting state in this system.
Within the Landau formalism, the coexistence of superconducting and magnetic orders implies a coupling between the corresponding order parameters consistent with all operative symmetries. The most straightforward gauge-invariant coupling preserving time-reversal symmetry is quadratic in both the superconducting (\(\Psi\)) and magnetic (\(M\)) order parameters, e.g. \(\sim M^{2}|\Psi|^{2}\). This coupling, however, does not result in the same critical temperatures for the two phases. From the phenomenology of improper orders, a linear-quadratic coupling between \(M\) and \(\Psi\) with a dominant superconducting order would lead to the onset of magnetism at the superconducting critical temperature (see Fig. 1 and discussion in the SI). For this type of coupling, we need to resort to a multi-component superconducting order parameter, \(\mathbf{\Psi}=(\Psi_{1},\Psi_{2})\), which would allow for a term \(\sim iM(\Psi_{1}\Psi_{2}^{*}-\Psi_{1}^{*}\Psi_{2})\) in the free energy.
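To make the contrast in Fig. 1 explicit, consider the following minimal sketch, in which the secondary order is written as \(\phi\) (the magnetization \(M\) in the case at hand) and \(a>0\) is an assumed stiffness coefficient in our own notation. Keeping only the terms relevant for the argument,
\[f=\frac{a}{2}\phi^{2}+i\lambda\phi\left(\Psi_{1}\Psi_{2}^{*}-\Psi_{1}^{*}\Psi_{2}\right)+f_{\Psi}[\mathbf{\Psi}],\]
and minimizing with respect to \(\phi\) gives \(\phi=-(i\lambda/a)\left(\Psi_{1}\Psi_{2}^{*}-\Psi_{1}^{*}\Psi_{2}\right)\), which is real and, for the chiral combination \(\Psi_{2}=\pm i\Psi_{1}\), proportional to \(|\mathbf{\Psi}|^{2}\). Since \(|\mathbf{\Psi}|^{2}\propto(T_{c}-T)\) just below the transition, the improper order onsets linearly in temperature, in contrast to the square-root onset of a primary order parameter.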
In CeRh\({}_{2}\)As\({}_{2}\), the homogeneous magnetic order is odd parity, as the magnetic moments are opposite for sites related by inversion symmetry [see Fig. 3 (b) and (c)]. Due to the global inversion symmetry, order parameters classified from the perspective of point group symmetries are generally labeled as either even or odd, and superconducting order parameter bilinears are invariably even parity. This simplified view naively makes a linear-quadratic coupling between magnetic and superconducting order parameters impossible in CeRh\({}_{2}\)As\({}_{2}\), as it would require \(\Psi_{1}\) and \(\Psi_{2}\) to be of opposite parity. Below, we show that in the absence of accidental degeneracies, modulated superconducting orders in nonsymmorphic systems can sustain multi-component order parameters with opposite parity, allowing for this unusual coupling and for the emergence of magnetism as an improper order triggered by a primary superconducting order.
More concretely, taking as a working example the space group \(P4/nmm\) (\(\#129\)), we now discuss how unusual irreducible representations (irreps) with components of opposite parity emerge once we enlarge the symmetry group by extending the unit cell to account for modulated order parameters. The 16 symmetry operations in \(P4/nmm\) are generated by \(\bar{C}_{4z}=\{C_{4z}|\mathbf{t}_{\mathbf{x}}/2\}\), a rotation by \(\pi/2\) along the z-axis followed by \(\mathbf{t}_{\mathbf{x}}/2\), half a PLV translation along
Figure 2: (a) Crystal structure of CeRh\({}_{2}\)As\({}_{2}\), with centrosymmetric nonsymmorphic space group \(P4/nmm\) (\(\#\) 129). The Ce atoms span a body-centered tetragonal lattice with the main fourfold rotation axis going through these sites. The crystal field due to the Rh and As atoms (dark and light grey spheres, respectively) breaks inversion symmetry at the Ce sites, generating a Ce sublattice structure (blue and red spheres), characterizing this system as locally noncentrosymmetric. Notably, the inversion center is located at the midpoint between Ce sublattices, (b) Original unit cell (green) projected to the \(x-y\) plane containing two Ce atoms (1 and 2) and (c) enlarged unit cell (magenta) containing four Ce atoms (1 through 4). Dashed line represents the diagonal mirror plane, red cross is the global inversion centre, coinciding with our chosen origin. Translation by one lattice constant (indicated by \(\mathbf{t}_{\mathbf{y}}\)) is a trivial operation in the original unit cell scenario, as it takes a sublattice into itself. In contrast, in the enlarged unit cell scenario a translation by one lattice constant constitutes a new operation.
the x-axis; \(\sigma_{d}=\{\sigma_{d}|\mathbf{0}\}\), a mirror reflection along the diagonal plane (\(x=y\)); and inversion \(i=\{i|\mathbf{0}\}\) (the complete list of group operations in the standard Seitz notation is given in the SI). Given the nonsymmorphic nature of the space group, these 16 symmetry operations, when composed, do not close into themselves. If we are interested in homogeneous phases, we can redefine the composition of these operations modulo integer lattice vector translations such that they form a group. This procedure corresponds to factoring out translations to determine the little group at the \(\Gamma\) point in momentum space [15], which is isomorphic to \(D_{4h}\). For this group, the symmetry elements are organized in 10 conjugacy classes, leading to 10 irreps which are labeled as even or odd parity (see details in the SI).
If we allow the system to develop modulations encompassing multiple unit cells in a commensurate manner, we need to consider the corresponding wave-vector dictating the modulation. If the wave vector corresponds to a point in momentum space at the edge of the BZ, we expect unusual degeneracies in nonsymmorphic systems, as is extensively discussed in the context of electronic band structures in momentum space. Choosing the simplest scenario, here we double the unit cell and introduce the notion of an _extended group_ by adding to the original group new symmetry operations, which take one primitive unit cell into another in the doubled unit cell [20]; see Fig. 2 (b) and (c). The extended symmetry group is formed by the original 16 operations plus the composition of these with a PLV translation, here chosen to be \(E^{\prime}=\{E|\mathbf{t}_{\mathbf{y}}\}\), which is the extension of the identity operation \(E=\{E|\mathbf{0}\}\) (extended symmetry operations are denoted with a prime). Extending the group of symmetries to 32 elements leads to novel irreps (see SI for details). If the space group is symmorphic, the total number of irreps is simply doubled, and the new irreps behave as the original ones up to an extra minus sign under operations including a PLV translation. In contrast, if the group is nonsymmorphic, the irreps associated with the extended group can be fundamentally distinct from the original irreps.
To understand how nontrivial irreps are generated, we rely on two elementary results from group theory: (i) the number of irreps is equal to the number of conjugacy classes; (ii) the dimensions of irreps should follow the identity \(\sum_{i}|d_{i}|^{2}=|\mathbf{G}|\), where \(d_{i}\) is the dimension of the \(i\)-th irreducible representation and \(|\mathbf{G}|\) is the order of the group (the number of elements in the group). The order of the extended group in our example is twice the order of the original group. For symmorphic systems, the number of irreps of a given dimension is also doubled, trivially satisfying points (i) and (ii). For nonsymmorphic systems, the conjugacy classes in the extended group are not twice as many as in the original group. This happens because some of the original and extended operators necessarily coalesce into a single conjugacy class. This point can be understood by considering the conjugation of a generic spatial symmetry operation \(O_{B}=\{R_{B}|\mathbf{t}_{B}\}\) by another generic operation \(O_{A}=\{R_{A}|\mathbf{t}_{A}\}\):
\[O_{A}^{-1}.O_{B}.O_{A}=O_{C}=\{R_{A}^{-1}R_{B}R_{A}|R_{A}^{-1}(R_{B}\mathbf{t }_{A}-\mathbf{t}_{A}+\mathbf{t}_{B})\}, \tag{1}\]
where the dot denotes the composition of operations, and we used \(O_{A}^{-1}=\{R_{A}^{-1}|-R_{A}^{-1}\mathbf{t}_{A}\}\). The presence of the inversion operation in P4/nmm allows us to choose \(O_{A}=\{i|\mathbf{0}\}\), such that the RHS of the equation above reads \(\{R_{B}|-\mathbf{t}_{B}\}\). In the original symmetry group, associated with the primitive unit cell, if \(O_{B}\) is nonsymmorphic with \(\mathbf{t}_{B}\) corresponding to half a PLV translation, we can redefine \(-\mathbf{t}_{B}=\mathbf{t}_{B}+PLV\equiv\mathbf{t}_{B}\). As a consequence, \(O_{C}=O_{B}\) and we do not get any information about conjugacy classes. On the other hand, in the extended symmetry group \(-\mathbf{t}_{B}=\mathbf{t}_{B}+PLV\not\equiv\mathbf{t}_{B}\), such that \(O_{C}=O_{B}^{\prime}\neq O_{B}\). The relation \(O_{A}^{-1}.O_{B}.O_{A}=O_{B}^{\prime}\) indicates that \(O_{B}\) and \(O_{B}^{\prime}\) belong to the same conjugacy
class in the extended group. We conclude that all conjugacy classes containing nonsymmorphic operations in the original symmetry group are enlarged in the extended symmetry group. This coalescence of conjugacy classes tells us that we have less than twice as many irreps in the extended group, and, consequently, the new (double-valued) irreps are generally not one-dimensional. The complete analysis is summarized in Table 1 and a more detailed discussion is provided in the SI. In this example, there are 14 conjugacy classes, therefore 14 irreps. There are 4 new two-dimensional irreps in the extended group (labelled as \(E_{im}\), \(i=1,...,4\), with \(m\) standing for mixed parity), with the unusual property of having zero character associated with inversion symmetry.
To better understand this last statement, we can conjugate inversion with a nonsymmorphic operation. Choosing \(O_{B}=\{i|\mathbf{0}\}\), we find the RHS of Eq. 1 reads \(\{i|-2R_{A}^{-1}\mathbf{t}_{A}\}\). If \(O_{A}\) is nonsymmorphic and \(\mathbf{t}_{A}\) is half a PLV, \(-2R_{A}^{-1}\mathbf{t}_{A}\) is a PLV, and therefore nonsymmorphic group elements inevitably connect inversion \(i=\{i|\mathbf{0}\}\) with \(i^{\prime}=\{i|\mathbf{t}_{\mathbf{y}}\}\), leading to the enlargement of the associated conjugacy class under the group extension. This result has strong implications. The fact that the conjugacy class associated with the inversion operation is enlarged tells us that the character associated with the new irreps is zero [15]. As a consequence, the new two-dimensional irreps are associated with two-component basis functions with components of opposite parity. This fact is directly associated with known results in electronic band theory: our extended group is isomorphic to the abstract group \(G_{32}^{2}\), the little group at the \(M\) point [15]. This remarkable result challenges our intuition about inversion symmetry. In centrosymmetric systems with nonsymmorphic symmetries inversion symmetry can be "effectively broken" if the system develops textures with
\begin{table}
\begin{tabular}{c|cccccccccccccc}
\(G_{16}^{9}\) & \multicolumn{2}{c}{\(E\)} & \multicolumn{2}{c}{\(\tilde{C}_{2z}\)} & \(2\bar{C}_{4z}\) & \(2\bar{\sigma}_{x}\) & \multicolumn{2}{c}{\(2\sigma_{d}\)} & \(i\) & \(\tilde{\sigma}_{h}\) & \(2\bar{S}_{4}\) & \(2\bar{C}_{2x}\) & \multicolumn{2}{c}{\(2C_{2d}\)} \\
 & \multicolumn{2}{c}{\(\swarrow\searrow\)} & \multicolumn{2}{c}{\(\swarrow\searrow\)} & \(\downarrow\) & \(\downarrow\) & \multicolumn{2}{c}{\(\swarrow\searrow\)} & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) & \multicolumn{2}{c}{\(\swarrow\searrow\)} \\
\(G_{32}^{2}\) & \(E\) & \(E^{\prime}\) & \(\tilde{C}_{2z}\) & \(\tilde{C}_{2z}^{\prime}\) & \(4\bar{C}_{4z}\) & \(4\bar{\sigma}_{x}\) & \(2\sigma_{d}\) & \(2\sigma_{d}^{\prime}\) & \(2i\) & \(2\tilde{\sigma}_{h}\) & \(4\bar{S}_{4}\) & \(4\bar{C}_{2x}\) & \(2C_{2d}\) & \(2C_{2d}^{\prime}\) \\ \hline
\(A_{1g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
\(A_{2g}\) & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & 1 & 1 & 1 & -1 & -1 & -1 \\
\(B_{1g}\) & 1 & 1 & 1 & 1 & -1 & 1 & -1 & -1 & 1 & 1 & -1 & 1 & -1 & -1 \\
\(B_{2g}\) & 1 & 1 & 1 & 1 & -1 & -1 & 1 & 1 & 1 & 1 & -1 & -1 & 1 & 1 \\
\(E_{g}\) & 2 & 2 & -2 & -2 & 0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 & 0 & 0 \\
\(A_{1u}\) & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 & -1 & 1 & 1 & 1 \\
\(A_{2u}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 & -1 \\
\(B_{1u}\) & 1 & 1 & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\
\(B_{2u}\) & 1 & 1 & 1 & 1 & -1 & 1 & -1 & -1 & -1 & -1 & 1 & -1 & 1 & 1 \\
\(E_{u}\) & 2 & 2 & -2 & -2 & 0 & 0 & 0 & 0 & -2 & 2 & 0 & 0 & 0 & 0 \\ \hline
\(E_{1m}\) & 2 & -2 & 2 & -2 & 0 & 0 & 2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\
\(E_{2m}\) & 2 & -2 & 2 & -2 & 0 & 0 & -2 & 2 & 0 & 0 & 0 & 0 & 0 & 0 \\
\(E_{3m}\) & 2 & -2 & -2 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & -2 \\
\(E_{4m}\) & 2 & -2 & -2 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & 2 \\
\end{tabular}
\end{table}
Table 1: Character table for P4/nmm modulo two integer lattice translations. The first line gives the 10 conjugacy classes for P4/nmm modulo integer lattice translations (isomorphic to \(D_{4h}\) and the abstract group \(G_{16}^{9}\) [15]). The second line encodes the 14 conjugacy classes in P4/nmm modulo two integer lattice translations (isomorphic to the abstract group \(G_{32}^{2}\) [15]). The symmetry operations are labeled according to their associated point group operation. If the operation is accompanied by a half PLV translation, it is marked with a bar, while if it is accompanied by two orthogonal half PLV translations, it is marked with a tilde. Operations without either a bar or a tilde are pure point operations. Operations marked with a prime belong to the set of operations extended by a PLV translation. The arrows indicate if the original conjugacy class splits (\(\swarrow\searrow\)) or doubles (\(\downarrow\)) in the extended group. The first 10 irreps have well-defined parity, and their labels follow those of the \(D_{4h}\) point group. The last 4 irreps are new to the extended group and have mixed parity. The subscripts correspond to even (g), odd (u), or mixed (m) parity. In gray color, we highlight the columns that are simply doubled for the initial 10 irreps given the splitting on the conjugacy classes. In yellow, we highlight the columns with zero characters due to the coalescence of the original and extended operations in the same conjugacy class.
wave-vectors that lie at the edge of the BZ.
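The coalescence mechanism discussed above can be checked with a few lines of computer algebra. The snippet below is an illustrative sketch (not taken from the paper or its SI): it composes Seitz operators \(\{R|\mathbf{t}\}\) as in Eq. (1) and shows that conjugating inversion by the nonsymmorphic generator \(\bar{C}_{4z}=\{C_{4z}|\mathbf{t_{x}}/2\}\) produces a full PLV translation along \(y\), i.e. it connects \(i\) with \(i^{\prime}\) once the cell is doubled.

```python
import numpy as np

# Illustrative sketch (not the paper's code): 2D in-plane Seitz operators
# {R | t} acting as x -> R x + t, with translations in units of t_x, t_y.

def compose(A, B):
    """{R_A|t_A} . {R_B|t_B} = {R_A R_B | R_A t_B + t_A}."""
    RA, tA = A
    RB, tB = B
    return RA @ RB, RA @ tB + tA

def conjugate(B, A):
    """A^{-1} . B . A, as in Eq. (1)."""
    RA, tA = A
    A_inv = (np.linalg.inv(RA), -np.linalg.inv(RA) @ tA)
    return compose(compose(A_inv, B), A)

def reduce_t(t, extended=False):
    """Reduce a translation modulo the lattice: (t_x, t_y) for the original
    cell, (t_x, 2 t_y) for the cell doubled along y."""
    period = np.array([1.0, 2.0]) if extended else np.array([1.0, 1.0])
    return np.mod(np.round(t, 9), period)

inversion = (-np.eye(2), np.zeros(2))
# Nonsymmorphic generator C4z_bar = {C4z | t_x/2}, with C4z: (x, y) -> (-y, x).
C4z_bar = (np.array([[0.0, -1.0], [1.0, 0.0]]), np.array([0.5, 0.0]))

R, t = conjugate(inversion, C4z_bar)
print(R)                              # rotation part: still inversion (-1)
print(reduce_t(t, extended=False))    # [0, 0]: the same element i
print(reduce_t(t, extended=True))     # [0, 1]: the distinct element i'
```

The same composition rules reproduce the statement that, in the extended group, a nonsymmorphic operation and its primed partner fall into one conjugacy class, while symmorphic classes split.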
Going back to the discussion on CeRh\({}_{2}\)As\({}_{2}\), using the original group structure modulo PLV (the little-group at the \(\Gamma\) point), we can classify the order parameters in the charge, spin, and superconducting sectors in terms of irreps, which are either even or odd under parity (details in SI). In contrast, if we classify the order parameters according to the extended group (see details in SI), we find orders associated with irreps labeled as \(E_{im}\) having two components that transform differently under inversion. Curiously, the magnetic order with in-plane moments discussed above belongs to the irrep \(E_{3m}\), but it cannot couple to any superconducting order through the desired linear-quadratic term (see discussion in SI). On the other hand, the magnetic order with moments along the \(c\)-axis is associated with the irrep \(A_{1u}\), which is odd parity, and can couple linearly to quadratic terms in the superconducting order parameter if the latter is associated with irreps \(E_{3m}\) or \(E_{4m}\). The latter type of coupling has strong implications for the phenomenology. Magnetism develops as an improper order, and its temperature dependence does not follow the standard \(\propto\sqrt{T}\) behaviour expected for a leading order parameter within Landau theory, as depicted in Fig. 1. Improper orders onset with a weaker temperature dependence \(\propto T\), which might make their experimental observation more difficult. A slow onset for the improper magnetic order in CeRh\({}_{2}\)As\({}_{2}\) is consistent with the apparent magnetic critical temperature being lower than the superconducting critical temperature [23; 24]. These results should trigger a more in-depth investigation to determine the temperature at which magnetism emerges. If magnetism onsets exactly at \(T_{c}\), our analysis suggests that we have strong constraints on the nature of both magnetism and superconductivity. It necessarily indicates that the magnetic moments are aligned along the z-axis and that superconductivity is of a very unusual type: a chiral pair density wave (PDW) with two components of opposite parity.
Classifying order parameters in real space, taking into account nonsymmorphicity by dealing with extended symmetry groups, allows us to systematically account for modulated unconventional orders and novel types of couplings
Figure 3: Representative orders. Inversion-odd \(P_{z}\) polar (a) and \(M_{z}\) layer antiferromagnet (b) orders, adhering to the original unit cell (green). (c) Two components of \(E_{3m}\) in-plane magnetic order and (d) two components of \(E_{3m}\) superconducting order in the enlarged unit cell (magenta). The pairing wave function is intrasublattice but antisymmetric between sites, resulting in a modulation in the projected x-y plane (\(+/-\), marked by green and orange respectively). In (c) and (d) the components are related to each other via a 90 degree rotation, but one component is odd-parity, while the other is even.
between them. Nonsymmorphic symmetries are related to intrinsically complex crystalline structures. Nonsymmorphic crystals necessarily have sublattice structures that add to the already complex set of charge, orbital, and spin degrees of freedom necessary to faithfully describe the electronic structure of most materials. The richness in the number of internal degrees of freedom is nevertheless still strongly constrained by crystalline symmetries, and the investigation of the development of improper orders can lead to very refined information about the nature of order parameters developing in complex materials. We expect that these findings can be harnessed for a better characterization of ordered phases of matter and in future studies of improper orders of functional materials.
A.S. is grateful for financial support from the Swiss National Science Foundation (SNSF) through Division II (No. 184739). A.R. also acknowledges financial support from the Swiss National Science Foundation (SNSF) through an Ambizione Grant No. 186043. |
2303.18108 | A Ramsey apparatus for proton spins in flowing water | We present an apparatus that applies Ramsey's method of separated oscillatory
fields to proton spins in water molecules. The setup consists of a water
circuit, a spin polarizer, a magnetically shielded interaction region with
various radio frequency elements, and a nuclear magnetic resonance system to
measure the spin polarization. We show that this apparatus can be used for Rabi
resonance measurements and to investigate magnetic and pseudomagnetic field
effects in Ramsey-type precision measurements with a sensitivity below 100 pT. | Ivo Schulthess, Anastasio Fratangelo, Patrick Hautle, Philipp Heil, Gjon Markaj, Marc Persoz, Ciro Pistillo, Jacob Thorne, Florian M. Piegsa | 2023-03-31T14:56:02Z | http://arxiv.org/abs/2303.18108v2 | # A Ramsey apparatus for proton spins in flowing water
###### Abstract
We present an apparatus that applies Ramsey's method of separated oscillatory fields to proton spins in water molecules. The setup consists of a water circuit, a spin polarizer, a magnetically shielded interaction region with various radio frequency elements, and a nuclear magnetic resonance system to measure the spin polarization. We show that this apparatus can be used for Rabi resonance measurements and to investigate magnetic and pseudomagnetic field effects in Ramsey-type precision measurements with a sensitivity below 100 pT.
## I Introduction
The nuclear magnetic resonance method of Rabi [1; 2] and Ramsey's technique of separated oscillatory fields [3; 4; 5] have been applied very successfully in a variety of different scientific experiments. They apply constant and time varying magnetic fields to manipulate the spins of probe particles.
Ramsey's technique allows one to determine the Larmor precession frequency of the spin in a magnetic field \(B_{0}\). In a first step, the spin-polarized particles are flipped by an oscillating field \(B_{1}\) into the plane orthogonal to \(B_{0}\). Then, they can precess for a certain time until they are flipped again by a second oscillating \(B_{1}\) field. Usually, the frequency of the oscillating fields is scanned around the resonance while the phases of the two signals are locked. This results in an interference pattern of the spin polarization in the frequency domain. Ramsey's technique can be applied to precisely measure changes in magnetic and pseudo-magnetic fields. It is used in atomic clocks [6; 7], to measure the Newtonian gravitational constant [8], to search for the neutron electric dipole moment [9; 10; 11], to search for dark matter [12; 13], new particles and interactions [14], and others. It was also applied in the measurement of the neutron magnetic moment [15]. In the latter experiment, the technique served to compare resonance frequencies of free neutrons and protons in water passing through one apparatus. The application of resonance techniques with flowing water had been previously demonstrated by Sherman [16].
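As an illustration of the interference pattern described above, the following minimal sketch (our own illustration, not the analysis code of this experiment) propagates a spin-1/2 through two ideal resonant \(\pi/2\) pulses separated by a free-precession time \(T\); the detuning range and the value of \(T\) are assumed example numbers.

```python
import numpy as np

# Minimal sketch: spin-1/2 Ramsey sequence with two ideal pi/2 pulses about x,
# separated by a free-precession time T.  In the frame rotating with the
# oscillating field the spin accumulates a phase delta*T between the pulses,
# where delta is the detuning from the Larmor frequency.

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(spin_op, angle):
    """Rotation operator exp(-i * angle * S) for a spin-1/2 operator S."""
    vals, vecs = np.linalg.eigh(spin_op)
    return vecs @ np.diag(np.exp(-1j * angle * vals)) @ vecs.conj().T

def ramsey_polarization(delta, T):
    """z-polarization after the pi/2 -- free precession -- pi/2 sequence."""
    psi = np.array([1.0, 0.0], dtype=complex)           # spin up along B0
    psi = rotation(sx, np.pi / 2) @ psi                  # first pi/2 pulse
    psi = rotation(sz, delta * T) @ psi                  # free precession
    psi = rotation(sx, np.pi / 2) @ psi                  # second pi/2 pulse
    return float(np.real(psi.conj() @ (2 * sz) @ psi))   # <sigma_z> = -cos(delta*T)

T = 10e-3                                       # 10 ms free precession (assumed)
for delta in np.linspace(-500.0, 500.0, 11):    # detuning in rad/s (assumed)
    print(f"delta = {delta:7.1f} rad/s -> Pz = {ramsey_polarization(delta, T):+.3f}")
```

Scanning the drive frequency therefore traces out the central Ramsey fringe \(-\cos(\delta T)\), whose width is set by the free-precession time.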
## II Apparatus
Here we present an experimental apparatus with a concept similar to the one used in the neutron magnetic moment measurement. Figure 1 shows a schematic of the setup and Fig. 2 a photo of the full tabletop experiment. The total length is about 3 meters. The water is circulated through the system using a gear pump.
First, the water passes a polarizer to create a sizable spin polarization of the protons. It then flows through the interaction region, which is magnetically shielded to the surrounding by mu-metal. In that region, the spins interact with the magnetic field \(B_{0}\) and can be manipulated with spin-flip coils. There are additional temperature and magnetic field sensors. Finally, the spin polarization is measured and analyzed employing nuclear magnetic resonance (NMR) techniques. No guiding fields for the proton spins are required between the elements since their fringe fields are sufficient.
### Water Circuit
To perform a spin precession experiment, the demineralized water that contains the hydrogen protons circulates in a water circuit. We use a rigid glass capillary with an inner diameter of \(d=4\) mm and a length of 1500 mm to guide the water through the interaction region. To connect the other elements we use plastic tubes (PU, PVC, and PTFE) of various diameters. We use a
Figure 1: Schematic of the experimental setup where the protons in water (H\({}_{2}\)O) are pumped from the water reservoir of the chiller. They are first polarized in a polarizer (red) and then enter the interaction region surrounded by a double-layer mu-metal shield. Two spin-flip coils are shown in green and the magnetic field direction is indicated in blue. The spin polarization is analyzed using a nuclear magnetic resonance (NMR) system (purple). The schematic is not to scale. |
2309.13760 | The effect of Jupiter on the CAI storage problem | By studying the distribution of calcium-aluminium-rich inclusions (CAIs) that
are embedded within meteorites, we can learn about the dynamical history of the
protoplanetary disk from which our Solar System formed. A long-standing problem
concerning CAIs is the CAI storage problem. CAIs are thought to have formed at
high temperatures near the Sun, but they are primarily found in carbonaceous
chondrites, which formed much further out, beyond the orbit of Jupiter.
Additionally, radial drift of CAI particles should have removed them from the
solar protoplanetary disk several million years before the parent bodies of
meteorites in which they are encountered would have accreted. We revisit a
previously suggested solution to the CAI storage problem by Desch, Kalyaan, and
Alexander which proposed that CAIs were mixed radially outward through the disk
and subsequently got trapped in a pressure maximum created by Jupiter's growing
core opening a planet gap. Our aim is to investigate whether their solution
still works when we take into account the infall phase during which the disk
builds up from the collapse of a molecular cloud core. We build a 1D numerical
code in Python using the DISKLAB package to simulate the evolution of the solar
protoplanetary disk, starting with a collapsing molecular cloud. We find that
outward transport of CAIs during the infall phase is very efficient, possibly
mixing them all the way into the far outer disk. Subsequent inward radial drift
collects CAIs in the pressure maximum beyond Jupiter's orbit while draining the
inner disk, roughly reproducing parts of the result by Desch et al. By
introducing CAI formation so early, abundances out to 100 AU remain
significant, possibly not consistent with some meteoritic data. It is possible
to create a disk that does not expand as far out and also does not push CAIs as
far out by using a very slowly rotating cloud. | Stefan Jongejan, Carsten Dominik, Cornelis Dullemond | 2023-09-24T21:41:35Z | http://arxiv.org/abs/2309.13760v1 | # The effect of Jupiter on the CAI storage problem
###### Abstract
Context:Meteorites preserve an imprint of conditions in the early Solar System. By studying the distribution of calcium-aluminium-rich inclusions (CAIs) that are embedded within meteorites, we can learn about the dynamical history of the protoplanetary disk from which our Solar System formed. A long-standing problem concerning CAIs is the CAI storage problem. CAIs are thought to have formed at high temperatures near the Sun, but they are primarily found in carbonaceous chondrites, which formed much further out, beyond the orbit of Jupiter. Additionally, radial drift of CAI particles should have removed them from the solar protoplanetary disk several million years before the parent bodies of meteorites in which they are encountered would have accreted.
Aims:We revisit a previously suggested solution to the CAI storage problem by Desch, Kalyaan, and Alexander, which proposed that CAIs were mixed radially outward through the disk and subsequently got trapped in a pressure maximum created by Jupiter's growing core opening a planet gap. Our aim is to investigate whether their solution still works when we take into account the infall phase during which the disk builds up from the collapse of a molecular cloud core.
Methods:We build a 1D numerical code in Python using the DISKLAB package to simulate the evolution of the solar protoplanetary disk, starting with a collapsing molecular cloud. CAIs are created in the model by thermal processing of solar nebula composition dust, and subsequently transported through the disk by turbulent diffusion, radial drift and advection by the gas.
Results:We find that outward transport of CAIs during the infall phase is very efficient, possibly mixing them all the way into the far outer disk. Subsequent inward radial drift collects CAIs in the pressure maximum beyond Jupiter's orbit while draining the inner disk, roughly reproducing parts of the result by Desch et al. By introducing CAI formation so early, abundances out to 100 AU remain significant, possibly not consistent with some meteoritic data. It is possible to create a disk that does not expand as far out and also does not push CAIs as far out by using a very slowly rotating cloud.
Conclusions:
## 1 Introduction
A wealth of information about the conditions in the early Solar System can be obtained through the study of meteorites. These celestial objects are thought to have undergone little change since their formation over 4.5 billion years ago, and as such they preserve an imprint of the conditions in the solar protoplanetary disk. The distribution of calcium-aluminium-rich inclusions (CAIs) in present-day meteorites is one such piece of information that can tell us something about the dynamical history of our Solar System. These structures were not expected to survive long enough to end up in any meteorite parent bodies at all, yet somehow they must have survived the harsh conditions at the time of their birth for long enough to do so. To add to this mystery, they are also predominantly found in locations far away from the Sun, in whose vicinity they must have originally formed. These two issues together constitute the CAI storage problem. In this paper we present the results of a numerical model of the early Solar System, which attempts to explain the observed distribution of CAIs in meteorites. Our model builds on the solution previously proposed by Desch et al. (2018). That paper attempts to construct a detailed, fine-tuned model for the formation and outward mixing of CAIs, with the goal to fit and address many aspects of the record that is available to us in the form of meteoritic data. While meteoritic data samples limited regions of the Solar System, it is very detailed and produces an important guide on how to model processes in the young Solar System. Our goal in the present study is not to attempt a reconstruction of the Solar System and the meteoritic record on a similar level of detail as Desch et al. Instead, we perform a calculation with similar assumptions about physical processes in the disk, in particular with similar assumptions about the disk viscosity and the CAI-forming processes, but start the computation earlier by computing the effects of the infall phase from a molecular cloud core. We show that the infall phase can significantly influence the "starting conditions" in the final phase of the circumstellar disk in which the Desch model is anchored.
In this introduction we start by giving a brief overview of the different kinds of meteorites encountered. We then describe CAIs and the CAI storage problem in more detail, before discussing some of the work previously done on this topic.
Meteorites are an important tool for understanding the history of the Solar System and planet formation mechanisms, since they preserve information about the conditions in the solar protoplanetary disk, such as the abundances of their constituent elements, isotope ratios, or temperatures they must have been subjected to. The classification of meteorites into different classes, clans, groups, and subgroups can be a murky topic upon which there is no universal agreement. Here we only present a simplified overview of the different kinds of meteorites. An extensive
modern overview of meteorite classification can for example be found in Weisberg et al (2006).
Broadly speaking, we can distinguish two types of meteorites: achondrites, which are composed of material that underwent melting or differentiation inside a massive parent body, and chondrites, which did not undergo such processes. Using this definition, achondrites encompass not only rocky achondrites, which represent crustal material, but also iron meteorites, which are composed of an iron-nickel alloy thought to come from the core of a differentiated parent object. An intermediate case exists in the form of pallasites, which consist of silicates embedded within an iron matrix, appearing to have formed from a mix of core as well as mantle or crustal material. Another category of achondrite are the primitive achondrites, which only underwent partial melting (making them more closely related to chondrites), or which perhaps did melt, but without solidifying in crystalline form as a result of it. Other than achondrites originating from parent asteroids or comets, a small number of finds are thought to actually have originated on Mars or the Moon.
Chondrites make up the majority of the known meteorite population. Because chondrites did not undergo melting after accreting into a parent body, they are the most primitive kind of meteorite, most closely preserving an imprint of the conditions in the solar protoplanetary disk. There are three main classes of chondrites: ordinary chondrites (OCs), which are the most common type, enstatite chondrites (ECs), which are rich in the mineral enstatite, and carbonaceous chondrites (CCs), which have relatively high carbon abundances. Some smaller classes also exist, as well as ungrouped individual meteorites that might represent the first kind of an entirely new class. Each of the main chondrite classes can be further subdivided into various groups, which are generally thought to sample separate meteorite parent bodies. A defining feature of chondrites is that chondrules are found embedded within them. Chondrules are millimeter-sized spherical droplets of igneous rocks that must have melted at high temperatures before they ended up inside their parent meteorites. This is a first hint that material that was processed at high temperatures must somehow have been transported out to colder regions of the solar protoplanetary disk (Brownlee et al. 2006).
The formation time of chondrite parent bodies can be determined indirectly in several ways. Radiometric dating of components such as chondrules, which must have formed before the parent body itself, provides upper limits on the age of the parent body. Similarly, lower limits can be determined through radiometric dating of minerals that are known to have formed only after the parent body itself accreted. Finally, estimates of the maximum temperature reached by the parent body provide model-dependent limits on the accretion time. Using these constraints, enstatite chondrite parent bodies are estimated to have formed first, around 1.8 Myr after the formation of the Solar System, followed by the ordinary chondrite parent bodies after around 2.1 Myr. Carbonaceous chondrite parent bodies formed last. The accretion time of the parent bodies for the different groups the CCs are subdivided in is more spread out over time, but ranges from 2.4 to 4.1 Myr after the formation of the Solar System (Desch et al. 2018). Constraints on meteorite ages also make it possible to put constraints on the timing of planet formation processes.
Different kinds of chondrites can be spectroscopically matched to certain groups of asteroids that are found in the asteroid belt today. This way, it can also be determined roughly what heliocentric distance these meteorites originated from. Asteroids associated with enstatite chondrites are found closest to the Sun, around 2 AU. Those linked to ordinary chondrites dominate the region between 2 and 2.5 AU, while the asteroids similar to carbonaceous chondrites are commonly found between 2.5 and 4 AU (Desch et al. 2018). While this does not necessarily mean that the meteorite parent bodies also originally accreted at these locations, at least this ordering of chondrite types by current heliocentric distance seems to match that of their original formation locations. Evidence for this is for example provided by their water content. While enstatite chondrites contain virtually no water, ordinary chondrites do contain small amounts, and also show evidence for aqueous alteration. Carbonaceous chondrites on the other hand are water-rich (Alexander et al. 2012). Since condensation of water-rich minerals is only possible in the relatively cool region of the disk beyond the snow line, this indicates that carbonaceous chondrites formed relatively far from the Sun.
Warren (2011) discovered that meteorites can be divided into two distinct clusters based on their isotopic abundances. Carbonaceous chondrites are enriched in certain neutron-rich isotopes, such as \({}^{50}\)Ti or \({}^{54}\)Cr, whereas ordinary and enstatite chondrites, as well as most achondrites (collectively referred to as the non-carbonaceous or NC meteorites), are relatively depleted in these isotopes. This NC-CC dichotomy implies that the two clusters of meteorites formed from two isotopically distinct reservoirs of material. Furthermore, because the parent bodies of the meteorites in which this dichotomy is observed are known to have accreted several million years apart, these two reservoirs must also have coexisted for several million years without significant mixing of material between them. A likely explanation for the separation of the two reservoirs is the formation of Jupiter, which opened a planet gap in the solar protoplanetary disk that prevented material exchange between the reservoirs (Kruijer et al. 2017). A good candidate to explain where the isotopic differences between the NC and CC reservoirs originated from in the first place is heterogeneities in the Solar System's parent molecular cloud. It is possible this cloud was not homogeneous in composition, or that its composition changed over time. Because infall from the molecular cloud affects different regions of the protoplanetary disk in different ways, it is possible that material containing neutron-rich isotopes was primarily added to the CC reservoir, or that material depleted in these isotopes was added primarily to the NC reservoir (Nanne et al. 2019). Indeed, the presence of daughter isotopes of \({}^{26}\)Al in meteorites, a radioactive isotope with a half-life of only 0.72 Myr, is evidence that the Solar System's molecular cloud must have been recently polluted with radioactive isotopes, possibly originating from a supernova or Wolf-Rayet stellar winds, something that could explain heterogeneities in either space or time.
The present-day coexistence of asteroids linked to both NC and CC meteorites in the asteroid belt can be explained as a result of Jupiter's migration after the asteroid parent bodies accreted, first in- and then outward. This would scatter the different asteroid populations, after which they ended up together in the same asteroid belt (Walsh et al. 2011).
Calcium-aluminium-rich inclusions, or CAIs, are small solid structures that are found as inclusions in certain types of meteorites. Often irregularly shaped, they range in size from micrometers to about a centimeter. They are composed of minerals such as corundum (aluminium oxide) or perovskite (calcium titanium oxide), which are enriched in refractory elements such as calcium and aluminium, and which condense at high temperatures (T \(\approx\) 1400 K), implying that CAIs formed near the Sun where such temperatures would be achieved (Grossman 1972). CAIs are also the oldest dated solids, with a mean age of approximately 4567.30 \(\pm\) 0.16 Myr (Connelly et al. 2012). Because of this, and because their constituent minerals are predicted to be
the first to condense in a cooling gas of solar nebula composition, the age of CAIs is often equated with the age of the Solar System itself. The time of (initial) CAI formation is therefore also the time relative to which the accretion times of meteorite parent bodies were defined previously.
CAIs are primarily found embedded in carbonaceous chondrites, in which they typically make up a couple percent of the total volume. In ordinary and enstatite chondrites on the other hand, CAI abundances are easily an order of magnitude lower, typically less than a tenth of a percent of the total volume. This distribution presents us with two serious problems that together make up the CAI storage problem.
The first problem is that CAIs are thought to have formed at high temperatures, which are only obtained close to the Sun. As we have seen, however, the carbonaceous chondrites in which they ended up formed further out in the solar protoplanetary disk, likely beyond the orbit of Jupiter in the CC reservoir. Clearly CAIs must have been transported outward through the disk to end up there, but this raises the question of why they did not also end up in ordinary and enstatite chondrites, which formed at intermediate heliocentric distances.
The second problem is related to the accretion time of carbonaceous chondrite parent bodies. Because the gas in a protoplanetary disk normally experiences an outward pressure-gradient force which partially supports it against gravitational collapse, its orbital velocity can be slightly less than Keplerian. Solid particles do not experience such a pressure-gradient force, and hence have to orbit the central star at the Keplerian velocity. The velocity difference between gas and dust leads to a drag force on the dust particles1 that robs them of angular momentum and causes them to slowly drift inward, spiralling towards the star over time. This is what is called radial drift of dust particles (Weidenschilling 1977). For particles the size of large CAIs (in which most of the CAI mass is contained) that form in the inner disk near the Sun, this radial drift velocity is high enough that they are expected to all vanish into the Sun on a time scale of order \(10^{4}\) years. This is evidently much shorter than the time CAIs would need to remain in the disk in order to be incorporated into carbonaceous chondrite parent bodies, the last of which did not accrete until over 4 Myr after CAI formation.
Footnote 1: Technically, smaller dust grains are so well coupled to the gas that they simply get dragged along with the same orbital velocity. But because this velocity is sub-Keplerian, they will drift inward all the same.
The CAI storage problem can therefore be summarized as the question how CAIs managed to survive long enough in the disk to end up in any meteorites in the first place, and why they then ended up preferentially in carbonaceous chondrites, which of all meteorite types formed the furthest away from the location where CAIs themselves formed.
This problem is closely related to the problem of the NC-CC dichotomy. CAIs are enriched in many of the elements that are also found to be enriched in the CC reservoir in general. However, the NC-CC dichotomy has been found to also extend to elements which aren't present in CAIs, such as nickel (Nanne et al. 2019). This means that simply adding CAIs to a reservoir of otherwise NC composition does not lead to the CC reservoir.
Over time, a number of mechanisms have been proposed that could explain how CAIs and other dust species can be transported outward in a protoplanetary disk. Cuzzi et al (2003) showed that inward radial drift of CAIs can be overcome by turbulent diffusion, a process in which turbulent motions of the gas within a protoplanetary disk redistribute solid particles by advection, at least part of which would be in the outward direction. This allows CAIs to survive in the disk on time scales of order \(10^{6}\) years. Keller and Gail (2003) performed both calculations and numerical simulations that showed that while the accretion stream in a protoplanetary disk is normally pointed inward, flows near the disk midplane can actually point outward, thereby providing another mechanism of radial outward transport of material called meridional flow. Boss et al (2012) showed that the gravitational instability, an instability that occurs when a disk becomes so massive that its own self-gravity can no longer be neglected, can lead to a rapid redistribution of mass both in- and outward.
Cuzzi et al (2003) also proposed the CAI Factory model, which is a simplified picture of the CAI formation environment. The CAI Factory consists of a region at a nearly constant temperature, where the minerals CAIs are composed of can exist as solids, but which is too hot to allow for other silicates, such as chondrules, in solid form. The radial extent of the CAI Factory changes over time, the outer boundary moving inward as the disk cools.
Desch, Kalyaan, and Alexander (2018) proposed a solution to the CAI storage problem, to which we will hereafter simply refer as the DKA18 model. They constructed a 1D hydrodynamics code that builds on Cuzzi's CAI Factory model as well as the suggestion of Kruijer et al (2017) that a planet gap opened by Jupiter prevented material exchange between the regions interior and exterior to the gap, previously mentioned in the context of the NC-CC dichotomy. Their simulation begins at \(t=0\) with a disk that is seeded with dust of solar nebula composition. Viscous heating creates a region near the Sun, the CAI Factory, in which the temperature reaches 1400 K. Solar nebula composition dust that enters this region is thermally processed, and a certain fraction of it is instantaneously converted into CAIs of one particular size. This conversion of dust continues for several \(10^{5}\) years. In the meantime, the disk is viscously evolving. CAIs are transported out of the CAI Factory, both into the Sun and outward into the disk by the effects of turbulent diffusion and meridional flow. A small part of the CAIs diffused past a heliocentric distance of 3 AU in this way, where Jupiter's core of 30 \(M_{\oplus}\) is then assumed to form 0.6 Myr into the simulation. As the planet grows to its full size over the course of the next 4 Myr by accreting gas from its surroundings, it opens up a gap in the disk, where the surface density of the gas is significantly reduced. As a result of this, there exists a region just beyond the planet location where the gas density necessarily increases again in the outward direction. This means that the pressure-gradient force here points inward instead of outward as it normally does, and that gas in this region will be orbiting the Sun with super-Keplerian velocities in order to balance itself against gravitational collapse. This also reverses the sign of dust radial drift, causing CAIs to be transported outward. Some distance beyond the planet, the gas surface density and pressure reach a maximum before continuing their normal outward decrease. In this pressure bump, the gas orbits with exactly the Keplerian velocity, removing any drag force on solid particles and therefore the conditions for dust radial drift. This thus creates a situation in which CAIs in the vicinity always drift in the direction of the pressure bump, at which location they can remain in a stable orbit for millions of years, until the moment they are incorporated into the accreting meteorite parent bodies. In the meantime, CAIs in the inner disk continue to drift into the Sun unimpeded, depleting the formation location of ordinary and enstatite chondrites. At the end of the simulation, the DKA18 model predicts that all remaining CAIs in the disk are concentrated around the pressure bump behind
Jupiter's orbit, where their abundance peaks at about 6% of all solids.2
Footnote 2: Other than just calculating the abundances of CAIs and refractory elements, Desch, Kalyaan, & Alexander also demonstrate that disk conditions consistent with the formation of chondrites matching those abundances as well as properties such as water content emerge from contextual evidence such as \({}^{54}\)Cr anomalies and radiometric dating. They also calculate the particle size concentrated by turbulence in each chondrite’s formation location and find an excellent match with observed chondrule sizes.
While it seems that the DKA18 model conclusively solves the CAI storage problem, there are some issues with it that could have a significant impact on the results. The most important of these issues is that the model starts with a fully formed star plus disk system, within which CAI formation is then initiated. The build-up of the solar protoplanetary disk from a parent molecular cloud core is neglected. It is therefore unclear what effect the infall phase would have on the timing of CAI formation, their dynamical evolution and final abundance profile, or indeed whether the solution of Jupiter keeping CAIs in place would still work in the first place. While Yang and Ciesla (2012) did perform disk simulations in which the effects of the infall phase were taken into account, showing that CAIs that formed during the infall phase could be preserved in primitive bodies that accreted in the outer disk, they did not address the question why CAIs would be depleted in the inner disk.
## 2 Methods
### Disklab
Our model was programmed in Python using DISKLAB, a package developed by Cornelis Dullemond and Til Birnstiel3, which contains many basic functions for setting up disk models, calculating basic quantities such as surface densities and temperatures, and evolving these models over time. While DISKLAB contains methods for the vertical structure of disks and for making full two-dimensional models, only one-dimensional radial (and hence rotationally symmetric) models were used for this project.
Footnote 3: [https://github.com/dullemond/DISKLAB](https://github.com/dullemond/DISKLAB) – User ID: dullemond & Birnstiel - Access granted upon request
Setting up any model in DISKLAB begins by calling the DiskRadialModel class, which sets up a grid of discrete radial coordinates at which disk quantities will be calculated, as well as basic stellar parameters such as mass and luminosity, all of which can be freely modified. A surface density profile for the disk gas can then be set by hand, but usually the easier approach is to begin with a basic model such as a powerlaw disk and then modify it according to your wishes.
In addition to the (main) gas component of the disk, any number of solid components (dust species) can be added to the model. This is done by specifying a certain grain size (or alternatively, a Stokes number) and grain density, as well as the surface density for that component, which is usually the gas surface density multiplied by some dust-to-gas ratio. Dust grains of different sizes would have to be added to the model as separate components, even if they represent dust of the same chemical composition. It is not possible to follow individual dust grains through the disk with this method.
An important property of a protoplanetary disk is its (mid-plane) temperature, since this directly influences the isothermal sound speed,
\[c_{s}=\sqrt{\frac{k_{B}T}{\mu m_{p}}}, \tag{1}\]
(with \(T\) the temperature, \(k_{B}\) the Boltzmann constant, \(\mu\) the mean molecular weight and \(m_{p}\) the proton mass) and hence also important parameters such as the viscosity \(\nu\) through
\[\nu=\alpha c_{s}h. \tag{2}\]
Here the Shakura-Sunyaev \(\alpha\)-parameter quantifies the strength of the angular momentum transport due to turbulence (Shakura & Sunyaev 1973), and \(h\) is the vertical scale height of the disk. The midplane temperature due to stellar irradiation \(T_{\rm irr}\) is calculated separately from the temperature due to viscous heating \(T_{\rm visc}\). The two temperatures are then combined as
\[T=\left(T_{\rm irr}^{4}+T_{\rm visc}^{4}\right)^{1/4}. \tag{3}\]
The irradiation temperature itself is calculated by equating the heating rate due to irradiation by the central star \(Q_{\rm irr}(r)\) with the cooling rate \(Q_{\rm cool}(r)\):
\[Q_{\rm irr}(r)=2\phi(r)\frac{L_{*}}{4\pi r^{2}}, \tag{4}\]
\[Q_{\rm cool}(r)=2\sigma_{\rm SB}T_{\rm eff}(r)^{4}, \tag{5}\]
with \(\phi(r)\) the flaring angle of the disk, \(L_{*}\) the stellar luminosity, \(\sigma_{\rm SB}\) the Stefan-Boltzmann constant and \(T_{\rm eff}\) the effective temperature at the surface of the disk. This surface temperature is related to the midplane temperature \(T_{\rm irr}\) as
\[T_{\rm irr}=\left(\frac{T_{\rm eff}^{4}}{2}\right)^{1/4}, \tag{6}\]
where the factor 2 comes from the fact that of all the stellar radiation intercepted at the disk surface, only half is re-emitted into the disk where it can heat up the midplane. (Chiang & Goldreich 1999) (Dullemond et al. 2001) Combining Equations (4), (5) and (6) then leads to a final expression for the midplane temperature due to irradiation:
\[T_{\rm irr}=\left(\frac{\phi(r)}{2\sigma_{\rm SB}}\frac{L_{*}}{4\pi r^{2}} \right)^{1/4}. \tag{7}\]
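A direct transcription of Eq. (7) into code is straightforward. The snippet below is a minimal sketch in which the flaring angle \(\phi=0.05\) and the solar luminosity are assumed, illustrative values rather than the parameters of our actual model.

```python
import numpy as np

sigma_SB = 5.670374e-5   # erg cm^-2 s^-1 K^-4
L_sun    = 3.828e33      # erg s^-1
au       = 1.49598e13    # cm

def T_irr(r, L_star=L_sun, phi=0.05):
    """Irradiation midplane temperature of Eq. (7); phi is an assumed constant flaring angle."""
    return (phi / (2.0 * sigma_SB) * L_star / (4.0 * np.pi * r**2)) ** 0.25

r = np.array([0.1, 1.0, 10.0, 100.0]) * au
print(T_irr(r))   # roughly 500 K at 0.1 au down to ~15 K at 100 au for these assumptions
```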
Similarly, the viscous temperature \(T_{\rm visc}\) can be found by equating the viscous heating rate \(Q_{\rm visc}\) with the cooling rate \(Q_{\rm cool}\):
\[Q_{\rm visc}(r)=\frac{9}{4}\Sigma_{g}(r)\nu(r)\Omega_{K}(r)^{2}, \tag{8}\]
\[Q_{\rm cool}(r)=2\sigma_{\rm SB}T_{\rm eff}(r)^{4}\left(1-e^{-2\tau_{\rm Ross}} \right), \tag{9}\]
where \(\Sigma_{g}\) is the gas surface density, \(\nu\) the viscosity, \(\Omega_{K}=\sqrt{GM_{*}/r^{3}}\) the Kepler frequency and \(\tau_{\rm Ross}\) the Rosseland optical depth, which depends on the dust surface density \(\Sigma_{d}\) and Rosseland opacity \(\kappa_{d,\rm Ross}\) as
\[\tau_{\rm Ross}=\Sigma_{d}\kappa_{d,\rm Ross}. \tag{10}\]
The relation used between the effective (surface) temperature \(T_{\rm eff}\) and the midplane temperature \(T_{\rm visc}\) is now
\[T_{\rm visc}=\left(\frac{1}{2}\tau_{\rm Ross}+1\right)^{1/4}T_{\rm eff}. \tag{11}\]
Combining Equations (8), (9), (10) and (11) then leads to an expression for the viscous temperature \(T_{\rm visc}\):
\[T_{\rm visc}=\left(\frac{9}{8\sigma_{\rm SB}}\frac{\frac{1}{2}\tau_{\rm Ross}+1}{1-e^{-2\tau_{\rm Ross}}}\Sigma_{g}(r)\nu(r)\Omega_{K}(r)^{2}\right)^{1/4}. \tag{12}\]
Because this expression depends on the viscosity \(v\) as well as the Rosseland opacity \(\kappa_{d,\rm Ross}\), which in turn depend on the temperature themselves, iteration is required. DISKLAB uses Brent's method to find the roots of this equation and solve for \(T_{\rm visc}\). Before this can be done however, the Rosseland mean opacities for the dust species, \(\kappa_{d,\rm Ross}\) in Equation (10), have to be specified. Ideally this is done by calculating them from the frequency-dependent opacities of the dust species that can be specified, but it is also possible to read them from a user-provided table of opacities as a function of density and temperature, to use the Bell & Lin opacity model (Bell & Lin 1994), or to simply specify the value to be used at each radial grid point.
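The self-consistent solution of Eq. (12) can be sketched as follows. In this minimal example (not DISKLAB itself), the temperature-dependent opacity law, the value of \(\alpha\), the dust-to-gas ratio and the surface density are all illustrative assumptions; the root of \(T^{4}-\mathrm{RHS}(T)\) is bracketed and found with Brent's method, mirroring the iteration described above.

```python
import numpy as np
from scipy.optimize import brentq

sigma_SB = 5.670374e-5    # erg cm^-2 s^-1 K^-4
k_B      = 1.380649e-16   # erg K^-1
m_p      = 1.6726e-24     # g
mu       = 2.3
G        = 6.674e-8       # cgs
M_sun    = 1.989e33       # g
au       = 1.49598e13     # cm

def kappa_Ross(T):
    """Toy Rosseland opacity in cm^2 g^-1 (assumed power law, not our model's table)."""
    return 2e-4 * T**2 if T < 160.0 else 5.0

def T_visc(r, Sigma_g, dust_to_gas=0.01, alpha=1e-3, M_star=M_sun):
    Omega = np.sqrt(G * M_star / r**3)

    def residual(T):
        cs = np.sqrt(k_B * T / (mu * m_p))              # Eq. (1)
        nu = alpha * cs * (cs / Omega)                  # Eq. (2), with h = cs/Omega
        tau = dust_to_gas * Sigma_g * kappa_Ross(T)     # Eq. (10)
        rhs = (9.0 / (8.0 * sigma_SB)) * (0.5 * tau + 1.0) \
              / (1.0 - np.exp(-2.0 * tau)) * Sigma_g * nu * Omega**2
        return T**4 - rhs                               # root of Eq. (12)

    return brentq(residual, 1.0, 1.0e5)

print(T_visc(r=1.0 * au, Sigma_g=1700.0))   # of order 200 K for these assumed numbers
```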
Evolving disk models over time is done by solving the time-dependent viscous disk equation:
\[\frac{\partial\Sigma_{g}}{\partial t}-\frac{3}{r}\frac{\partial}{\partial r}\left(\sqrt{r}\frac{\partial(\sqrt{r}\Sigma_{g}\nu)}{\partial r}\right)=\dot{\Sigma}_{g}, \tag{13}\]
where \(\Sigma_{g}\) is the gas surface density, \(v\) is the viscosity and \(\Sigma_{g}\) is a source term for the gas surface density that could correspond to for example infall or photoevaporation. This is a diffusion equation, which DISKLAB can solve using an implicit integration scheme. A consequence of this is that the solutions should be fairly accurate even for large time steps, while this would lead to large errors using an explicit method. It does remain true that multiple smaller time steps lead to more accurate results than a single large one. By default, the boundary condition for Equation (13) is that the gradient of \(\Sigma_{g}\) vanishes at the inner boundary, though this can be changed to instead set it to some custom value.
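To illustrate what such an implicit update looks like in practice, the following sketch advances Eq. (13) (without the source term) by one backward-Euler step on an arbitrary radial grid. It is our own simplified illustration rather than DISKLAB's actual solver, and the zero-gradient condition is applied at both edges for simplicity.

```python
import numpy as np

def implicit_viscous_step(r, sigma, nu, dt):
    """One backward-Euler step of Eq. (13) without the source term.

    r, sigma, nu are 1D arrays on the same radial grid; dt is the time step.
    """
    n = len(r)
    k = np.sqrt(r) * nu                    # flux involves d(sqrt(r)*nu*Sigma)/dr
    r_half = 0.5 * (r[1:] + r[:-1])        # interface radii
    A = np.zeros((n, n))
    rhs = sigma.astype(float).copy()

    for i in range(1, n - 1):
        pref = 3.0 / (r[i] * (r_half[i] - r_half[i - 1]))
        cp = pref * np.sqrt(r_half[i]) / (r[i + 1] - r[i])       # outer interface
        cm = pref * np.sqrt(r_half[i - 1]) / (r[i] - r[i - 1])   # inner interface
        # Row i of (I - dt*L) Sigma^{n+1} = Sigma^n
        A[i, i - 1] = -dt * cm * k[i - 1]
        A[i, i] = 1.0 + dt * (cp + cm) * k[i]
        A[i, i + 1] = -dt * cp * k[i + 1]

    # Zero-gradient boundary conditions on Sigma at both edges.
    A[0, 0], A[0, 1] = 1.0, -1.0
    A[-1, -1], A[-1, -2] = 1.0, -1.0
    rhs[0] = rhs[-1] = 0.0
    return np.linalg.solve(A, rhs)
```

Because the update is implicit, the linear solve remains stable even for time steps far beyond the explicit diffusion limit, which is the property referred to above.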
The evolution of dust components is handled separately. The dynamics of a dust particle are determined by its frictional stopping time \(\tau_{\rm stop}\), caused by the aerodynamic drag on the particle, and the related Stokes number, which is a dimensionless constant describing how well the dust is coupled to the gas in the disk. The expression for this stopping time first of all depends on the mean free path of gas molecules \(\lambda_{\rm mfp}\), which is calculated as
\[\lambda_{\rm mfp}=\frac{1}{n_{\rm gas}\sigma_{H_{2}}}, \tag{14}\]
with \(\sigma_{H_{2}}=2\cdot 10^{-15}\) cm\({}^{2}\) and the gas number density \(n_{\rm gas}\)
\[n_{\rm gas}=\frac{\rho_{\rm gas}}{\mu m_{p}}, \tag{15}\]
where \(\rho_{\rm gas}\) is the gas density, \(\mu=2.3\) is the mean molecular weight and \(m_{p}\) the proton mass. There are two different physical regimes for the drag force on a solid particle, depending on the grain size \(a\) compared to the mean free path \(\lambda_{\rm mfp}\). In the Epstein regime, which holds when \(a<9/4\lambda_{\rm mfp}\), the grain size is small compared to the mean free path, so the fluid is basically a collisionless collection of molecules following a Maxwell velocity distribution. In this regime, the stopping time takes the form
\[\tau_{\rm stop}=\frac{\xi a}{\rho_{\rm gas}v_{\rm th}}, \tag{16}\]
where \(\xi\) is the (interior) grain density and \(v_{\rm th}\), the thermal velocity of the gas particles, is given by
\[v_{\rm th}=\sqrt{\frac{8k_{B}T}{\pi\mu m_{p}}}=\sqrt{\frac{8}{\pi}}c_{s}. \tag{17}\]
In the Stokes regime, which holds when \(a\geq 9/4\lambda_{\rm mfp}\), the dust particles are relatively large, and the gas flows around them as a fluid. In this regime, the precise form of the stopping time depends on the Reynolds number:
\[{\rm Re}=\frac{2a\Delta v}{\nu_{\rm mol}}, \tag{18}\]
where \(\Delta v\) is the velocity difference between the gas and the dust in the disk, and the molecular viscosity \(\nu_{\rm mol}\) is given by
\[\nu_{\rm mol}=\frac{1}{2}v_{\rm th}\lambda_{\rm mfp}. \tag{19}\]
When \({\rm Re}<1\): (Birnstiel et al. 2010)
\[\tau_{\rm stop}=\frac{2\xi a^{2}}{9\nu_{\rm mol}\rho_{\rm gas}}. \tag{20}\]
When \(1<{\rm Re}<800\): (Perets & Murray-Clay 2011)
\[\tau_{\rm stop}=\frac{8\xi a}{3C_{D}\rho_{\rm gas}\Delta v}, \tag{21}\]
where the drag coefficient \(C_{D}\) is given by
\[C_{D}=\frac{24}{{\rm Re}}\left(1+0.27{\rm Re}\right)^{0.43}+0.47\left(1-e^{-0.04{\rm Re}^{0.38}}\right). \tag{22}\]
When \({\rm Re}>800\): (Birnstiel et al. 2010)
\[\tau_{\rm stop}=\frac{6\xi a}{\rho_{\rm gas}\Delta v}. \tag{23}\]
Regardless of which form for the stopping time applies to some particle, the Stokes number is then calculated as
\[{\rm St}=\Omega_{K}\tau_{\rm stop}. \tag{24}\]
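The full switch between these drag regimes, Equations (14) through (24), can be summarized in a short routine like the following. The grain material density, the example input values and the 0.38 exponent adopted in the intermediate-Reynolds drag law are our own choices for illustration, not DISKLAB defaults.

```python
import numpy as np

def stokes_number(a, rho_gas, T, delta_v, omega_k, xi=3.0,
                  mu=2.3, m_p=1.6726e-24, k_b=1.380649e-16, sigma_h2=2e-15):
    """Stopping time and Stokes number of a grain of radius a [cm] (CGS units)."""
    v_th = np.sqrt(8.0 * k_b * T / (np.pi * mu * m_p))         # Eq. (17)
    n_gas = rho_gas / (mu * m_p)                               # Eq. (15)
    lam_mfp = 1.0 / (n_gas * sigma_h2)                         # Eq. (14)
    nu_mol = 0.5 * v_th * lam_mfp                              # Eq. (19)

    if a < 9.0 / 4.0 * lam_mfp:                                # Epstein regime
        t_stop = xi * a / (rho_gas * v_th)                     # Eq. (16)
    else:                                                      # Stokes regime
        Re = 2.0 * a * delta_v / nu_mol                        # Eq. (18)
        if Re < 1.0:
            t_stop = 2.0 * xi * a**2 / (9.0 * nu_mol * rho_gas)         # Eq. (20)
        elif Re < 800.0:
            c_d = (24.0 / Re * (1.0 + 0.27 * Re)**0.43
                   + 0.47 * (1.0 - np.exp(-0.04 * Re**0.38)))           # Eq. (22)
            t_stop = 8.0 * xi * a / (3.0 * c_d * rho_gas * delta_v)     # Eq. (21)
        else:
            t_stop = 6.0 * xi * a / (rho_gas * delta_v)                 # Eq. (23)

    return t_stop, omega_k * t_stop                            # Eq. (24)

# Example: a 2500 micron CAI at rough 3 AU midplane conditions (illustrative only)
print(stokes_number(a=0.25, rho_gas=1e-11, T=150.0, delta_v=3e3, omega_k=3.8e-8))
```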
With the Stokes number determined, we can then see how the dust evolves over time. First of all, as a result of the drag force on the dust caused by the velocity difference \(\Delta v\) with the gas, the dust undergoes radial drift with a velocity \(v_{d}\) given by
\[v_{d}=\frac{1}{1+{\rm St}^{2}}\left(v_{r}+{\rm St}\frac{c_{s}^{2}}{\Omega_{K} r}\frac{d{\rm ln}p}{d{\rm ln}r}\right), \tag{25}\]
where the last term represents the pressure gradient in the gas, and the radial velocity of the gas itself \(v_{r}\) is given by
\[v_{r}=-\frac{3}{\sqrt{r}\Sigma_{g}}\frac{\partial(\sqrt{r}\Sigma_{g}\nu)}{ \partial r}. \tag{26}\]
The full time-dependent evolution of the dust includes both this radial drift and mixing due to the turbulent motions in the gas, which drags the dust along:
\[\frac{\partial\Sigma_{d}}{\partial t}+\frac{1}{r}\frac{\partial\left(r\Sigma _{d}v_{d}\right)}{\partial r}-\frac{1}{r}\frac{\partial}{\partial r}\left(rD_ {d}\Sigma_{g}\frac{\partial}{\partial r}\left(\frac{\Sigma_{d}}{\Sigma_{g}} \right)\right)=\dot{\Sigma}_{d}. \tag{27}\]
Here \(\Sigma_{d}\) is the dust surface density and \(\dot{\Sigma}_{d}\) is a source term, analogously to Equation (13) for the gas. \(D_{d}\) is a diffusion coefficient given by
\[D_{d}=\frac{1}{\mathrm{Sc}}\frac{\nu}{1+\mathrm{St}^{2}}, \tag{28}\]
where the Schmidt number \(\mathrm{Sc}\) is defined as
\[\mathrm{Sc}=\frac{\nu}{D_{g}}, \tag{29}\]
where \(D_{g}\) is the gas turbulent diffusivity. Just as Equation (13), Equation (27) is a diffusion equation that is solved in DISKLAB using implicit integration, which means it should be stable even when larger time steps are used.
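For reference, Equations (25) and (28) amount to the following short helper, written for NumPy arrays over the radial grid. The function name and the default Schmidt number of unity are our own assumptions, not DISKLAB's interface.

```python
import numpy as np

def dust_transport_coefficients(v_gas, St, c_s, omega_k, r, dlnp_dlnr, nu, Sc=1.0):
    """Dust radial drift velocity (Eq. 25) and dust diffusivity (Eq. 28).

    All inputs may be arrays over the radial grid; with Sc = 1 the dust
    diffusivity reduces to the gas turbulent diffusivity for St << 1.
    """
    v_dust = (v_gas + St * c_s**2 / (omega_k * r) * dlnp_dlnr) / (1.0 + St**2)  # Eq. (25)
    D_dust = nu / (Sc * (1.0 + St**2))                                          # Eq. (28)
    return v_dust, D_dust
```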
Importantly for this project, DISKLAB can also include the effects of infall from a molecular cloud into the model. To this end, it follows Hueso & Guillot (2005) in combining the models of Shu (1977) and Ulrich (1976), which handle the radial infall and the effect of rotation, respectively. This model starts with a molecular cloud core of mass \(M_{c}\) that is assumed to be isothermal with temperature \(T_{c}\), spherically symmetric with radius \(R_{c}\), and rotating as a solid body with rotation rate \(\Omega_{c}\). This cloud then starts to collapse from the inside out as an expansion wave propagates outward with the sound speed \(c_{s}\), causing every shell of mass it passes through to collapse onto the star+disk system. The mass accretion rate is assumed to remain constant during this phase:
\[\dot{M}=0.975\frac{c_{s}^{3}}{G}, \tag{30}\]
with \(\mathrm{G}\) the gravitational constant. Material falling onto the disk this way always ends up within the centrifugal radius \(r_{c}(t)\), which is given by
\[r_{c}(t)=\frac{r_{\mathrm{shell}}(t)^{4}\omega(r_{\mathrm{shell}})^{2}}{GM(t)}, \tag{31}\]
where \(r_{\mathrm{shell}}(t)\) is the radius of the shell within the molecular cloud that the expansion wave passes at time \(t\), \(\omega(r_{\mathrm{shell}})\) the angular velocity of that shell, and \(M(t)\) the total mass that has been accreted onto the star+disk from the cloud up until that time. Since both \(r_{\mathrm{shell}}\) and \(M(t)\) are proportional to time (due to the constant sound speed and accretion rate), \(r_{c}\propto t^{3}\) and infalling matter ends up progressively further away from the star. The way this matter is spread out over the disk is then
\[\dot{\Sigma}_{g}(r,t)=\frac{\dot{M}}{\pi r_{c}^{2}}\frac{1}{8}\left(\frac{r}{ r_{c}}\right)^{-3/2}\left[1-\left(\frac{r}{r_{c}}\right)^{1/2}\right]^{-1/2}. \tag{32}\]
This is the \(\dot{\Sigma}_{g}\) that enters Equation (13) for the time evolution of the gas as the source term. The source term for the dust, \(\dot{\Sigma}_{d}\) in Equation (27), can then simply be found by multiplying \(\dot{\Sigma}_{g}\) by the dust-to-gas ratio.
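A minimal sketch of this source term, for a given cloud accretion rate and centrifugal radius, is shown below. The constants and the example values (the sound speed of the 14 K cloud and the end-of-infall centrifugal radius of 93.9 AU quoted in the Results section) are only illustrative, and the function is our own shorthand rather than DISKLAB's implementation.

```python
import numpy as np

G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33    # solar mass [g]
AU = 1.496e13       # astronomical unit [cm]

def infall_source_term(r, r_c, m_dot):
    """Gas infall rate Sigma_dot_g(r) of Eq. (32) for a given centrifugal
    radius r_c and cloud accretion rate m_dot (CGS units).

    Returns zero outside r_c; note the (integrable) divergence as r -> r_c.
    """
    x = r / r_c
    sigma_dot = np.zeros_like(r)
    inside = x < 1.0
    sigma_dot[inside] = (m_dot / (np.pi * r_c**2) / 8.0
                         * x[inside]**(-1.5)
                         * (1.0 - np.sqrt(x[inside]))**(-0.5))
    return sigma_dot

# Example: accretion rate of the 14 K cloud (c_s ~ 0.22 km/s), spread out with
# the end-of-infall centrifugal radius of 93.9 AU
c_s = 2.2e4                          # isothermal sound speed [cm/s]
m_dot = 0.975 * c_s**3 / G           # Eq. (30)
r = np.logspace(np.log10(0.06), 3, 1000) * AU
sigma_dot_g = infall_source_term(r, r_c=93.9 * AU, m_dot=m_dot)
```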
Especially in a model that includes infall, it is important to ensure that the disk does not become so massive that its own self-gravity starts to play a role, leading to the gravitational instability. This would produce non-axially symmetric effects (spiral waves) in the disk, which can't be properly treated with the axially symmetric models DISKLAB produces. The stability of the disk against this phenomenon can be checked in DISKLAB with a simple command that calculates the Toomre \(Q\) parameter (Toomre 1964) as
\[Q=\frac{c_{s}\Omega_{K}}{\pi G\Sigma_{g}}. \tag{33}\]
The disk remains stable as long as \(Q>2\). Under this condition, pressure forces and shear can act faster to destroy overdensities than self-gravity can produce them.
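In practice this check amounts to evaluating Equation (33) on the radial grid, for example as in the short (assumed, non-DISKLAB) helper below.

```python
import numpy as np

def toomre_q(c_s, omega_k, sigma_gas, G=6.674e-8):
    """Toomre Q of Eq. (33) in CGS units; the disk is treated as stable where Q > 2."""
    return c_s * omega_k / (np.pi * G * sigma_gas)

def is_gravitationally_stable(c_s, omega_k, sigma_gas):
    """True if Q > 2 at every radial grid point."""
    return np.all(toomre_q(c_s, omega_k, sigma_gas) > 2.0)
```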
In practice, operations in DISKLAB are performed by applying functions to the DiskRadialModel object that is created at the start. This way, many individual calculations can be performed using only a single line of code. It is then up to the user to combine the different functionalities into a sensible model. Evolving a model over time can be done using a for-loop, where each iteration corresponds to the next time step, and within which the individual functions are called to solve the viscous disk equation and update other time-dependent parameters such as temperatures, velocities and masses. Because any parameter that can vary with radius in the disk is stored as a Python array, with each entry corresponding to one point on the chosen radial grid, they can also easily be manipulated by hand, using standard Python commands. It is for example possible to simulate a chemical reaction in which one type of dust is transformed into another by removing surface density from the array for the first dust species and adding it to that of the second.
Only minor changes were made to the standard DISKLAB code itself. Two new functions were added to introduce a planet and open a gap in the disk, in addition to the existing gap models in the package. These will be described in the next section. The radial velocity of the dust was also set to equal that of the gas at the innermost three grid points only, overriding Equation (25) there, due to a problem with a boundary condition that will also be described in the next section.
### Model setup
The model was calculated on a 1D grid of 1000 logarithmically spaced radial points between 0.06 and 1000 AU. The inner edge of the disk \(r_{\mathrm{in}}\) at 0.06 AU is the same as used in the DKA18 model, and seems a reasonable estimate for the radius of the still contracting proto-Sun.
The basic disk model used was the model by Lynden-Bell & Pringle (1974) as described in Lodato et al (2017). But while this model is present in the code and evolving at the earliest time steps, we are really interested in letting the disk build up naturally due to the effects of infall. The initial mass was therefore chosen to have a negligibly low value, \(M_{0}=10^{-20}M_{\odot}\), which is about 19 orders of magnitude lower than the final disk mass. In practice then, it is not relevant what happens with this initial model at early times, because the material infalling from the molecular cloud core dominates the evolution of the disk as soon as the centrifugal radius of the infall \(r_{c}\) exceeds the inner edge of the disk.
The way infalling matter is spread over the disk, described by Equation (32), depends solely on the properties of the molecular cloud core. The three relevant parameters are the cloud mass \(M_{c}\), temperature \(T_{c}\) and rotation rate \(\Omega_{c}\). For the cloud mass, a value of \(M_{c}=1.05\)\(\mathrm{M}_{\odot}\) was chosen in order for the Sun to end up with roughly 1 \(\mathrm{M}_{\odot}\). The cloud temperature was set to \(T_{c}=14\)
K, which is a typical temperature for a molecular cloud (Wilson et al. 1997). The choice for the rotation rate, \(\Omega_{c}=2.3\cdot 10^{-14}\) rad s\({}^{-1}\), is more or less arbitrary, but the effect of varying this parameter will be explored later. With these choices for mass and temperature, the duration of the infall phase can be calculated by dividing the cloud mass by the accretion rate in Equation (30):
\[t_{\rm infall}=\frac{M_{c}}{\dot{M}}=\frac{GM_{c}}{0.975c_{s}^{3}}\approx 0.4\ {\rm Myr}. \tag{34}\]
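This estimate can be verified directly from Equations (30) and (34). The short script below, using the cloud values adopted above, yields a sound speed of about 0.22 km/s, an accretion rate of roughly \(2.6\times 10^{-6}\) \(\mathrm{M}_{\odot}\) yr\({}^{-1}\) and an infall duration of about 0.4 Myr; the constants are standard CGS values and the script itself is only a worked check, not part of DISKLAB.

```python
import numpy as np

G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
K_B = 1.380649e-16    # Boltzmann constant [erg/K]
M_P = 1.6726e-24      # proton mass [g]
M_SUN = 1.989e33      # solar mass [g]
YR = 3.156e7          # year [s]

T_c = 14.0            # cloud temperature [K]
M_c = 1.05 * M_SUN    # cloud mass [g]
mu = 2.3              # mean molecular weight

c_s = np.sqrt(K_B * T_c / (mu * M_P))   # isothermal sound speed of the cloud
m_dot = 0.975 * c_s**3 / G              # Eq. (30)
t_infall = M_c / m_dot                  # Eq. (34)

print(f"c_s = {c_s / 1e5:.2f} km/s, "
      f"Mdot = {m_dot * YR / M_SUN:.1e} M_sun/yr, "
      f"t_infall = {t_infall / (1e6 * YR):.2f} Myr")
```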
To keep the disk gravitationally stable, we instead mimic the effects of the gravitational instability by adopting an artificially enlarged \(\alpha\)-parameter (Armitage et al. 2001) during the infall phase. This redistributes the infalling material much more rapidly, ensuring that most of the gas flowing through the disk actually ends up in the star. It turns out a value of \(\alpha=0.6\) is sufficient to ensure that \(Q>2\) and the total disk mass \(M_{\rm disk}<0.1M_{\odot}\) at all times. This value was therefore used during the entire infall phase, after which \(\alpha\) was changed to follow Equations (36-38). In addition to this, we also ran a simulation with a reduced cloud rotation rate of \(\Omega_{\rm c}=1\times 10^{-15}\) rad s\({}^{-1}\), which is low enough that the gravitational instability never triggers, allowing us to use Equations (36-38) from the start.
To solve Equation (13) for the viscous evolution of the gas, a boundary condition needs to be specified. The default zero-gradient boundary condition caused issues with the temperature calculation, so instead we set \(\Sigma_{g}\) to a fixed value at the inner disk edge. A low value of \(\Sigma_{g}=10^{-14}\) g cm\({}^{-2}\) was used for the initial low-mass disk model, which was then increased to \(\Sigma_{g}=10\) g cm\({}^{-2}\) when the centrifugal radius of the infall crossed the disk inner edge, a value similar to the gas surface density predicted for the innermost disk at the end of the simulation. However, using a fixed value can cause problems with radial outflow of dust species at the inner disk edge when \(\Sigma_{g}\) increases in the outward direction, because this creates a pressure gradient that can act as a particle trap in the same way a planet gap does. For this reason, the radial velocity of the dust species was set to equal that of the gas at the innermost three grid points only, which prevents them from getting trapped.
The model is now set up and ready to be evolved over time. The simulation ran for 5 million years, by the end of which CAI parent bodies are thought to have finished their formation. Since DISKLAB uses implicit integration for calculating the time evolution of the model, relatively large time steps could be employed. Unfortunately however, this does not apply to the thermal conversion of the dust species, which had to be done explicitly at each time step. We therefore used a constant time step of 0.2 years during the infall phase, when dust mixing is particularly strong, after which we switched to a time step of 1000 years for the remainder of the simulation. More information on how these time steps were chosen can be found in the Appendix. This way, each individual simulation could be finished in under a day.
Some time after infall had ended and the disk had fully formed, Jupiter's core was assumed to form, start growing and open a gap in the disk. We once again followed the DKA18 model in the way this was incorporated into the simulation. Jupiter's core first appeared in the model when it had reached a mass of \(M_{J}=30~{}M_{\oplus}\). It then started growing to its full size by accreting gas from its surroundings, although this gas was not actually removed from the disk. At every time step \(dt\), the amount of mass added is calculated as
\[dM=\frac{dt}{\tau}\int_{r}\Sigma_{g}(r)e^{-x^{2}}2\pi rdr, \tag{39}\]
Figure 1: Result of a simulation in which the parameterization for the \(\alpha\)-parameter as given by Equations 36 through 38 is used throughout the infall phase. _Left_: Evolution of the total mass present in each of the molecular cloud, star and disk as a function of time. In this scenario, mass can’t accrete onto the star from the disk rapidly enough, so the disk keeps growing in mass until it even exceeds the mass present in the star. Even at the end of the simulation after 5 Myr, the star has only gained roughly 0.5 M\({}_{\odot}\) of mass. _Right_: Resulting value of the Toomre \(Q\) as a function of radius in the disk, shown for several intermediate time steps. As the disk grows in mass, \(Q\) drops below the safe value of 2 (indicated by the horizontal dashed line) everywhere except in the innermost part of the disk. This means that the self-gravity of gas in the disk can no longer be ignored, and the disk becomes susceptible to the gravitational instability.
Figure 2: Shakura-Sunyaev \(\alpha\)-parameter as a function of radius in the disk. Two regions of constant \(\alpha\) in the inner and outer disk are connected by a decreasing powerlaw between 1 and 10 AU. This profile applies to the post-infall phase only. The sharp Gaussian peak at 3 AU forms at 0.6 Myr, and is caused by the presence of Jupiter’s growing planetary core. This peak is not used for turbulent mixing of dust species, as it represents a torque exerted on the gas by the planet, and not a real increase in viscosity.
where
\[x=\frac{r-r_{J}}{R_{H}}, \tag{40}\]
with the Hill Radius \(R_{H}\) given by
\[R_{H}=r_{J}\left(\frac{M_{J}}{3M_{*}}\right)^{1/3}. \tag{41}\]
Here \(r_{J}\) is the location where the planet forms, assumed to be at 3.0 AU (or roughly 40% closer in than its current position at 5.2 AU from the Sun) and \(\tau\) is the growth time scale which sets what fraction of the available gas in the vicinity is accreted. Because the gas surface density \(\Sigma_{g}\) is a function of radius and time and is also modified by the presence of the planet itself, the precise value of \(\tau\) used depends on the choice of time step as well as \(r_{J}\). The value we used, \(\tau=1537.5\) yr, thus only works well for our chosen time step of 1000 years for the post-infall phase. No mass was added to the planet beyond \(M_{J}=317.8~{}M_{\oplus}\), which equals 1 Jupiter mass. For the chosen values for \(r_{J}\) and \(\tau\), this value was reached after roughly 4.5 Myr. The way a gap was opened in the disk by this growing protoplanet is by modifying the value for \(\alpha\) in its vicinity:
\[\alpha_{\rm new}=\alpha+(\alpha_{\rm peak}-\alpha)e^{-x^{2}}. \tag{42}\]
This adds a Gaussian spike to the \(\alpha\)-profile as described before, which can be seen in Figure 2. The value of \(\alpha_{\rm peak}\) was set to 0.01, in accordance with Desch et al (2018). This peak in \(\alpha\) acts as a torque by the planet, pushing material away and out of the gap. Because physically there is no real increase in the turbulent viscosity, the mixing of dust species should not be affected. Therefore the \(\alpha\)-profile used in the mixing calculation does not include this peak. The final parameter relevant to the planet gap is the formation time \(t_{\rm planet}\), which is also the time when the gap starts to open. As in the DKA18 model, this time was set to \(t_{\rm planet}=0.6\) Myr, though it must be noted that this time can't be directly compared between the two models. In the DKA18 model, \(t=0\) refers to the point where the disk is fully formed (the end of their non-existent infall phase) and the time at which CAI formation starts. In our model however, \(t=0\) corresponds to the very beginning of the infall phase. As we will see in the Results section, CAI formation already begins early in the infall phase, so well before the disk is finished building up. So while \(t_{\rm planet}\) has been chosen to (at least roughly) match the 0.6 Myr after the first CAIs start to appear, it cannot simultaneously match the 0.6 Myr after the end of the infall phase. Both the planet formation time \(t_{\rm planet}\) and the formation location \(r_{J}\) will be varied later to see how different choices for these parameters impact the results.
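The gap prescription of Equations (39) through (42) can be summarized in two short helpers, sketched below. The function names are ours, the integral in Equation (39) is approximated with a simple trapezoidal sum, and the inputs are assumed to be NumPy arrays over the radial grid.

```python
import numpy as np

def gap_alpha_profile(r, alpha, r_planet, m_planet, m_star, alpha_peak=0.01):
    """Modified alpha-profile around the planet, Eqs. (40)-(42).

    A Gaussian bump of width one Hill radius is added on top of the background
    profile; this modified profile drives the gas torque only and is not used
    for the turbulent mixing of the dust species.
    """
    r_hill = r_planet * (m_planet / (3.0 * m_star))**(1.0 / 3.0)   # Eq. (41)
    x = (r - r_planet) / r_hill                                    # Eq. (40)
    return alpha + (alpha_peak - alpha) * np.exp(-x**2)            # Eq. (42)

def planet_accretion(dt, tau, r, sigma_gas, r_planet, r_hill):
    """Mass added to the planet in one time step, Eq. (39), as a trapezoidal sum."""
    x = (r - r_planet) / r_hill
    return dt / tau * np.trapz(sigma_gas * np.exp(-x**2) * 2.0 * np.pi * r, r)
```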
At the end of the 5 Myr simulation, the disk was simply left as is. No physics was included to simulate the eventual dissipation of the disk, as this would occur after the phenomenon of interest, CAI parent body formation, has already occurred. A summary of physical parameter choices made for the main model is shown as Table 2.
## 3 Results
### Main model
We can now move on to the results of our main model. The disk starts building up when the centrifugal radius \(r_{c}\) exceeds the inner edge of the disk \(r_{\rm in}\), which happens around 35 kyr after the start of the simulation. The infalling gas is then spread out over the disk according to Equation (32), the result of which is shown as Figure 3. The centrifugal radius itself can be seen in this plot as the location of the steep vertical line beyond which no material rains down on the disk at that time. As this radius increases over time, so does the total surface area of the disk within that radius. The value of \(\dot{\Sigma}_{g}\) at any one radius \(r<r_{c}\) must therefore decrease over time, in order for the total accretion rate \(\dot{M}\) to remain constant as per Equation (30). The infall phase ends when the molecular cloud core has been depleted of mass after roughly 0.4 Myr, at which point the centrifugal radius has reached \(r_{c}=93.9\) AU.
Figure 4 shows the resulting surface density evolution of the gas during the infall phase. What stands out in this plot is that at every time step, a significant amount of gas is present in the region beyond the centrifugal radius. This means that this material must have ended up there by viscous spreading. Figure 5 shows the radial velocity of the gas throughout the disk. The vertical dashed lines indicate the location of the centrifugal radius at each time. The radial velocity interior to \(r_{c}\) is strongly negative throughout the infall phase, since most of the gas infalling from the molecular cloud core is rapidly accreting onto the growing Sun. At the centrifugal radius however, the radial
\begin{table}
\begin{tabular}{l l l} \hline Parameter & Value & Description \\ \hline \(M_{c}\) & 1.05 \(M_{\odot}\) & Initial mass of the molecular cloud core \\ \(T_{c}\) & 14 K & Isothermal temperature of the molecular cloud core \\ \(\Omega_{c}\) & \(2.3*10^{-14}\) s\({}^{-1}\) & Solid-body rotation rate of the molecular cloud core \\ \(M_{\rm star,0}\) & \(1.05*10^{-4}M_{\odot}\) & Initial mass of the Sun \\ \(M_{0}\) & \(10^{-20}M_{\odot}\) & Mass of the initial Lynden-Bell \& Pringle disk model \\ \(R_{0}\) & 1 AU & Truncation radius of the initial Lynden-Bell \& Pringle disk model \\ \(r_{\rm in}\) & 0.06 AU & Inner edge of the disk \\ \(\sigma_{\rm in}\) & 10 g cm\({}^{-2}\) & Gas surface density at the disk inner boundary \\ \(\alpha_{\rm in}\) & 0.6 & Global value of the \(\alpha\)-parameter during the infall phase \\ \(t_{\rm planet}\) & 0.6 Myr & Time of formation of Jupiter’s core \\ \(a_{\rm planet}\) & 3 AU & Semi-major axis of Jupiter’s orbit at the formation time \\ \(m_{\rm planet,0}\) & 30 \(M_{\oplus}\) & Mass of Jupiter’s core at the formation time \\ \(\tau_{\rm planet}\) & 1537.5 yr & Growth time scale of Jupiter (for post-infall time steps of 1000 years) \\ \(\alpha_{\rm peak}\) & 0.01 & Value of \(\alpha\) at Jupiter’s location after its formation \\ \(\kappa_{d,\rm Ross}\) & 5 cm\({}^{2}\) g\({}^{-1}\) & Global value of the dust opacity \\ \hline \end{tabular}
\end{table}
Table 2: Overview of physical parameters chosen for the model. See Table 1 for the properties of the dust species. The values of \(\alpha\) in the post-infall phase are given in Equations (36) to (38).
velocity switches sign as the disk is expanding outward beyond this point.
At the end of the infall phase, the disk has reached a total mass of \(M_{\rm disk}=0.064\ M_{\odot}\). While this is comparable to the disk in the DKA18 model, which has \(M_{\rm disk}=0.089\ M_{\odot}\), this mass is spread out in a very different way. Our disk extends all the way out to 1000 AU, while the disk in the DKA18 model is much more compact, its gas surface density sharply decreasing past 10 AU. In turn, the surface density in the inner part of our disk is three orders of magnitude lower than in the DKA18 model. This has consequences for the accretion rate of the disk onto the Sun: while the disk in the DKA18 model loses about half its mass in just 0.1 Myr of subsequent evolution, Figure 6 shows that in our case, both the stellar and the disk mass barely change anymore after the end of the infall phase. This could simply mean that the chosen \(\alpha\)-parametrization is unrealistic, as the resulting viscosity is too weak to move significant quantities of mass back in towards the Sun.
Now that the disk has been fully formed, we can see how it evolves in the post-infall phase. Figure 7 shows the full time evolution of the gas surface density from the start of disk formation to the end of the simulation. The post-infall phase is represented here by the red lines. While the surface density is clearly decreasing over time in the innermost 10 AU of the disk, little change is visible beyond that point, where \(\alpha\) is lowest. An important feature of the post-infall phase is the planet gap that has opened up at \(r=3\) AU 0.6 Myr after the start of the simulation, a closeup of which can be seen as Figure 8. The gap can be seen to get wider over time, as Jupiter continues to accumulate mass, increasing its Hill radius. The surface density of gas within the gap is successfully reduced by 2 orders of magnitude. The mass evolution of Jupiter itself is shown as Figure 9. Its growth turns out to be fairly linear, with the growth time scale \(\tau\) chosen so that it reaches its full mass of \(M_{J}=317.8\ M_{\oplus}\) after about 4.5 Myr.
So far we have only looked at the gas component of the disk during the simulation, but what we're really interested in is the behaviour of the dust species during this time.
Figure 4: Viscous evolution of the gas surface density \(\Sigma_{g}\) during the infall phase. The disk starts building up when \(r_{c}>r_{\rm in}\). Rapid viscous expansion causes the gas to move all the way out to 1000 AU. The surface density keeps increasing everywhere during the entire infall phase as more and more gas is added to the disk.
Figure 5: Radial velocity of the gas \(v_{R}\) during the infall phase. At the centrifugal radius, indicated by the dashed vertical lines, \(v_{R}\) becomes strongly positive due to a large gradient in surface density pushing gas outward. Interior to this radius, the gas moves inward as it accretes onto the Sun. The sudden vertical jumps are likely numerical artefacts that are not expected to have a significant impact on the results.
Figure 3: Infall rate \(\dot{\Sigma}_{g}\) of gas from the molecular cloud core onto the disk. As time passes, the centrifugal radius \(r_{c}\) increases and infalling matter is added to greater and greater radii in the disk, while the total accretion rate \(\dot{M}\) remains constant. At the end of infall, the centrifugal radius has reached 93.9 AU.
Figure 6: Mass evolution of the molecular cloud core, the Sun and the disk. The infall rate of matter on the disk is constant over time. Most infalling material quickly accretes onto the Sun, which reaches 0.93 \(M_{\odot}\) at the end of the infall phase after 0.4 Myr. The solar and disk mass change little post-infall, as most of the disk mass is located at large radii where the viscosity is small.
Because this depends heavily on where and when the CAI Factory is active, we first look at the midplane temperature, which is shown in Figure 10. The temperature in the inner disk shoots up to 1400 K soon after the disk starts building up, when viscous heating dominates the temperature calculation there. For the entire duration of the infall phase, there is then some region in the disk where the temperature reaches 1400 K, the CAI Factory. This region never extends past 1 AU, which means that CAIs can only end up past Jupiter's location of formation at \(r=3\) AU by transport within the disk, since they will not be created that far out. After the end of infall at 0.4 Myr, the temperature rapidly drops, and the CAI Factory switches off. During the post-infall phase, viscous heating is negligible compared to irradiative heating. An important result here is that the period during which CAIs are being created in the disk basically coincides with the infall phase. CAI formation should therefore have ceased by the time the disk is fully formed, unlike in the DKA18 model, where this is instead the time when the CAI Factory is first turned on. This earlier formation of CAIs significantly affects the evolution of their surface density.
Figures 11, 12, 13, 14, and 15 show the surface density evolution for the five different dust species in the model. As Figure 11 shows, population 1 dust is completely absent in the innermost part of the disk during the infall phase (except for the very first line when the temperature has not quite reached 1400 K) as that is where it is being converted into populations 3, 4, and 5. An important thing to note is that, just as with the gas, each of the different populations is present much further out in the disk than the centrifugal radius. Evidently the dust particles are being dragged along with the gas during the early rapid viscous expansion phase as well as experiencing strong mixing. This is true even for populations 3, 4, and 5 (CAIs), which originate exclusively from the CAI Factory in the innermost part of the disk. At the end of the infall phase then, each dust population can be found all the way out to 1000 AU. From this point on, the dust particles start to drift back in towards the Sun. Because populations 1, 2, and 3 are micron-sized particles that are well coupled to the gas, their radial velocity is essentially equal to that of the gas throughout the disk. Similarly to the gas in Figure 7 then, these dust species slowly drain from the inner disk, but little change in their surface density can be seen beyond 10 AU.
Figure 8: Close-up of Figure 7 around the location of the planet gap. Surface density builds up here until the end of the infall phase, after which it starts dropping again. As Jupiter grows to its full size by accreting mass from its surroundings, its Hill radius increases, widening the gap.
Figure 10: Midplane temperature due to irradiation and viscous heating. During the infall phase, the temperature reaches 1400 K in the inner disk, activating the CAI Factory. At the end of infall, the temperature quickly decreases again.
Figure 7: Viscous evolution of the gas surface density \(\Sigma_{g}\) during the full simulation. After the infall phase, \(\Sigma_{g}\) keeps dropping in the inner disk where matter is accreting onto the Sun. In the outer disk, the viscosity is lower and there is less observable change. The planet gap can clearly be seen at 3 AU.
Figure 9: Growth of Jupiter from a 30 \(M_{\oplus}\) core at 0.6 Myr to its full size of 317.8 \(M_{\oplus}\) (1 Jupiter mass) after 4.5 Myr by accretion of gas in its vicinity. The growth turns out to be roughly linear.
Figure 11: Surface density evolution of population 1 dust. During most of the infall phase, there is no population 1 dust in the inner disk due to its complete thermal conversion into other dust species.
Figure 16: Radial velocity of the dust species in the post-infall phase. The micrometer-sized particles, populations 1-3, are well coupled to the gas and experience very little radial drift in the outer disk. They are not caught in the pressure bump, but simply drift through the gap into the inner disk. The larger populations 4 and 5 have higher radial drift velocities, but cannot easily pass the planet gap.
Figure 14: Surface density evolution of population 4 dust. These grains are large enough to display noticeable inward radial drift in the outer disk as well as a trapping effect in Jupiter’s pressure bump. The 35000 year line is not visible, since the temperature is still below 1400 K.
Figure 13: Surface density evolution of population 3 dust. The 35000 year line is not visible, since the temperature is still below 1400 K.
Figure 15: Surface density evolution of population 5 dust (CAIs). As all other dust species, the CAIs are efficiently transported into the outer disk during the infall phase. They then start drifting back in and piling up in the region behind the planet gap. CAIs in the inner disk slowly drain as they accrete onto the Sun. The 35000 year line is not visible, since the temperature is still below 1400 K.
No trapping effect can be seen for these populations behind the planet gap. The sign of the pressure-gradient force reverses in the outer gap, where the gas pressure increases in the outward direction, which increases the inward force on the gas and causes it to orbit with super-Keplerian velocities. The resulting drag force on solid particles causes them to drift towards the pressure maximum, away from the star. However, smaller particles such as populations 1 through 3 are coupled to the gas strongly enough to be dragged along with it as it flows through the gap, while only larger particle sizes with radial velocities exceeding that of the gas are hindered by this barrier. This process is called dust filtration by Rice et al (2006). In Figure 16, which shows the effective radial velocity of the different dust species in the post-infall phase, it is shown that dust particles belonging to populations 1, 2, and 3, when approaching the planet gap from larger radii, are simply accelerated through. The picture is different for the much larger AOAs (population 4) and CAIs (population 5). These populations have higher Stokes numbers than small dust grains, and as such they drift more strongly towards higher gas pressures, leading to larger radial drift velocities. In the far outer disk, the radius at which the surface density of these dust species starts to drop off steeply can be seen to move inward over time. Because the planet gap does serve as an effective barrier against particles of these sizes, they are piling up in the region just behind the planet gap, as evidenced by the surface density increasing over time there. Similar to the other dust populations, the CAIs and AOAs are slowly depleting in the inner disk, where no barrier exists to prevent them from drifting into the Sun. However, the average radial velocity of CAIs in the inner disk implies that all CAIs would vanish in this region on a time scale of order \(10^{5}\) years after their formation ceased. Since this is clearly not what is happening, with a lower but still significant amount of surface density left even after 5 Myr, at least some part of the CAIs must still be leaking through the planet gap into the inner disk.
Figure 17 shows the CAI abundance as a mass fraction of all the dust species in the disk. This represents the main result of this project. Comparing first the final abundance profile after 5 Myr to the result from the DKA18 simulation (their Figure 8), we see that they are roughly similarly shaped, with a large abundance peak just beyond Jupiter's location and little to no CAI abundance elsewhere. There are however also some important differences. First of all, the peak in our model is much broader, extending over almost 100 AU, while the peak in the DKA18 model is not even one full AU wide. This can at least be partially explained by the CAIs having been transported outward so far that they are simply still in the process of drifting back in. Given more time (or equivalently, given a higher accretion rate in the outer disk), the CAIs would presumably continue to drift back in, leading to a narrower and taller abundance peak. Second, the overall abundance in our peak is significantly higher than in the DKA18 model. However, there are other uncertainties affecting the precise abundance values. One of these is parameter choice, in particular the molecular cloud parameters. We will explore the consequences of different parameter choices in section 3.2. For now, suffice it to say that the general shape of the abundance profile is more reliable than the precise quantitative values.
While the final abundance profile after 5 Myr looks broadly similar to that in the DKA18 model, the intermediate time steps do not. At \(t=50\) kyr, the first line in Figure 17 with a non-zero abundance, we see that the CAI abundance has a virtually constant value of 2.4% out to 50-60 AU. Since this is the value obtained inside the CAI Factory when all population 1 dust is thermally processed, it means that outward mixing is very rapid and efficient. At subsequent time steps, we see that the CAIs are mixed further outward, until infall ends after \(t=0.4\) Myr. At this point the abundance is rather low throughout the entire disk, but inward radial drift of CAIs now commences. Something interesting then happens that does not occur in the model without infall: apart from the abundance peak that starts to build up when the planet gap opens, a second peak forms in the far outer disk. As CAIs drift in, they start slowing down (Figure 16) when the gas density increases and their Stokes number (Figure 18) drops, causing a pile-up. This second peak slowly drifts inward over time and eventually merges with the first peak, leading to the final abundance profile with a single broad peak.
### Parameter study

To test how sensitive these results are to the choices made for the model, a parameter search was conducted in which the simulation was run a number of times, varying one physical parameter at a time over a number of possible values. Not every possible parameter was varied in this way. For example, while the molecular cloud temperature \(T_{c}\) and rotation rate \(\Omega_{c}\) are not known a priori, the cloud mass \(M_{c}\) should roughly be the same as the total mass of the Solar System, so it makes little sense to see what happens for completely different cloud masses.
We first show what happens when varying some of the parameters associated with the disk itself. Figure 19 shows the CAI abundance profile resulting from variation of the disk inner edge \(r_{\mathrm{in}}\) between 0.05 and 0.1 AU. Moving the inner edge further away from the Sun leads to slightly higher abundances in the CAI peak, but this effect is so small that we can say the overall abundance is essentially independent of the inner edge location.
Figure 20 shows the result of variations of \(\sigma_{\mathrm{in}}\), the gas surface density at the inner edge, which is used as a boundary condition for Equation (13). The explanation here can be brief: while this parameter influences the surface densities in the inner disk, the dust species are all affected equally, so the abundance profile does not change.
Figure 21 shows the result for variations of the opacity \(\kappa\) in the disk. When the opacity increases, it becomes harder for heat to escape from the disk, leading to an increase in the temperature, at least in the part of the disk where the temperature wasn't already at the maximum capped value. In practice then, this means that the CAI Factory grows in radial extent. More population 1 dust is then converted into CAIs, leading to a higher peak abundance for higher values of \(\kappa\).
The search over different values of the initial \(\alpha\)-parameter during the infall phase is presented as Figure 22. A trend is visible where the peak abundance increases for higher values of this parameter. Since this \(\alpha\) is used to mimic the effects of gravitational instability, higher values imply stronger redistribution of mass in the disk. This might transport additional CAIs outward, increasing abundances in the outer disk. What is less clear is why the abundance peak is broader for lower values of \(\alpha\). Perhaps the lower efficiency of mass redistribution slows down the inward radial drift of CAIs in the outer disk because the gas surface density ends up lower, reducing drag forces on the dust. It must be noted that for the lowest values of \(\alpha\), the disk technically becomes too massive to ignore the effects of self-gravity, even though these values do follow the general trend of lower but
Figure 21: Abundances after 5 Myr for different values of the disk opacity \(\kappa\). Higher opacities trap more heat in the disk, increasing the temperature and thereby enlarging the radial extent of the CAI Factory. This creates more CAIs and increases their abundance in the pressure bump.
Figure 22: Abundances after 5 Myr for different values of the \(\alpha\)-parameter during the infall phase. Higher values increase the peak abundance, likely due to more efficient outward mass redistribution.
Figure 19: Abundances after 5 Myr for different values of the disk inner edge \(r_{\mathrm{in}}\). Higher values of \(r_{\mathrm{in}}\) slightly increase the peak CAI abundance, but this effect is so small that the results are basically independent of the inner edge location.
Figure 20: Abundances after 5 Myr for different values of the surface density at the disk inner edge \(\sigma_{\mathrm{in}}\). This parameter seems to have no impact on the CAI abundance, as it affects dust species equally.
broader abundance peaks in this parameter search. For \(\alpha=0.1\) or 0.2, the disk mass exceeds 0.15 M\({}_{\odot}\). This might also have an impact on the results.
Moving on to the molecular cloud parameters, Figure 23 shows the dependence of the CAI abundance on variation of the molecular cloud rotation rate \(\Omega_{c}\), which is also a measure of the total angular momentum in the cloud. Here we see that the CAI abundance peak past the planet gap grows both wider and taller for increasing angular momentum. The first of these effects is easy to understand. If the angular momentum in the cloud is low, the centrifugal radius in Equation (31) is also low, and infalling material is deposited on the disk relatively close to the star. In contrast, high cloud angular momentum causes infalling material to be deposited further out. A secondary effect of this is that, because it takes longer for the centrifugal radius to reach the inner edge of the disk when \(\Omega_{c}\) is low, more of the cloud mass will accrete directly onto the Sun instead of on the disk, leading to a less massive disk. The second effect, higher peak abundances for higher \(\Omega_{c}\), is more surprising, because in general, rapidly rotating molecular clouds produce more massive disks with lower crystallinity than slower rotating clouds (Dullemond et al. 2006). We would expect that first, if more infalling population 1 dust ends up at larger radii from the Sun, less of it ends up in the CAI Factory where it can be used to produce CAIs. The total amount of CAIs in the disk will then also be lower. Second, population 1 dust infalling at larger radii will dilute the CAI abundance at those locations. These two effects would lead to lower abundances behind the planet gap for larger values of \(\Omega_{c}\) instead of higher. However, it seems these effects are overshadowed by another: for higher \(\Omega_{c}\), surface densities in the disk will be higher further out, leading to more viscous heating and higher temperatures there as well. This increases the radial extent of the CAI Factory, which produces CAIs efficiently enough that the net effect is an increase in abundance.
Figure 24 shows the parameter search over the molecular cloud temperature \(T_{c}\), which seems to have a larger effect on the peak CAI abundance than most other parameters searched over. Here we must first note that varying the cloud temperature alone would have an undesirable side effect. When the cloud temperature increases, so does the local sound speed within the
Figure 23: Abundances after 5 Myr for different values of the cloud rotation rate \(\Omega_{c}\). The abundance peak grows taller and wider for increasing values of \(\Omega_{c}\), as more material ends up in the disk instead of directly accreting onto the Sun, also increasing the efficiency of the CAI Factory.
Figure 26: Abundances after 5 Myr for different values of the planet formation location \(a_{\rm planet}\). As the planet moves outward, its Hill radius increases, widening the gap and making the trapping slightly more efficient.
Figure 24: Abundances after 5 Myr for different values of the molecular cloud temperature \(T_{c}\). Higher temperatures decrease the duration of the infall phase. While this causes higher surface densities in the inner disk early on, increasing the efficiency of the CAI Factory, it also causes stronger outward transport of CAIs, spreading them further out over the disk.
Figure 25: Abundances after 5 Myr for different values of the planet formation time \(t_{\rm planet}\). Perhaps surprisingly, this parameter has little effect on the final abundance profile. CAIs are mixed so far out that they are still in the process of drifting back in even after 5 Myr.
cloud. This means that the outward travelling expansion wave that causes the cloud to collapse moves faster, so the entire infall phase takes place in a shorter time span. However, the radius of the cloud also depends on the sound speed through
\[R_{c}=\frac{GM_{c}}{2c_{s}^{2}}. \tag{43}\]
Therefore, increasing the temperature of the molecular cloud will cause it to contract. However, because the rotation rate \(\Omega_{c}\) is kept constant, this removes angular momentum from the cloud. We therefore varied both the cloud temperature and rotation rate at the same time, to keep the total angular momentum constant. We see that higher cloud temperatures then lead to a lower but broader CAI abundance peak. At least early on, the more rapid infall should lead to higher surface densities in the inner disk, making CAI production more efficient. However, the higher surface densities also lead to stronger outward transport, and this effect seems to dominate here. In the simulations with higher molecular cloud temperatures, the CAIs have been spread out more, leading to a lower but broader abundance peak which would presumably become taller and narrower given more time for continued inward radial drift.
Moving on to the planet parameters, there are two important quantities we have varied: the planet formation time \(t_{\rm planet}\), shown in Figure 25, and the planet formation location \(a_{\rm planet}\), shown in Figure 26. Interestingly, while changing the location of formation obviously also moves the location of the planet gap and the pressure maximum beyond it, the final abundance profile otherwise does not change very strongly with these parameters. If the effects of infall were not considered, this would be rather surprising. In the DKA18 model, CAIs are transported outward only by means of turbulent diffusion and meridional flow. Their surface density rapidly drops beyond \(r=3\) AU, which means that far fewer CAIs would be present beyond the location of the planet gap if Jupiter formed further out in the disk, leading to a lower peak abundance. Likewise, because CAI formation has ceased by the time the planet forms, later formation times would mean that more CAIs would have drifted back in towards the Sun, possibly already having accreted onto it. This again leaves fewer CAIs far enough out to be trapped in the pressure maximum. To achieve the CAI abundance profile from the DKA18 model then, it is essential that Jupiter does not form too late or too far from the Sun. In our model however, the outward transport of CAIs during the infall phase is so efficient that CAIs will remain present in the outer disk, drifting in, even after several million years. The precise time of formation (even when infall is still ongoing at 0.2 Myr) or location then makes a relatively small difference to the final abundances.
While by no means an exhaustive search over all the different possibilities, we can also attempt to run the model with different parameterizations of the post-infall \(\alpha\). The result of this is shown as Figure 27. For reference, the default \(\alpha\)-profile as given by Equations (36) through (38) is shown as \(\alpha\)-model 0. In \(\alpha\)-model 1, the viscosity was raised by a factor 10 in the inner disk (\(r<1\) AU), with the powerlaw part between 1 and 10 AU adjusted to correctly connect the two regions of constant \(\alpha\). The primary effect of this model seems to be to increase CAI abundances in the inner disk, where stronger mixing likely counteracts the effects of radial drift inward. In \(\alpha\)-model 2, the viscosity was increased by a factor 10 in the outer region (\(r>10\) AU) instead. This seems to decrease the effectiveness of the particle trap in the pressure bump, as the stronger turbulent mixing behind the planet gap has an easier time propelling CAIs through the gap into the inner disk, where the abundance is now comparable to that in the pressure bump. This is similar to the situation in \(\alpha\)-model 3, where the entire \(\alpha\)-profile was increased by a factor 10. In \(\alpha\)-model 4, a constant value of \(\alpha=10^{-4}\) was used throughout the disk, while \(\alpha\)-model 5 represents the case with \(\alpha=10^{-3}\). Both of these cases lead to a situation where the CAI abundance interior to the planet gap is not much different from that exterior to it. None of these models therefore really represents an improvement over the \(\alpha\)-profile used for our main model, which most accurately reproduces the observations of high CAI abundances in carbonaceous chondrites and low abundances in meteorites originating from the inner disk.
Our main model only takes CAIs with a grain size of 2500 \(\mu\)m into account. While most of the mass in CAIs is contained in such large grains, they are found almost entirely in one group of carbonaceous chondrites (CV), while the most widely occurring type of CAI consists of much smaller grains (Cuzzi et al. 2003). A final thing we can try then, is to see how our model impacts CAIs of different sizes. This is shown in Figure 28. CAIs up to 1000 \(\mu\)m remain spread out over the outer disk quite evenly, while larger CAI grains, which have higher drift velocities, pile
Figure 28: Abundances after 5 Myr for different values of the CAI grain size \(a_{\rm grain}\). The larger the grain size, the faster the CAIs drift back in to pile up in the pressure bump. Smaller grains are more evenly spread throughout the disk.
Figure 27: Abundances after 5 Myr for different values of the post-infall \(\alpha\)-profile. See the text for the meaning of the different \(\alpha\)-models.
up at the location of the pressure bump. This effect gets stronger for larger grain sizes. Smaller CAI grains also maintain a (low but noticeable) presence in the inner disk, reproducing the observation that smaller grains occur in more meteorite types, also those originating in the inner disk.
An important conclusion we can draw from the parameter searches we have shown is that our model is not very sensitive to parameter choices. For a particular grain size, many of the parameters we have varied only have a limited effect on the CAI abundances. The molecular cloud parameters, \(\Omega_{\rm c}\) and \(T_{\rm c}\), appear to have the largest quantitative impact on the results. But importantly, the effect of particle trapping in the pressure bump is not disrupted by changes in any of the parameters we varied, possibly with the exception of different \(\alpha\)-parameterizations. So while quantitative uncertainties are introduced into the CAI abundances we find due to uncertainties in the parameter choices, at least qualitatively the result that the Jupiter solution keeps CAIs trapped in the pressure bump seems to be quite general.
### A second planet
The efficient outward transport of CAIs during the infall phase, and subsequent inward radial drift, raises the question how they would be affected by the presence of multiple planet gaps in the disk. This question never occurred in the case of the DKA18 model, since CAIs were just barely diffusing past the planet gap from the inner disk, instead of approaching it from the far outer disk. If CAIs become trapped in a pressure bump caused by one of the other giant planets in our Solar System, this might prevent the build-up of a significant CAI abundance near Jupiter, which is where the carbonaceous chondrites are thought to have formed.
To answer this question, the model was extended with a second planet gap caused by the formation and growth of Saturn. We assume that Saturn has the same initial mass as Jupiter (\(M=30~{}M_{\oplus}\)) when it starts opening a gap, and grows to its full size in the same amount of time, meaning it reaches a mass of \(M=95.2~{}M_{\oplus}\), or one Saturn mass, after 4.5 Myr. This leaves three important parameters for which we must choose some value: the formation time \(t_{\rm planet}\), the formation location \(a_{\rm planet}\), and the depth of the planet gap, represented by \(\alpha_{\rm peak}\) in Equation (42). Concerning the planet formation time, we assume that Saturn started forming around the same time as Jupiter (\(t_{\rm planet}=0.6\) Myr). We initially place Saturn at its present-day location at \(a_{\rm planet}=9.6\) AU, although it would make sense to place its birthplace closer to the Sun, since in our model Jupiter also formed roughly \(40\%\) closer in. The largest uncertainty lies in the value of \(\alpha_{\rm peak}\). As Saturn is less massive than Jupiter, it makes sense that its planet gap would not be as deep and its potential for trapping dust species in a pressure maximum less strong. This places an upper limit of \(10^{-2}\) on \(\alpha_{\rm peak}\). A lower limit of \(10^{-5}\) is set by the viscosity in the vicinity of Saturn, as this is the lowest value of \(\alpha\) in the surrounding disk, and any lower values would actually produce an overdensity instead of a gap. We rather arbitrarily assume a value of \(\alpha_{\rm peak}=10^{-4}\) to start with.
Figure 29 shows the evolution of the CAI surface density resulting from this model. Saturn's planet gap can be seen to the right of that due to Jupiter. While this gap is clearly less deep, a significant build-up of CAI surface density can still be seen behind it. However, this does not seem to prevent a similar build-up of CAIs in the region in between the two planets. This is reflected in Figure 30 showing the CAI abundance profile. Up until the formation time of both planets at 0.6 Myr, this result is identical to the one-planet case of Figure 17. The inward drift of the second abundance peak where CAIs are piling up is also the same in this case, since this occurs independently from whether a planet is present or not. But in the meantime, the CAI abundance is rising faster at the location of Saturn's pressure bump than at Jupiter's. Saturn is stopping the inward drift of a significant fraction of the CAIs, but a large enough amount is still leaking through the gap to ensure that the final CAI abundance behind Jupiter is a significant fraction of what it is in the one-planet case.
As with the main model, we can investigate how the results depend on the parameter choices. Figure 31 shows how the final abundances after 5 Myr depend on Saturn's formation time \(t_{\rm planet}\), while Jupiter's formation time is kept fixed at 0.6 Myr. Perhaps unsurprisingly, because it matches what happens in the one-planet case, the formation time has little effect on the end result. Sufficient CAIs remain in the disk, drifting inward, and it takes several million years for the second peak to drift all the way in, whether it be to 3 AU or 9.6 AU. Though this effect is not very strong, the CAI abundance in between the two planets does increase for later formation times for Saturn, as more CAIs can drift past its location in the meantime.
Figure 30: Evolution of the CAI abundance in the two-planet case. The largest abundance peak has shifted from Jupiter to Saturn, but enough CAIs leak through that the abundance behind Jupiter is still half of what it is in the one-planet case.
Figure 29: CAI surface density when Saturn is included into the model. While CAIs are piling up in Saturn’s pressure maximum around 10 AU, some part of the CAIs is still leaking through this gap, as the surface density also keeps increasing in the region in between the two planets.
Figure 32 shows how the results depend on Saturn's formation location \(a_{\rm planet}\), while keeping Jupiter fixed at 3 AU. Unlike in the one-planet scenario, this parameter now does have a rather large effect on the final abundance profile. Since the width of the planet gap depends on Saturn's Hill radius, which is proportional to its heliocentric distance, it shrinks as Saturn moves closer in. We have already seen that Saturn's pressure bump is less effective at keeping CAIs in place than Jupiter's is, and moving the formation location closer to the Sun exacerbates this issue. The closer in Saturn forms, the higher the CAI abundance in Jupiter's pressure bump becomes. The blue line for \(a_{\rm planet}=5.6\) AU places Saturn roughly 40% closer in than where it is located today, similar to Jupiter. The result here greatly resembles the one-planet case with only a small dip at the location of the second planet gap.
Finally, and quite as expected, Figure 33 shows that increasing the value of \(\alpha_{\rm peak}\), which causes a deeper gap and thus a stronger pressure gradient in the gas, which pushes CAIs back out towards the pressure maximum, will lead to higher CAI abundances in Saturn's own pressure bump, while letting through less CAIs in the direction of Jupiter. The difference in abundance at Jupiter's pressure bump seems particularly large between the cases with \(\alpha_{\rm peak}=1\times 10^{-4}\) and \(2\times 10^{-4}\), suggesting that perhaps there is a transition point around these values above which Saturn's CAI trapping capability changes from ineffective to effective.
Without more precise knowledge about the location where Saturn originally formed or what value of \(\alpha_{\rm peak}\) best represents how well it can push CAIs back towards its pressure bump, it is difficult to say what exactly the effect of a second planet on the CAI abundances in the Solar System would be. However, as we have demonstrated, the presence of multiple planet gaps in the solar protoplanetary disk would at least not necessarily be an impediment to obtaining the results of our main model as shown in Figure 17. The inclusion of the infall phase from a collapsing molecular cloud into the model by Desch et al (2018) therefore does not transport CAIs out too far for the Jupiter solution to still work as an explanation for the relatively high CAI content of carbonaceous chondrites.
### A gravitationally stable model
Up until now, we have, through the choice of values for the molecular cloud rotation rate and temperature, assumed that so much mass is deposited onto the protoplanetary disk that it becomes gravitationally unstable. It is however possible that the Sun formed in a region of high-mass star formation, where temperatures tend to be higher and strong magnetic fields can slow down the cloud rotation rate through magnetic braking. This leads to clouds with lower angular momentum, and therefore also smaller centrifugal radii during disk formation. In such a situation it is possible that the disk never becomes gravitationally unstable. To see how this affects disk (and CAI) evolution, we ran a model with lower \(\Omega_{\rm c}\), keeping our other disk param
Figure 31: Final abundances when varying Saturn’s formation time. Later formation allows additional CAIs to drift towards Jupiter.
Figure 33: Final CAI abundance profile after 5 Myr with variations of \(\alpha_{\rm peak}\), which controls the depth of the planet gap. Higher values increase the gradient in the gas surface density and hence lead to a stronger pressure gradient pushing CAIs back towards the pressure bump. This reduces the amount of CAIs passing through and thus lowers the CAI abundance in the region in between the two planets. A rapid transition seems to occur between \(\alpha_{\rm peak}=1\times 10^{-4}\) and \(2\times 10^{-4}\).
Figure 32: Final CAI abundance profile after 5 Myr with variations of Saturn’s location of formation \(a_{\rm planet}\). The planet’s Hill radius shrinks when it moves in towards smaller heliocentric distances, reducing the width of the resulting planet gap. It then becomes easier for turbulent motions in the gas to propel CAIs through the gap into the region between Jupiter and Saturn. If Saturn formed at 5.6 AU (at least for the fixed value of \(\alpha_{\rm peak}=10^{-4}\)), it fails to trap many CAIs at all, and the result greatly resembles the one-planet case.
We found that \(Q_{\rm Toomre}\) drops below 2 in at least some part of the disk for rotation rates \(\Omega_{\rm c}\) of at least \(2\times 10^{-15}\) rad s\({}^{-1}\), and thus picked a value of \(\Omega_{\rm c}=1\times 10^{-15}\) rad s\({}^{-1}\) for this new simulation. Since no artificially increased \(\alpha\)-parameter is required in this model, we use the parameterization from Equations (36-38) from the start.
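For reference, the gravitational stability check referred to here is the Toomre criterion. The snippet below is an illustrative sketch only (it is not the paper's code, and the surface density and temperature values are placeholders rather than outputs of the disk model); it evaluates \(Q=c_{\rm s}\Omega/(\pi G\Sigma)\) for a Keplerian disk, with larger \(Q\) corresponding to a more stable disk.

```python
# Illustrative sketch (not the paper's code): Toomre stability parameter
# Q = c_s * Omega / (pi * G * Sigma) for a Keplerian disk around a 1 M_sun star.
# Q well above ~1-2 everywhere indicates a gravitationally stable disk.
import numpy as np

G, M_sun, AU = 6.674e-11, 1.989e30, 1.496e11      # SI units
k_B, m_H, mu = 1.381e-23, 1.673e-27, 2.3          # Boltzmann const., H mass, mean mol. weight

def toomre_Q(r_au, sigma_gas, T):
    """Toomre Q at radius r_au [AU], gas surface density sigma_gas [kg m^-2], temperature T [K]."""
    r = r_au * AU
    omega = np.sqrt(G * M_sun / r**3)             # Keplerian angular frequency
    c_s = np.sqrt(k_B * T / (mu * m_H))           # isothermal sound speed
    return c_s * omega / (np.pi * G * sigma_gas)

# purely illustrative values for an inner-disk annulus:
print(f"Q = {toomre_Q(1.0, sigma_gas=1.0e4, T=300.0):.1f}")
```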
Figure 34 shows the resulting time evolution of the gas in this disk model. Comparing this to Figure 7, there are several notable differences. First of all, the disk now only starts forming after 0.28 Myr, as the centrifugal radius grows more slowly and requires more time to even reach the inner disk edge at 0.06 AU. Because the infall phase still ends after 0.4 Myr, disk formation now occurs in a shorter time span. Secondly, the radial extent of the disk is much smaller. At the end of infall, the centrifugal radius is no larger than 0.18 AU, and without the gravitational instability pushing material outward, a sharp drop in the surface density can be observed near \(r=10\) AU. Although this disk ends up with less total mass than in the gravitationally unstable case, this mass is much more concentrated in the inner disk. Therefore the third major difference is that surface densities in the inner disk are now 1-2 orders of magnitude higher. Since this increases the viscous heating rate, the radial extent of the CAI Factory has also increased, though only slightly, to 1 AU.
Figure 35 shows the evolution of the CAI abundance profile in this model. The denser gas slows down inward radial drift of CAIs, and they remain spread out fairly evenly between the pressure bump and \(r=10\) AU, beyond which virtually no CAIs are present. The peak abundance value of nearly 3% is naturally lower in this result than in the gravitationally unstable case, and in line with observed meteoritic abundances of a couple percent. In order to achieve a result where the inner disk drains of CAIs, however, we had to increase the planet gap depth by increasing the value of \(\alpha_{\rm peak}\) in Equation (42) from 0.01 to 0.1. Without this adjustment, there is sufficient leakage of CAIs through the planet gap that the abundance on both sides remains almost the same.
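The statement that denser gas slows the inward drift can be illustrated with the standard drag formulae. The sketch below is a rough, hedged illustration (it is not the drift prescription actually used in the model; the grain size, solid density and headwind speed are placeholder values): in the Epstein regime the Stokes number scales as \({\rm St}\propto\rho_{\rm s}a/\Sigma_{\rm g}\), and the drift speed scales as \({\rm St}/(1+{\rm St}^{2})\), so a higher gas surface density yields slower drift for small grains.

```python
# Rough illustration (not the paper's drift prescription): Epstein-regime Stokes
# number St = pi * rho_s * a / (2 * Sigma_g) and the standard radial-drift scaling
# |v_r| = 2 * eta * v_K * St / (1 + St^2), showing why denser gas slows inward drift.
import numpy as np

def stokes_number(grain_radius, rho_solid, sigma_gas):
    return np.pi * rho_solid * grain_radius / (2.0 * sigma_gas)

def drift_speed(st, eta_vk):
    return 2.0 * eta_vk * st / (1.0 + st**2)

a_cai, rho_cai = 2.5e-4, 3000.0   # 0.25 mm grain radius [m], solid density [kg/m^3] -- placeholders
eta_vk = 30.0                     # eta * v_K headwind scale [m/s] -- placeholder

for sigma_g in (10.0, 100.0, 1000.0):                     # gas surface density [kg/m^2]
    st = stokes_number(a_cai, rho_cai, sigma_g)
    print(f"Sigma_g = {sigma_g:6.0f} kg/m^2  St = {st:.1e}  |v_drift| ~ {drift_speed(st, eta_vk):.2f} m/s")
```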
As Figure 36 shows, there is only a narrow range of values for \(\Omega_{\rm c}\) for which the final abundance profile resembles that in Figure 35. Increasing the rotation rate by a factor two leads to a situation in which the gravitational instability kicks in again, while the trapping effect rapidly becomes ineffective for lower values. When \(\Omega_{\rm c}=6\times 10^{-16}\) rad s\({}^{-1}\), the centrifugal radius barely exceeds the disk inner edge anymore, and very few CAIs are left in the disk to be trapped. We did not explore the effects of moving the disk inner edge closer to the Sun.
The final result we'll show for our gravitationally stable model is a parameter search over the planet formation location \(a_{\rm planet}\). The smaller radial extent of this disk increases the significance of this parameter considerably.
Figure 34: Viscous evolution of the gas surface density \(\Sigma_{g}\) for a model with a reduced cloud rotation rate and no gravitational instability.
Figure 35: Time evolution of the CAI abundance throughout the disk for a model with a reduced cloud rotation rate and no gravitational instability.
Figure 36: Abundances after 5 Myr for different values of the cloud rotation rate \(\Omega_{\rm c}\). There is only a narrow range of values for which the abundance profile agrees with meteoritic observations without triggering the gravitational instability.
Figure 37: Abundances after 5 Myr for different values of the planet formation location \(a_{\rm planet}\). For formation locations at greater radial distances than 3 AU, the CAI abundance behind the planet gap rapidly decreases, while it increases in the inner disk.
As Figure 37 shows, moving Jupiter's formation location out any further than 3 AU will rapidly decrease the CAI abundance in the pressure bump and increase the abundance in the inner disk, as not enough CAIs get trapped behind its orbit.
## 4 Discussion
The main science question we set out to answer was whether the DKA18 solution to the CAI storage problem still works when the effects of the infall phase from a parent molecular cloud core are included into the model. In a nutshell, we can answer this question positively. But while Jupiter's planet gap does serve as an effective barrier preventing CAIs from drifting into the inner solar system and eventually vanishing into the Sun, there are also some important differences between the models.
### Main model
The main focus of our work has been on a model in which the solar protoplanetary disk becomes gravitationally unstable during the infall phase, as we found that this situation arises for a wide range of input parameters. The first new result that we found in this model was that CAIs are only created during the infall phase, when the disk is still building up. During this phase, the viscosity in the disk is so strong that viscous heating alone will cause an extended region in the inner disk to reach a temperature of 1400 K. This has a very important consequence: the rapid viscous expansion of the gas during the infall phase drags the various dust species along with it, spreading them out much further than would be achieved by turbulent diffusion or meridional flow in an already established disk. In the DKA18 model, there are virtually no CAIs present at a heliocentric distance of 10 AU, while we find them to exist even a hundred times further out than that.
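As a rough consistency check on the viscous-heating argument, one can estimate the temperature set by viscous dissipation alone. The sketch below is a back-of-the-envelope illustration only (it is not the paper's temperature model: it uses the standard effective-temperature relation \(\sigma_{\rm SB}T_{\rm eff}^{4}=\tfrac{9}{8}\nu\Sigma\Omega^{2}\) with an \(\alpha\)-viscosity, neglects irradiation and the optical-depth correction that raises the midplane temperature above \(T_{\rm eff}\), and all input values are placeholders).

```python
# Back-of-the-envelope sketch (not the paper's temperature model): effective
# temperature from viscous dissipation, sigma_SB * T_eff^4 = (9/8) * nu * Sigma * Omega^2,
# with an alpha-viscosity nu = alpha * c_s^2 / Omega.  All input values are placeholders.
import numpy as np

G, M_sun, AU = 6.674e-11, 1.989e30, 1.496e11
k_B, m_H, mu, sigma_SB = 1.381e-23, 1.673e-27, 2.3, 5.670e-8

def t_eff_viscous(r_au, sigma_gas, T_guess, alpha):
    r = r_au * AU
    omega = np.sqrt(G * M_sun / r**3)
    c_s = np.sqrt(k_B * T_guess / (mu * m_H))
    nu = alpha * c_s**2 / omega                     # alpha-viscosity
    return ((9.0 / 8.0) * nu * sigma_gas * omega**2 / sigma_SB) ** 0.25

print(f"T_eff ~ {t_eff_viscous(0.5, sigma_gas=1.0e5, T_guess=1400.0, alpha=0.05):.0f} K")
```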
The fact that CAIs are transported out so far into the outer disk leads to another important result. The required time for CAIs to drift all the way back in towards the Sun is now several million years, instead of the several tens of thousands of years it would take them to vanish from the disk when they are only found in the innermost part. The presence of CAIs in the disk at the time when meteorite parent bodies formed is thus a natural consequence of the viscous spreading of the infall phase. This solves the second part of the CAI storage problem without needing to invoke a planet.
We briefly considered the possibility that the first part of the CAI storage problem might also be solved without Jupiter trapping any dust particles, due to the pile-up of CAIs creating a distinct peak in the outer disk that slowly drifts in. This second peak forms regardless of whether a planet exists in the model or not. However, without a planet in the model to keep CAIs in place, the inner disk will continuously be seeded with new dust grains drifting in, and the CAI abundance remains very high in the inner disk. So while Jupiter is not needed to explain the presence of CAIs beyond its orbit, it is still required in order to let the inner disk drain successfully.
Another important difference between our model and the DKA18 model is the width of the abundance peak at the end of the simulation. In the DKA18 model, this peak extends to roughly 4 AU. In our model, it extends all the way out to 100 AU, although it must be noted that this is not entirely because of Jupiter's pressure bump. The inward radial drift of CAIs is still ongoing after 5 Myr and would presumably continue if the disk itself did not eventually dissipate.
This model therefore predicts that CAIs should also be present in objects originating from far beyond the orbit of Jupiter, such as Kuiper Belt objects.
A final important difference we found is that in our model, the CAI abundance is significantly higher in the particle trap than in both the DKA18 model and real meteorites. While this seems problematic, the overall CAI abundance could be lowered by different parameter choices, most notably for the molecular cloud parameters, or by perhaps considering more realistic size distributions of CAIs.
There are also some notable similarities between our model and that of Morbidelli et al (2022). Their model also begins with infall from a molecular cloud core. The early, high-viscosity disk undergoes rapid viscous expansion as the radial velocity of gas is positive beyond the centrifugal radius, efficiently transporting dust outward. As in our case, the disk then evolves into a low-viscosity accretion disk as infall ends. Their model then manages to predict the contemporaneous formation through the streaming instability of two isotopically distinct planetesimal groups, one at the snowline around 5 AU and another at the silicate sublimation line around 1 AU. These groups correspond to the parent bodies of CC and NC meteorites, respectively. The difference in composition is caused by a change over time in the composition of the molecular cloud core (which we did not include in our model). Planetesimals at the snowline incorporate mostly material that accreted onto the disk early on, transported outward during the expansion phase, while later infalling material changed the overall composition of the dust in the inner disk. They note, but did not model, that a barrier against dust drift, likely in the form of Jupiter's formation, is still required to prevent the disk from homogenizing before later planetesimal formation is complete. CAIs in their model are assumed to have condensed from early infalling material.
### Gravitationally stable model
An argument against CAIs spreading out into the (far) outer disk early on is the observed lack of CAIs in CI chondrites. Only one CAI has been found in a CI chondrite (Frank et al., 2011). CI chondrite parent bodies are thought to have formed after 3-4 Myr at heliocentric distances \(r\geq 15\) AU (Desch et al., 2018), implying that CAIs had not reached these distances in significant quantities yet at this time. This observational constraint can be satisfied by not triggering the gravitational instability, which requires a molecular cloud with less angular momentum than in our main model. This can be achieved by considering models with lower cloud rotation rates and/or higher temperatures. These conditions can be achieved in high-mass star formation regions, where cloud cores with strong magnetic fields can slow down their rotation through magnetic braking (Wurster & Li, 2018). There is evidence that the Sun may have formed in such an environment (Hester & Desch, 2005). By using a smaller cloud rotation rate, we were able to produce a gravitationally stable model in which CAIs do remain trapped behind the planet gap, but don't extend out as far as 15 AU.7 However, while this matches the observation that (almost) no CAIs are found in CI chondrites, it does not explain why they are found in comets that formed in the same region but at later times. Our current models only predict that CAIs were transported to this region early (with gravitational instability) or not at all (without it).
Footnote 7: We did not explore different combinations of the cloud rotation rate and temperature that might yield similar results.
### Suggested improvements
There are several ways in which the model could be further improved. One would be to check the results of the model against a full hydrodynamic code.
The midplane temperature calculation we made use of could be improved upon. The model used for the stellar luminosity (Baraffe et al. 2002) is not really reliable for the earliest phases of the simulation, where an accurate temperature calculation is arguably the most important. We also assumed a constant opacity throughout the disk, instead of calculating it from the available dust components. A more precise temperature model might influence when and where exactly the CAI Factory is active.
We have seen that, at least for the \(\alpha\)-parameterization we employed, the disk in our main model becomes gravitationally unstable during the infall phase. This can lead to the emergence of overdense clumps and spiral arms which transport angular momentum outward, and these effects can (at least crudely) be mimicked by artificially increasing the \(\alpha\)-parameter for the viscosity. A full treatment of the gravitational instability would however require a code that can handle non-axially symmetric disk models.
The way in which the planet gap is introduced into the model is also quite crude. While the width of the gap grows with the planet's mass, its depth does not. This is an important parameter however, because it determines how well the particle trapping effect works. A more sophisticated model could be used for the opening of the gap. Also not taken into account is the possibility that the gap itself moves over time due to migration of Jupiter.
The disk mass in our main model evolves very little after the end of the infall phase, when most of the mass resides at large heliocentric distances, but the viscosity is too weak to move much of it back in to accrete onto the Sun. This is probably not very realistic. The model of Yang and Ciesla (2012) suffered from the same issue. Perhaps an \(\alpha\)-profile could be set up in such a way that a stronger accretion rate emerges, or different mechanisms of angular momentum transport could be included, such as magnetized winds. The dissipation phase of the disk due to photoevaporation could also be included into the model. This might at the same time also impact surface densities and radial drift velocities when meteoritic parent body formation is still ongoing.
We have only modelled a single size of CAI grains simultaneously, even though CAIs exist in a size range from microns up to a centimeter. While we checked in section 3.2 how the final CAI abundances change for grains of different sizes (see Figure 28), each of these iterations still assumes that only that particular size of grain exists. A more realistic model would include a more complete sample of the CAI size distribution. This could be complicated to achieve however, because it is not clear whether CAIs of different sizes are created at the same time or with the same efficiency. It could equally well be that mostly large CAIs are created in the CAI Factory, with smaller CAIs being the result of later fragmentation in the disk.
The effects of photoevaporation could be especially important in high-mass star formation environments with a high FUV flux, as this could have a significant impact on the disk evolution, for example by truncating the disk through rapid mass loss at the outer edge. It is also a possibility that mass loss due to photoevaporation might keep the disk gravitationally stable for higher values of the cloud rotation rate \(\Omega_{\rm c}\).
Finally, something that has also not been explored in this project is the possibility of a non-homogeneous distribution of matter in the molecular cloud, or a composition that changes over time.
## 5 Conclusion
The model we built shows that the solution that Desch et al (2018) proposed for the CAI storage problem, in which a pressure maximum created by Jupiter opening a gap in the disk traps CAIs in place, also works when taking into account that the solar protoplanetary disk formed out of a collapsing molecular cloud core. We find that CAIs are created during the infall phase and are then very efficiently transported outward by the combined effects of advection by the rapidly expanding gas, redistribution of matter due to possible gravitational instability and turbulent diffusion.
Our main focus was on a disk model massive enough to become gravitationally unstable. In this case, subsequent inward radial drift creates a double peak structure in the CAI abundance. As well as piling up in Jupiter's pressure bump, CAIs in the far outer disk start piling up when they drift into a region of higher gas surface density, decreasing their Stokes number and slowing them down. The two abundance peaks keep growing over the course of the simulation until eventually merging together, forming a broad (\(\sim\)100 AU) region with elevated CAI abundances starting just behind Jupiter, where carbonaceous chondrites are then assumed to have formed. An interesting result from this extended period of inward radial drift is that the presence of CAIs in the disk after 4-5 Myr is a natural consequence of the infall phase, and does not actually require Jupiter as an explanation. In the meantime, the inner disk drains of CAIs as they drift into the Sun, leaving a much lower level of abundances in the region where ordinary and enstatite chondrites then form.
For input parameters that do not lead to the gravitational instability, the final CAI abundance profile in the disk looks qualitatively similar. The abundance peak is less wide however, as the disk is naturally smaller, and no double peak structure is observed.
We find that the results of our model do not strongly depend on most parameter choices. This applies even to some parameters that the DKA18 model is more sensitive to, such as (in the gravitationally unstable case) the time and location where Jupiter forms. The molecular cloud properties seem to have the largest quantitative impact on the CAI abundance. More importantly, the general shape of the CAI abundance profile is not disrupted by most parameter variations, so the Jupiter solution works for any sufficiently large dust particle.
The presence of multiple planet gaps due to the other gas planets in our Solar System is not necessarily an impediment for the CAIs to drift back in towards Jupiter, as we showed that there are at least reasonable parameter choices that cause additional planet gaps to let through significant amounts of CAIs.
Quantitatively, the CAI abundances our model predicts are quite uncertain, not only due to parameter uncertainty, but also due to simplifications in the model, such as the single CAI grain size. Qualitatively however, we are confident that the described results are robust.
###### Acknowledgements.
We thank the referee, Steve Desch, for an extensive and very valuable referee report which greatly helped us to improve the paper.
|
2310.00119 | Fewshot learning on global multimodal embeddings for earth observation
tasks | In this work we pretrain a CLIP/ViT based model using three different
modalities of satellite imagery across five AOIs covering over ~10\% of Earth's
total landmass, namely Sentinel 2 RGB optical imagery, Sentinel 1 SAR radar
amplitude and interferometric coherence. This model uses $\sim 250$ M
parameters. Then, we use the embeddings produced for each modality with a
classical machine learning method to attempt different downstream tasks for
earth observation related to vegetation, built up surface, croplands and
permanent water. We consistently show how we reduce the need for labeled data
by 99\%, so that with ~200-500 randomly selected labeled examples (around
4K-10K km$^2$) we reach performance levels analogous to those achieved with the
full labeled datasets (about 150K image chips or 3M km$^2$ in each area of
interest - AOI) on all modalities, AOIs and downstream tasks. This leads us to
think that the model has captured significant earth features useful in a wide
variety of scenarios. To enhance our model's usability in practice, its
architecture allows inference in contexts with missing modalities and even
missing channels within each modality. Additionally, we visually show that this
embedding space, obtained with no labels, is sensitive to the different earth
features represented by the labelled datasets we selected. | Matt Allen, Francisco Dorr, Joseph A. Gallego-Mejia, Laura Martínez-Ferrer, Anna Jungbluth, Freddie Kalaitzis, Raúl Ramos-Pollán | 2023-09-29T20:15:52Z | http://arxiv.org/abs/2310.00119v2 | # Fewshot learning on global multimodal embeddings for earth observation tasks
###### Abstract
In this work we pretrain a CLIP/ViT based model using three different modalities of satellite imagery across five AOIs covering over 10% of Earth's total landmass, namely Sentinel 2 RGB optical imagery, Sentinel 1 SAR radar amplitude and interferometric coherence. This model uses \(\sim 250\) M parameters. Then, we use the embeddings produced for each modality with a classical machine learning method to attempt different downstream tasks for earth observation related to vegetation, built up surface, croplands and permanent water. We consistently show how we reduce the need for labeled data by 99%, so that with 200-500 randomly selected labeled examples (around 4K-10K km\({}^{2}\)) we reach performance levels analogous to those achieved with the full labeled datasets (about 150K image chips or 3M km\({}^{2}\) in each area of interest - AOI) on all modalities, AOIs and downstream tasks. This leads us to think that the model has captured significant earth features useful in a wide variety of scenarios. To enhance our model's usability in practice, its architecture allows inference in contexts with missing modalities and even missing channels within each modality. Additionally, we visually show that this embedding space, obtained with no labels, is sensitive to the different earth features represented by the labelled datasets we selected.
## 1 Introduction
Earth Observation (EO) has made remarkable progress with the rise of deep learning (DL) methods in recent years, and this potential is fueled by the ever increasing availability of satellite imagery [1]. According to the UCS Satellite Database1, as of May 2022 there were 470 optical satellites in orbit, 102 radar satellites and 62 tagged as producing some form of imaging (hyperspectral, multispectral),
among others. Within ESA's Sentinel missions alone, 80 PB of user-level data were downloaded during 2021 [2].
However, it is well known that DL methods are overwhelmingly hungry for labeled data, and one of the main hurdles to effectively exploit DL methods in EO is its endemic scarcity [3]. Even if new EO annotated datasets are published regularly, the time and effort involved cannot keep up with the amount of new data produced by present and future orbiting missions. It is in this context that methods that can significantly reduce the need for labeled data become valuable. See Section 2.
CLIP [4] is an SSL method that contrastively learns how to align the representations of image/text pairs collected from the Internet, allowing it to deal with different modalities of the same reality within the same embedding space for a variety of multi-modal applications including detection, captioning, VQA and conditional image generation among others. CLIP's architecture, heavily based on Vision Transformers (ViT) [5], has been applied to merge multimodal representations beyond text and in different domains [6], [7], [8]. CLIP-like models are also starting to appear in satellite imagery under different scenarios, for temporal and multi-spectral imagery [9], temporal multimodal pixel-wise modelling [10], or a multispectral contrastive model for Landsat imagery [11].
In this work we build a three-tower CLIP architecture, feed it with Sentinel-2 RGB optical imagery, Sentinel-1 amplitude and Sentinel-1 interferometric coherence, and use the produced embeddings in several downstream classification tasks representing different earth features. We show how, in this representation space, a small fraction of data is enough to obtain full-dataset level performance, reducing by two orders of magnitude the need for labeled data.
This paper is structured as follows. Section 2 discusses related previous works. Section 3 describes the Areas of Interest (AOI) used, the input imagery and downstream labels. Section 4 describes the SSL architecture and training procedure, together with the downstream task. Section 5 shows results and visualization and in Section 6 we draw some conclusions.
## 2 Previous works
A number of approaches have been developed to address the labeled data scarcity challenge, including a variety of methods under self supervised learning (SSL) [3] and weakly supervised methods [12], among others, more often than not blended into foundation models [13]. Weakly supervised methods consider a range of scenarios where labels are noisy, incomplete, inexact or inaccurate, and have also been applied in Earth Observation [14]. For instance, the teams in the data fusion contest [15] attempt to produce fine grained semantic segmentation maps for land cover when only low resolution reference data is available. Also, in [16] a transfer learning method is used to pre-train a model over a region with large label density and then finetune it somewhere else with very few labels.
The success of foundation models in language tasks is still hard to translate to earth observation scenarios [17], but there are convincing works pre-training models with large amounts of geospatial data with the expectation that they will be useful in a wide variety of downstream tasks; see [18], [19] or [20]. Our work contributes in this direction by explicitly considering multimodality in the pretraining step while allowing downstream applications to function even if not all modalities or channels are available. Given the variability of EO data across time and geographical locations, we believe this is a key step to enhance the practical applicability of general pretrained models in EO.
## 3 Data and downstream tasks
**Input modalities.** We use three modalities for input data, taking observations during the first three months of 2020, obtained from the Sentinel-1 and Sentinel-2 ESA missions. Sentinel-1 is a Synthetic Aperture Radar (SAR) sensor, for which we use amplitude and coherence, whereas Sentinel-2 is an optical satellite. The intuition here is that both missions complement each other, offering different perspectives on the same earth features which are necessarily correlated. For each AOI (see below) we built a grid containing tiles (image chips) of size 4480m \(\times\) 4480m. For Sentinel-2 optical data (s2rgbm) we use only the three RGB channels, which we average per month (thus, 9 channels). For Sentinel-1 SAR amplitude (s1grdm) we use the vv and vh polarizations, plus their logarithmic difference, taking also their average per month (thus, 9 channels). Both Sentinel-2 and Sentinel-1 amplitude were tiled
from Google Earth Engine using geetiles2, which provides a 10m resolution and thus each chip is 448x448 pixels. We obtained Sentinel-1 interferometric coherence (gunw) from the ARIA Sentinel-1 Geocoded Unwrapped Interferograms database [21] as available through Alaska's Satellite Facility3, using sartiles4. Since interferometric coherence is built using pairs of Sentinel-1 observations, we selected the available pairs whose second observation was within 2020Q1 and whose first observation was at most 48 days before. Thus, we have a potentially variable number of channels in each tile, depending on the number of interferometric pairs which could be formed. Its resolution is around 90m per pixel, and we upsample the image chips to match the 448x448 pixel size of s1grdm and s2rgbm.
Footnote 2: [https://github.com/rramosp/geetiles](https://github.com/rramosp/geetiles)
Footnote 3: [https://asf.alaska.edu/data-sets/derived-data-sets/sentinel-1-interferograms/](https://asf.alaska.edu/data-sets/derived-data-sets/sentinel-1-interferograms/)
Footnote 4: [https://github.com/rramosp/sartiles](https://github.com/rramosp/sartiles)
Footnote 5: [https://developers.google.com/earth-engine/datasets/catalog/MODIS_006_MOD44B](https://developers.google.com/earth-engine/datasets/catalog/MODIS_006_MOD44B)
Footnote 6: [https://ghsl.jrc.ec.europa.eu/download.php?ds=bu](https://ghsl.jrc.ec.europa.eu/download.php?ds=bu)
**Areas of Interest.** We defined five AOIs covering regions in the Conterminous United States (CONUS, 167K image chips), South America (83K chips), Pakistan and India (Pakin, 147K chips), China (285K chips) and the Middle East (163K chips), as illustrated in Fig. 1. The AOIs' extent is determined by the 2020 coverage of the gunw dataset. Observe that we do geographically aware splits into train (60%), validation (20%) and test (20%) to avoid as much as possible data leakage from contiguous image chips being in different splits.
**Downstream tasks.** We selected four use cases with global label coverage so that we could experiment on ablations with an increasing number of available labels. **Vegetation estimation**: we used the MOD44B.006 Terra vegetation continuous fields yearly dataset for 2020, focusing on the tree percentage estimation at 250m per pixel5. **Built Up Surface**: the Global Human Settlement Layer Built-Up surface dataset from the Joint Research Center of the European Commission6 for 2020 at 100m per pixel. **Croplands**: the ESA World Cover 2020 class representing croplands [22] at a 10m/pixel resolution. **Permanent water**: the ESA World Cover 2020 class representing permanent water bodies [22] at a 10m/pixel resolution. The JRC dataset was downloaded from their site and tiled using sartiles, whereas the rest were downloaded and tiled from Google Earth Engine using geetiles.
For each dataset we define a binary classification task to predict the mean value per chip, thresholded on each AOI so that we get two balanced classes. Within the same task, this threshold is usually different for each AOI as they have different distributions as shown in Fig 2. So, for instance, for
Figure 1: Areas of Interest (AOIs) used in this study. Bands indicate the splits for train (yellow), validation (blue) and test (pink). In total there are 167K image chips for CONUS, 163K chips for Middle East, 147K chips for Pakistan-India, 285K chips for China and 83K chips for South America, which aggregates to 845K chips covering a surface of 16.9M km\({}^{2}\).
**vegetation percentage** we set out to predict whether an image chip has high or low vegetation, for **builtup** surface we predict whether a chip has high or low built surface, etc.
## 4 Method
### Model pretraining
We use a Self Supervised Learning approach using Vision Transformers (ViT) and a CLIP based loss architecture, where we have one ViT tower per input modality (s1grdm, s2rgbm and gunw). This architecture produces an embedding for each input image and modality, and pushes embeddings of different modalities of the same chip to be similar, and those of other chips to be different. See Figure 3. In practice we are bound to occasional unavailability of particular channels: sometimes vv or vh are not available on Sentinel 1, clouds occasionally hinder Sentinel 2, and the number of interferometric pairs formed for SAR coherence is not always the same. To cope with this, each ViT accepts a single-channel input, and we randomly select one channel of each input modality to feed each ViT at each inference request or training step. Besides enhancing the usability of the pretrained model in downstream tasks with noisy or missing data, we observed that this setup produces more robust outputs, probably due to the input noise induced by this procedure being compensated by correlations between modalities.
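A condensed PyTorch-style sketch of this training objective is given below. It is not the authors' released code: the encoder interface, the temperature value, and the batching details are assumptions, and the sketch only illustrates the structure of the pairwise CLIP-style loss averaged over the three modality pairs, with one randomly drawn channel per modality.

```python
# Sketch (assumed interface, not the authors' code) of the three-tower contrastive
# objective: one single-channel ViT encoder per modality, a randomly drawn channel
# per modality at each step, and a CLIP-style loss averaged over modality pairs.
import torch
import torch.nn.functional as F

def clip_pair_loss(za, zb, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of L2-normalised embeddings."""
    za, zb = F.normalize(za, dim=-1), F.normalize(zb, dim=-1)
    logits = za @ zb.t() / temperature                     # (B, B) similarity matrix
    targets = torch.arange(za.size(0), device=za.device)   # matching chips sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def multimodal_clip_loss(encoders, batch):
    """
    encoders: dict {modality_name: single-channel ViT encoder}
    batch:    dict {modality_name: tensor (B, C_mod, H, W)}, C_mod may differ per modality
    """
    embeddings = {}
    for name, x in batch.items():
        c = torch.randint(0, x.size(1), (1,)).item()       # random channel for this step
        embeddings[name] = encoders[name](x[:, c:c + 1])
    names = list(embeddings)
    pair_losses = [clip_pair_loss(embeddings[a], embeddings[b])
                   for i, a in enumerate(names) for b in names[i + 1:]]
    return torch.stack(pair_losses).mean()
```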
Since we tried different ViT configurations and embedding sizes, we use the train data split for training the model and the validation split to select the best self supervised model according to the CLIP loss. Test data is only used to measure the results presented here. Our final ViT configuration produces an embedding of size 768 for each input image chip in each modality, and contains 259M trainable parameters. Train data amounts to about 500K image chips, equivalent to some 10M km\({}^{2}\), and training takes about 100 hours on an Nvidia DGX architecture with 8 A100 GPUs, 250 CPU cores and 2TB RAM.
### Downstream tasks
For each of the five AOIs and each modality we train a Random Forest to binary classify whether the mean value of each measure (vegetation, built surface, croplands and permanent water) is below or above the median. We use the embeddings representation for each modality as produced by the pretrained model which have a size of 768 components. We create an additional modality by concatenating all three modalities and, thus, produce a vector of size 2304 for each image chip. We do an ablation on the size of the sample taken from the train dataset with 5, 10, 100, 250, 500, 1000, 5000, 20000 and the full dataset. The sample is random uniform across all chips on the training split within each AOI.
We therefore train Random Forests for (3+1) modalities \(\times\) 5 AOIs \(\times\) 9 training dataset sizes. We use validation data to select the overall best Random Forest configuration (50 estimators and a maximum depth of 7), and then measure the accuracy on the test data. This is the only time that test data is used. Observe that train data is used both to train the pretrained model and the downstream Random Forests. We repeat this procedure 10 times and report the mean value of each experiment set.
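The downstream ablation just described can be reproduced in a few lines with scikit-learn. The sketch below is illustrative rather than the authors' exact pipeline (variable names such as `X_train` and the sampling/seed handling are assumptions), using the Random Forest configuration reported above (50 estimators, maximum depth 7) on precomputed embeddings.

```python
# Sketch of the few-shot downstream ablation (not the authors' exact pipeline).
# X_train / X_test hold precomputed embeddings (768-d per modality, or 2304-d for
# the concatenation); y_train / y_test are the balanced binary labels per chip.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fewshot_accuracy(X_train, y_train, X_test, y_test, n_shots, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_repeats):
        idx = rng.choice(len(X_train), size=min(n_shots, len(X_train)), replace=False)
        clf = RandomForestClassifier(n_estimators=50, max_depth=7, random_state=0)
        clf.fit(X_train[idx], y_train[idx])
        scores.append(clf.score(X_test, y_test))
    return float(np.mean(scores))

# e.g. accuracy as a function of the number of labeled chips:
# for k in (5, 10, 100, 250, 500, 1000, 5000, 20000):
#     print(k, fewshot_accuracy(X_train, y_train, X_test, y_test, k))
```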
Figure 2: Distribution of labels on each downstream task and AOI shown as a quantile plot. Observe that most tiles do not contain built surface or permanent water.
## 5 Results
### Ablations
Fig. 4 shows the overall accuracy results for our experimentation sets. Recall that we are doing a balanced binary classification task, with a different threshold in each AOI to ensure this balance; thus, reporting accuracy is straightforward. Observe that different tasks have different degrees of difficulty for different modalities. It is interesting to see that, in general, s1grdm embeddings perform better than the rest. Also, concatenating the modalities' embeddings (modsconcat in Fig. 4) seems to marginally improve overall results. We take as reference the accuracy obtained when using the full dataset, and measure how far we are from it in each experiment. Black dots in Fig. 4 show when the experiment produces an accuracy of at least 95% of the one obtained with the full labeled dataset. This always happens with fewer than 500 image chips, and most of the time with fewer than 250, considering an average training dataset size of 150K chips. This means that with only 0.3% of the train data (3 per thousand) we can attain 95% of the top performance. The standard deviation was <0.05 when we used 50 or fewer shots, and <0.01 with larger datasets, so we did not include it in Fig. 4 for clarity.
### Embeddings
Finally, Fig. 5 shows a 2D TSNE reduction of the embeddings obtained for each modality (columns), colored by the log mean value of each downstream task label before thresholding for binary classification (rows). Observe that the labels are used neither to compute the embeddings nor the 2D TSNE positions, and yet we still get clear positional patterns in which similar values of the downstream tasks cluster together. We find this significant, as it illustrates how the embeddings capture terrain features which are useful for different downstream tasks. Although somewhat subtle, observe as well how, for the same task, some modalities separate the clusters of values a bit better than others. Fig. 6 shows a couple of example images in the three modalities.
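A minimal sketch of how such a map can be produced is given below; the exact t-SNE settings and preprocessing used for Fig. 5 are not specified in the text, so the parameter values here are assumptions.

```python
# Minimal sketch of a Fig. 5-style visualisation: project embeddings to 2D with
# t-SNE and colour each chip by the (log) label of a downstream task.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embedding_map(embeddings, labels, title=""):
    xy = TSNE(n_components=2, init="pca", random_state=0).fit_transform(embeddings)
    plt.scatter(xy[:, 0], xy[:, 1], c=np.log10(labels + 1e-3), s=2, cmap="viridis")
    plt.colorbar(label="log10(label)")
    plt.title(title)
    plt.show()
```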
## 6 Conclusion
This work showed the effectiveness of multimodal models pretrained with large amounts of planetary data in reducing the number of labeled examples required in different downstream earth observation classification tasks. The reduction in the required amount of labeled data reaches 99%. We also ran our experiments with smaller pretrained ViT architectures with 11M to 100M parameters and embeddings of size 192 and 364. Although the combined CLIP loss is usually similar to the one
Figure 3: Architecture of our CLIP-based model with three input modalities and a separate ViT encoder for each modality. Similarity is measured for each pair of modalities and then averaged. Like the original CLIP, within the same batch our loss encourages different modalities of the same location to have similar encodings, and those of other locations to be different. Observe as well that our encoders are single channel, operating on whichever channel was randomly selected for each modality.
obtained with our 250M parameter / 768 encoding size model, the performance of the downstream tasks is degraded, even if it preserves the 95% relative performance described earlier. We also believe that multimodal settings such as this one allow models to leverage the complementarity or correlations of the same earth features as observed by different sensors. This leads us to plan future work with planetary-wide datasets and larger models.
## 7 Acknowledgements
This work has been enabled by Frontier Development Lab Europe ([https://fdleurope.org](https://fdleurope.org)) a public / private partnership between the European Space Agency (ESA), Trillium Technologies, the University of Oxford and leaders in commercial AI supported by Google Cloud and Nvidia, developing open science for all Humankind. L.M-F. was supported by the European Research Council (ERC) Synergy Grant "Understanding and Modelling the Earth System with Machine Learning (USMILE)" under the Horizon 2020 research and innovation programme (Grant agreement No. 855187). M. J. A. was supported by the UKRI Centre for Doctoral Training in Application of Artificial Intelligence to the study of Environmental Risks [EP/S022961/1], and additionally by Trinity Hall, Cambridge. We are also indebted to Nicolas Longepe, Carlos Lopez-Martinez, Fabio A. Gonzalez Osorio, Samuel Brancroft, Emma Hatton, Alison Lowndes, Alistair Francis, Ioanna Bouri and the rest of reviewers during 2023 FDL-Europe sprint.
Figure 4: Accuracy on binary classification for each downstream task, AOI and modality. The x-axis is non-linear and represents the number of image chip embeddings used to train a model. Dots signal the minimum number of training image chips with which 95% of the top accuracy for each task is achieved. Observe that in the vast majority of cases, with fewer than 250 labeled image chips we can achieve at least 95% of the accuracy obtained with the full training dataset of labeled images. Training dataset sizes range from 50K in South America to 171K in China (60% of the total image chips in each AOI). Accuracy is measured on the full test split (20% of data).
Figure 5: Embeddings for each AOI and modality projected to a TSNE 2D space for visualization and colored with each downstream task label. Each dot corresponds to one image chip projected to this space. These embeddings are trained and computed unsupervisedly, with no label information, and yet they are sensitive to the different land features represented by each downstream task. The scale is logarithmic to better appreciate the value ranges of the labels.
Figure 6: Location in the s1grdm 2D TSNE embedding space for CONUS of two sample image chips with different vegetation percentage values. The different colouring from Fig. 5 simply signals a different experimental run. |
2303.00021 | Quantum equilibration and measurements -- bounds on speeds, Lyapunov
exponents, and transport coefficients obtained from the uncertainty relations
and their comparison with experimental data | We discuss our recent study of local quantum mechanical uncertainty relations
in quantum many body systems. These lead to fundamental bounds for quantities
such as the speed, acceleration, relaxation times, spatial gradients and the
Lyapunov exponents. We additionally obtain bounds on various transport
coefficients like the viscosity, the diffusion constant, and the thermal
conductivity. Some of these bounds are related to earlier conjectures, such as
the bound on chaos by Maldacena, Shenker and Stanford while others are new. Our
approach is a direct way of obtaining exact bounds in fairly general settings.
We employ uncertainty relations for local quantities from which we strip off
irrelevant terms as much as possible, thereby removing non-local terms. To
gauge the utility of our bounds, we briefly compare their numerical values with
typical values available from experimental data. In various cases, approximate
simplified variants of the bounds that we obtain can become fairly tight, i.e.,
comparable to experimental values. These considerations lead to a minimal time
for thermal equilibrium to be achieved. Building on a conjectured relation
between quantum measurements and equilibration, our bounds, far more
speculatively, suggest a minimal time scale for measurements to stabilize to
equilibrium values. | Saurish Chakrabarty, Zohar Nussinov | 2023-02-28T19:00:27Z | http://arxiv.org/abs/2303.00021v1 | Quantum equilibration and measurements - bounds on speeds, Lyapunov exponents, and transport coefficients obtained from the uncertainty relations and their comparison with experimental data
###### Abstract
We discuss our recent study of _local quantum mechanical uncertainty relations_ in quantum many body systems. These lead to fundamental bounds for quantities such as the speed, acceleration, relaxation times, spatial gradients and the Lyapunov exponents. We additionally obtain bounds on various transport coefficients like the viscosity, the diffusion constant, and the thermal conductivity. Some of these bounds are related to earlier conjectures, such as the bound on chaos by Maldacena, Shenker and Stanford while others are new. Our approach is a direct way of obtaining exact bounds in fairly general settings. We employ uncertainty relations for local quantities from which we strip off irrelevant terms as much as possible, thereby removing non-local terms. To gauge the utility of our bounds, we briefly compare their numerical values with typical values available from experimental data. In various cases, approximate simplified variants of the bounds that we obtain can become fairly tight, \(i.e.\), comparable to experimental values. These considerations lead to a minimal time for thermal equilibrium to be achieved. Building on a _conjectured relation between quantum measurements and equilibration_, our bounds, far more speculatively, suggest a minimal time scale for measurements to stabilize to equilibrium values.
## 1 Introduction
In this work, we summarize our recent findings discussed in Refs. [12, 10] and briefly compare rigorous bounds on physical quantities that we obtained using our approach with experimental data. A large number of conjectured bounds on physical quantities have been advanced. These include an upper bound on the Lyapunov exponent [8], a lower bound on various lifetimes and relaxation rates [1, 5, 9, 11], a lower bound on the viscosity [4, 11, 17, 19], a lower bound on the ratio of shear viscosity and entropy density [6], and many others. It is notable that early works by Eyring [2, 3] and other pioneers on chemical reaction rates and intuitive proposed extensions implicitly suggest similar inequalities (although these have not been proposed as fundamental bounds). Our primary goal is to rigorously derive such bounds in broad settings using local variants of the quantum mechanical uncertainty relations.
## 2 Bounds from local uncertainty relations in many body systems
We consider a macroscopic system \(\Lambda\) of \(N_{\Lambda}\) particles, with a density matrix \(\rho_{\Lambda}\), whose dynamics is governed by the time independent Hamiltonian \(H_{\Lambda}\). The rate of change of an arbitrary local operator \(Q_{i}^{H}\) in the Heisenberg picture is \(\frac{dQ_{i}^{H}}{dt}=\frac{i}{\hbar}\left[H_{\Lambda},Q_{i}^{H}\right]\). The subscript \(i\) can be thought of as a particle index. We note that we can replace \(H_{\Lambda}\) in the above expression by the local Heisenberg picture Hamiltonian \(\tilde{H}_{i}^{H}\) which represents only the portion of \(H_{\Lambda}\) containing terms that do not commute with our chosen local operator \(Q_{i}^{H}\). With this, \(\frac{dQ_{i}^{H}}{dt}=\frac{i}{\hbar}\left[\tilde{H}_{i}^{H},Q_{i}^{H}\right]\). Next, we use the textbook type quantum uncertainty relation which is trivially provable to be valid (via, e.g., the use of
Cauchy-Schwarz inequalities for Hilbert-Schmidt (trace) type inner products satisfying the inner product positive semi-definite property \(\left(\mathrm{Tr}(\rho_{\Lambda}A^{\dagger}A)\geq 0\right)\) associated with the density matrix \(\rho_{\Lambda}\) providing the expectation values) for general mixed states, \(\sigma_{A}\,\sigma_{B}\geq\frac{1}{2}\left|\left\langle\left[A,B\right]\right\rangle\right|\). Here, \(A\) and \(B\) are any two operators, \(\sigma_{A}^{2}=\left\langle\left(A-\left\langle A\right\rangle\right)^{2}\right\rangle\), and \(\left\langle A\right\rangle\equiv\mathrm{Tr}\left(\rho_{\Lambda}A\right)\). Using this, \(\left|\left\langle\frac{dQ_{i}^{H}}{dt}\right\rangle\right|\leq\frac{2}{\hbar}\sigma_{\tilde{H}_{i}^{H}}\sigma_{Q_{i}^{H}}\). Now we focus on the value of \(\sigma_{\tilde{H}_{i}^{H}}^{2}\) when averaged over the entire system and consider the particular case of \(\rho_{\Lambda}\) describing a macroscopic thermal system at a temperature \(T\) for which the variances may be evaluated. For a translationally invariant system in thermal equilibrium, the variance \(\left(\sigma_{\tilde{H}_{i}^{H}}\right)^{2}\equiv k_{B}T^{2}C_{v,i}\) (defining an effective local heat capacity \(C_{v,i}\)) assumes the same value for each \(i\). (The energy variance of the full many body Hamiltonian \(H_{\Lambda}\) is given by \(k_{B}T^{2}C_{v}^{\left(\Lambda\right)}\) with \(C_{v}^{\left(\Lambda\right)}\) the heat capacity of the global system \(\Lambda\).) Putting everything together,
\[\overline{\left(\frac{\left\langle\frac{dQ_{i}^{H}}{dt}\right\rangle^{2}}{\sigma_{Q_{i}^{H}}^{2}}\right)}\leq\frac{4k_{B}T^{2}C_{v,i}}{\hbar^{2}}, \tag{1}\]
where \(\overline{X}\equiv\frac{1}{N_{\Lambda}}\sum\limits_{i=1}^{N_{\Lambda}}X_{i}\). Even though the right hand side of Eq. 1 is independent of the spatial index \(i\), we have kept it to underscore that \(C_{v,i}\) is an effective _local_ heat capacity.
### Upper bound on the relaxation rate (Lower bound on the relaxation time)
The left hand side of Eq. 1 is, dimensionally, the square of the relaxation rate associated with the operator \(Q_{i}^{H}\). This leads to a bound on the relaxation rate,
\[\tau_{Q}^{-1}\leq\frac{2T\sqrt{k_{B}C_{v,i}}}{\hbar}. \tag{2}\]
At high temperatures, when the equipartition theorem applies, \(i.e.\), \(C_{v,i}=\mathcal{O}(k_{B})\), this inequality becomes, \(\tau_{Q}^{-1}\leq\mathcal{O}\left(2k_{B}T/\hbar\right)\), implying that, \(\tau_{Q}\geq\mathcal{O}\left(\hbar/2k_{B}T\right)\).
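As a quick numerical illustration of this high-temperature limit (not taken from the original references; the temperatures chosen below are arbitrary), the minimal relaxation time \(\hbar/(2k_{B}T)\) can be evaluated directly:

```python
# Numerical illustration of the high-temperature bound tau_Q >= hbar / (2 k_B T).
hbar = 1.054571817e-34   # J s
k_B = 1.380649e-23       # J / K

for T in (4.0, 77.0, 300.0, 1000.0):                      # temperatures in kelvin
    print(f"T = {T:7.1f} K  ->  hbar / (2 k_B T) = {hbar / (2 * k_B * T):.2e} s")
```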
### Upper bound on particle speeds and lower bounds on particle displacements
Choosing the operator \(Q_{i}^{H}\) in the above analysis to be the \(\alpha^{\mathrm{th}}\) Euclidean component of the displacement of a particle in the system, we get, \(\overline{\left(\left\langle\frac{dr_{\alpha}^{H}}{dt}\right\rangle^{2}\middle/\sigma_{r_{\alpha}^{H}}^{2}\right)}\leq\frac{4k_{B}T^{2}C_{v,i}}{\hbar^{2}}\). Here, \(\tilde{H}_{i}^{H}=\frac{\left(p_{i\alpha}^{H}\right)^{2}}{2m}\), implying that if equipartition holds (at high temperatures), \(C_{v,i}=k_{B}/2\). If, in addition, we assume that the fluctuation of the particle positions is slowly varying, \(i.e.\), all the particles have similar values of \(\sigma_{r_{\alpha}^{H}}\), then,
\[\sqrt{\overline{\left\langle\frac{dr_{\alpha}^{H}}{dt}\right\rangle^{2}}}\leq\frac{\sqrt{2}k_{B}T\sigma_{r_{\alpha}^{H}}}{\hbar}. \tag{3}\]
A related bound for the expectation value of the square of the velocity components can also be obtained using a similar analysis. [12] Thus, at high temperatures, \(\overline{\left\langle\left(\frac{dr_{\alpha}^{H}}{dt}\right)^{2}\right\rangle}\leq\frac{2\left(k_{B}T\right)^{2}\sigma_{r_{\alpha}^{H}}^{2}}{\hbar^{2}}\). The advantage of this relation is that in the classical limit, the left hand side takes the value \(\frac{k_{B}T}{m}\), implying that the fluctuation of each component of a particle's position is bounded from below:
\[\sigma_{r_{\alpha}^{H}}^{2}\geq\frac{\hbar^{2}}{2mk_{B}T}=\frac{\lambda_{T}^{2 }}{4\pi}, \tag{4}\]
\(\lambda_{T}\) being the thermal de Broglie wavelength.
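A short numerical check of Eq. (4) is given below (the particle mass and temperature are illustrative choices, not values taken from the original references); it verifies that \(\hbar^{2}/(2mk_{B}T)\) and \(\lambda_{T}^{2}/(4\pi)\) coincide and evaluates the corresponding minimal position spread.

```python
# Numerical check of Eq. (4): the minimal position variance hbar^2 / (2 m k_B T)
# equals lambda_T^2 / (4 pi), where lambda_T = h / sqrt(2 pi m k_B T).
# The particle mass and temperature below are purely illustrative.
import numpy as np

h = 6.62607015e-34       # J s
hbar = h / (2 * np.pi)
k_B = 1.380649e-23       # J / K
u = 1.66053907e-27       # atomic mass unit, kg

m, T = 27.0 * u, 300.0   # e.g. an aluminium atom at room temperature
lambda_T = h / np.sqrt(2 * np.pi * m * k_B * T)
bound_direct = hbar**2 / (2 * m * k_B * T)
bound_via_lambda = lambda_T**2 / (4 * np.pi)
print(bound_direct, bound_via_lambda)          # the two expressions agree
print(np.sqrt(bound_direct))                   # minimal position spread in metres
```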
Other bounds that can be obtained using similar analysis are summarized in Table 1. These have been simplified using semi-classical and other arguments in order to obtain expressions devoid of specific system details.
## 3 Quantum measurements and equilibration
In Refs. [12, 10], the Eigenstate Thermalization Hypothesis, associated entropy maximization, and other considerations were applied to the measurement problem. Here, the interactions \(H_{\sf device-Q}\) between a measuring device and a local microscopic quantity \(Q\) being measured were included in the full system Hamiltonian \(H_{A}\). It was illustrated that a time average of \(Q\) (over its equilibration time set by \(\tau_{Q}\)) is given by _eigenstate expectation values_ when the interactions in \(H_{\sf device-Q}\) are appreciable. That is, inasmuch as local measurements of \(Q\) are concerned [12, 10],
\[\rho_{\sf collapse}\;\text{``}{=}\text{''}\;\rho_{\sf equil.}, \tag{5}\]
where \(\rho_{\sf collapse}\) is the density matrix associated with this short time averaged measurement and \(\rho_{\sf equil.}\) emphasizes that the latter short time average may be replaced by an average with the density matrix of the equilibrated system that includes the measurement device and the typical microscopic quantity \(Q\) being measured. Here, "\(=\)" highlights that this equality and density matrix are not associated with a bona fide "collapse" to an eigenstate of \(Q\) but rather to a time average over an interval which can be exceedingly short for a small local observable \(Q\) (see Table 1) for which equilibration may indeed typically be very rapid. Ref. [13] more recently raised a conjecture similar to the one of Eq. (5) that we earlier proposed in Refs. [12, 10].
## 4 Conclusions
Our local quantum uncertainty based bounds on the relaxation times in equilibrated quantum systems [12, 10] are intimately related to conjectured Matsubara like Planckian time scales [18] and do not hinge on the Lieb-Robinson [7] and related bounds [14] on the speed with which information may spread. These bounds may further relate to possible fundamental limits on measurement and equilibration times (a conjectured connection between measurement and equilibration was briefly reviewed). Our lower bound on the shear viscosity is closely connected to proposed bounds on the viscosity to entropy density ratio [6], and other viscosity bounds [19, 17, 16]. Our upper bound on the shear viscosity in equilibrated systems, which follows from the bound on the diffusion constant when the Stokes-Einstein relation applies, is, like others reviewed here (e.g., those on general spatial gradients of general functions), new [12]. When applied to various observables, our bound on the Lyapunov exponent is slightly tighter than the celebrated conjectured chaos bound of Ref. [8]. Furthermore, our derivation uses a definition of the Lyapunov exponent similar to that in the classical arena, which does not rely on the use of regularized Out of Time Ordered Correlators (OTOCs). When contrasted with experimental data for commonplace systems such as water and aluminum, our simplified bounds are relatively tight (see Table 1 and [12], and further comparisons for the viscosity bound in [4, 17]). A comprehensive study further contrasting some of our other bounds (both exact and their approximate simplified variants) with experimental data will be illuminating.
|
2304.00086 | Machine Learning for Economics Research: When What and How? | This article provides a curated review of selected papers published in
prominent economics journals that use machine learning (ML) tools for research
and policy analysis. The review focuses on three key questions: (1) when ML is
used in economics, (2) what ML models are commonly preferred, and (3) how they
are used for economic applications. The review highlights that ML is
particularly used to process nontraditional and unstructured data, capture
strong nonlinearity, and improve prediction accuracy. Deep learning models are
suitable for nontraditional data, whereas ensemble learning models are
preferred for traditional datasets. While traditional econometric models may
suffice for analyzing low-complexity data, the increasing complexity of
economic data due to rapid digitalization and the growing literature suggests
that ML is becoming an essential addition to the econometrician's toolbox. | Ajit Desai | 2023-03-31T19:21:56Z | http://arxiv.org/abs/2304.00086v2 | # Machine Learning for Economics Research: When What and How?
###### Abstract
This article provides a curated review of selected papers published in prominent economics journals that use machine learning (ML) tools for research and policy analysis. The review focuses on three key questions: (1) when ML is used in economics, (2) what ML models are commonly preferred, and (3) how they are used for economic applications. The review highlights that ML is particularly used to process nontraditional and unstructured data, capture strong nonlinearity, and improve prediction accuracy. Deep learning models are suitable for nontraditional data, whereas ensemble learning models are preferred for traditional datasets. While traditional econometric models may suffice for analyzing low-complexity data, the increasing complexity of economic data due to rapid digitalization and the growing literature suggests that ML is becoming an essential addition to the econometrician's toolbox.
Economics Econometrics Machine learning
_JEL Codes:_ A10, B23, C45, C55
## 1 Introduction
The economy is becoming increasingly digital, and as a result, the size and complexity of economic data are growing rapidly. This presents both opportunities and challenges for analysts who want to process and interpret this data to gain insights into economic phenomena. Machine learning (ML), which has emerged as a powerful tool for analyzing large and complex datasets across disciplines, has the potential to mitigate some of the challenges posed by the digitization of the economy for economics research and analysis. The ability of ML models to effectively process large volumes of diverse data could allow us to build more complex models. As a result, the use of ML has expanded in economics research.
The number of academic publications that use ML tools has increased significantly in the last few years, as depicted in Figure 1, which shows the number of articles published in ten leading economics journals that use ML. This trend is expected to continue as researchers explore new ways to apply ML techniques to a wide range of economic problems. Nevertheless, the suitability and applicability of these tools are not widely understood among economists and data scientists. To bridge this gap, in this article, we provide a curated review of selected papers published in prominent economics journals that employ ML tools. The objective of this review is to assist economists interested in leveraging ML tools for their research and analysis, as well as data scientists seeking to apply their skills to economic applications.2
Footnote 2: The article is motivated by the American Economic Association’s (AEA) continuing education session on Machine Learning and Big Data at the 2023 ASSA annual meeting [1].
The article aims to showcase the potential benefits of utilizing ML in economics research and policy analysis, while also offering suggestions on when, what and how to effectively apply ML models. It should be noted that the article takes a suggestive approach, rather than an explanatory one, with a particular focus on supervised learning methods commonly used for prediction problems such
as regression and classification. The article is organized into three main sections, each focusing on one key question: (1) when is ML used in economics, (2) what ML models are commonly preferred, and (3) how is ML used for economic applications. Finally, we briefly discuss the limitations of machine learning in its current state.
The key lessons of the review are summarized as follows: First, ML models are used to process nontraditional and unstructured data such as text and images, to capture strong nonlinearity that is difficult to capture using traditional econometric models, and to improve prediction accuracy, extract new information, or automate feature extraction when dealing with large but traditional datasets. ML tools are probably not useful for cases with low data complexity, where traditional econometric models will likely suffice.
Second, the choice of ML model depends on the type of application and underlying data characteristics. For example, when conducting textual analysis, the latent Dirichlet allocation (LDA), which is an ML algorithm for probabilistic topic modelling, is commonly preferred. However, deep learning models such as Transformers are also employed for handling large text or audio data. Convolutional neural network models such as ConvNext are preferred for image or video datasets. Ensemble learning models are commonly employed for traditional datasets, and causal ML models are utilized when analysis focuses on causal inference.
Third, the usefulness of ML models can be greatly improved by tailoring them to the specific application, especially when dealing with large and complex but traditional data. This approach is more effective than using off-the-shelf tools. Moreover, pre-trained models with transfer learning can be advantageous when dealing with nontraditional but limited data and deep learning.3
Footnote 3: See Appendix A for more details on LDA model, Appendix B for Transformer model, Appendix C for ConvNext model, Appendix D for ensemble learning models, and Appendix E for transfer learning.
This review highlights the potential benefits of using new tools provided by ML in various areas of economics research, but also acknowledges the challenges that need to be overcome to use ML effectively for economic analysis and research. For instance, ML models require large amounts of data and ample computational resources, which can be a limitation for some researchers because it can be difficult to obtain high-quality data, and data may be incomplete or biased. Additionally, ML models are prone to overfitting and can be challenging to interpret, which limits their utility. Moreover, most ML models do not have standard errors, and other statistical properties have not yet been well-defined, which can make it difficult to draw conclusions from the results. Therefore, caution is recommended when using ML models. Despite these limitations, our review suggests that ML is successfully employed alongside traditional econometrics tools to advance our understanding of economic systems. By combining the strengths of both fields, researchers can improve the accuracy and reliability of economic analyses to better inform policy decisions.
Figure 1: The number of publications over five years (between 2018-2022) in the leading economics journals that use ML. The data includes articles from the following ten journals: American Economic Review (AER), Econometrica, Journal of Economic Perspectives (JEP), Journal of Monetary Economics (JME), Journal of Political Economy (JPE), Journal of Econometrics (JoE), Quarterly Journal of Economics (QJE), Review of Economic Studies (RES), American Economic Journal (AJE): Macroeconomics and Microeconomics. The relevant papers are identified using the following search terms: Machine learning, Ensemble learning, Deep learning, Statistical learning, Reinforcement learning, and Natural language processing.
## 2 When ML is used in economics?
The literature suggests the following three cases where ML models could add value to economic research and analysis:
* To process non-traditional data, such as images, texts, audios, and videos.
* To capture nonlinearity which is difficult to capture using traditional models.
* To process traditional data at scale to improve prediction accuracy, extract new information, or automate feature extraction.
For instance, article [2] suggests that big data, due to its sheer size and complexity, may require more powerful manipulation tools, which ML can offer. Also, in some cases we may have more potential predictors (or features) than is appropriate for estimation, so we need to make variable selections, where ML can help. Lastly, large datasets allow for more flexible relationships than simple linear models can capture; ML techniques are handy in those cases due to their ability to model intricate and nonlinear relationships, potentially offering new insights.
Similarly, article [3] argues that ML not only provides new tools but also solves a different problem. The authors assert that ML's success is largely due to its ability to discover complex structure that was not specified in advance. They suggest that applying ML to economics requires finding relevant tasks, for instance, where the focus is on increasing prediction accuracy or uncovering generalizable patterns from complex datasets. Also, article [4] points out that the methods developed in ML have been particularly successful in big data settings, where we observe information on a large number of units, many pieces of information on each unit, or both. The authors suggest that when using ML tools for economics research and analysis, researchers should clearly articulate their goals and why certain properties of ML algorithms may or may not be important.
### ML is used for processing non-traditional data.
Non-traditional datasets, such as images, text, and audio, can be difficult to process using traditional econometric models. In such cases, ML models can be used to extract valuable information that can be incorporated into traditional models to address economic questions.
For instance, in article [5], published in QJE, the authors assess how transparency, a key feature of central bank design, affects monetary policy makers' deliberations, using an ML algorithm for probabilistic topic modelling. Similarly, in article [6], published in JME, the authors use a large news corpus and ML algorithms to investigate the role played by the media in the expectations formation process of households, and article [7], published in JoE, uses Twitter data and an ML model to measure inflation expectations. Likewise, in article [8], published in AER, the authors use satellite data to measure GDP growth in subnational and supranational regions; in [9], also published in AER, the authors employ a computer vision algorithm that measures the perceived safety of streetscapes and how strongly it is correlated with population density and household income; and in [10], published in AER, the authors use deep learning to detect emotions embedded in press conferences after the Federal Open Market Committee meeting and examine the influence of the detected emotions on financial markets.
### ML is used for capturing strong nonlinearity
ML could be useful if the data and application contain strong nonlinearity, which is hard to capture using traditional approaches. For instance, in article [11], published in QJE, the authors evaluate whether ML models can help improve judges' decisions on bail or no bail. Although the outcome is binary, this is a highly complex problem that demands processing complex data to make prudent decisions.
Similarly, in article [12], published in JME, the authors use ML to solve dynamic economic models by casting them into nonlinear regression equations. Here ML is used to deal with multicollinearity and to perform model reduction. Likewise, article [13], published in QJE, uses ML to test how effective physicians are at diagnosing heart attacks. Using a large and complex dataset available at the time of the physician's decision, they estimate the model to predict the outcome of testing to uncover potential sources of error in decisions.
### ML is used for processing traditional data at scale to improve prediction accuracy or extract new information
ML could be useful for processing large and complex but traditional data sets with many variables. In such cases, the ML models can help to 1. improve prediction accuracy, 2. extract new information or 3. automate feature extraction. For instance, in the article [14], published in AER, the authors combine a data-rich environment with an ML model to provide new estimates of time-varying expectational errors embedded in survey responses and show that the ML can be productively deployed to correct errors in human judgment and improve predictive accuracy.
Similarly, in the article [15], published in JPE, the authors measure CEO behaviour types using high-frequency, high-dimensional diary data and an ML algorithm. In [16], published in JoE, the author uses ML for fraud detection in insurance claims using unstructured data comprising inputs of varying lengths and variables with many categories. They argue that ML alleviates these challenges, which are otherwise hard to address with traditional methods. Likewise, in [17], published in RES, the authors suggest that ML-based, data-driven decision rules for consumer lending could be more accurate than examiner-based decisions.
ML is probably not useful for the cases where data complexity (which could be related to shape, size, collinearity, nonlinearity, etc.) is small, and traditional econometric models would likely suffice. However, if the data complexity increases, i.e., when dealing with big data, the value added by ML models could be higher after a certain threshold, as shown by the dotted line in Figure 2.
## 3 What ML models are commonly preferred?
With a variety of ML tools available, different models are better suited for different types of applications and data characteristics. This section will discuss which models are most effective for a given type of economic application.
### Deep learning models are used when dealing with nontraditional data
Natural language processing (or NLP) primarily relies upon processing textual data and has many applications in economics. For instance, it could be used for topic modelling or sentiment analysis. The model of choice for topic modelling to quantify text is LDA, proposed in [19]. It is an ML algorithm for probabilistic topic modelling that decomposes documents in terms of the fraction of time spent covering a variety of topics [5, 6].
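To make this concrete, the snippet below is a minimal sketch of LDA-based topic modelling using scikit-learn; the toy documents and the number of topics are our own illustrative assumptions and are not taken from the cited studies.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus of policy-style sentences (illustrative only).
docs = [
    "inflation expectations remain well anchored",
    "labour market conditions continue to tighten",
    "the committee discussed the asset purchase programme",
    "energy prices pushed headline inflation higher",
]

counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Fraction of each document devoted to each topic, as described above.
doc_topic_shares = lda.transform(counts)
print(doc_topic_shares.round(2))
```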
A review of the use of text as data and ML methods for various economics applications is presented in [20]. The use of deep learning models for NLP is evolving rapidly, and various large language models (LLMs) could be used to process text data; however, _transformer_ models [21] have proven particularly useful for efficiently extracting information from textual data [1]. For instance, in [10], the authors use a transformer to detect emotions embedded in press conferences after the Federal Open Market Committee meeting for sentiment analysis. Moreover, almost all large general-purpose LLMs, including GPT-3 and ChatGPT, are trained using the Generative Pre-trained Transformer architecture [22].
Figure 2: Schematic diagram representing the relative merits of ML and traditional econometric methods. The plot is adapted from [1, 18].
An interesting application of computer vision models in economics is using a broad set of satellite images or remote sensing data for analysis. For instance, in article [8], the authors use satellite data to measure GDP growth in subnational and supranational regions. Similarly, in [23], using satellite data and machine learning, the authors develop high-resolution predictors of household income, wealth, and poverty rates in five African countries.
A review of the literature, presenting opportunities and challenges in using satellite data and ML in economics, is documented in [24]. It concludes that such models have the potential to perform economic analysis at spatial and temporal frequencies that are an order of magnitude higher than those that are commonly available. Various deep learning models can be employed to extract useful information from images, but the _ConvNext_ model [25] is emerging as more successful in efficiently processing image datasets [1]. However, transformers can also be effectively employed for image processing [26].
### Ensemble learning models are used when dealing with traditional data
Ensemble learning models could be useful if the data size is small but includes many features and if there is collinearity or nonlinearity, which is hard to model. For instance, the article [3] compares the performance of different ML models in predicting house prices and demonstrates that the nonparametric ML algorithms, such as random forests, can do significantly better than ordinary least squares, even at moderate sample sizes and with a limited number of covariates.
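As a small illustration of this comparison, the sketch below contrasts ordinary least squares with a random forest on a simulated house-price dataset; the data-generating process and hyperparameters are invented for the example and do not reproduce the analysis in [3].

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Simulated housing covariates (e.g. size, age, rooms, distance, income) and a
# nonlinear price equation, invented purely for illustration.
X = rng.normal(size=(2000, 5))
y = 50 * np.exp(0.3 * X[:, 0]) + 10 * X[:, 1] * X[:, 2] + rng.normal(scale=5, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(),
              RandomForestRegressor(n_estimators=300, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, round(r2_score(y_te, model.predict(X_te)), 3))
```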
Similarly, article [13] uses ensemble learning models which combine gradient-boosted trees and LASSO to study physicians' decision-making and uncover potential sources of errors. Also, in [27], the authors propose generalized random forests, a method for nonparametric statistical estimation based on random forests. This can be used for three statistical tasks: nonparametric quantile regression, conditional average partial effect estimation, and heterogeneous treatment effect estimation via instrumental variables. Moreover, ensemble learning models are popular in macroeconomic prediction, as demonstrated by many articles [28, 29, 30, 31, 32].
### Causal ML models are used when the focus is on causal inference
Causal ML could be helpful when the primary objective is to make causal inferences, but the dataset is big and complex. For instance, in [33], the authors develop a nonparametric causal forest for estimating heterogeneous treatment effects that extends the random forest, an ML algorithm. They demonstrate that any type of random forest, including classification and regression forests, can be used to provide valid statistical inferences. In experimenting with these models, they find causal forests to be substantially more powerful than classical methods--especially in the presence of irrelevant covariates.
Similarly, article [34] uses ML for estimating heterogeneity in causal effects in experimental and observational studies. Their approach is tailored for applications where there may be many attributes of a unit relative to the number of units observed and where the functional form of the relationship between treatment effects and the attributes is unknown. It enables the construction of valid confidence intervals for treatment effects, even with many covariates relative to the sample size, and without sparsity assumptions. The applicability of these methods is demonstrated in [35] for predicting treatment heterogeneity in a summer jobs application.
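To illustrate the general idea of heterogeneous treatment effect estimation, the sketch below uses a deliberately simple two-model ("T-learner") baseline built from off-the-shelf random forests on simulated data; note that this is not the causal or generalized random forest of [33, 27], and the data-generating process is invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 4))                  # unit attributes
T = rng.integers(0, 2, size=5000)               # randomized treatment assignment
tau = 2.0 * (X[:, 0] > 0)                       # true effect varies with the first attribute
Y = X[:, 1] + tau * T + rng.normal(size=5000)   # observed outcome

# Fit separate outcome models on treated and control units, then take the
# difference of their predictions as the estimated conditional treatment effect.
m1 = RandomForestRegressor(random_state=0).fit(X[T == 1], Y[T == 1])
m0 = RandomForestRegressor(random_state=0).fit(X[T == 0], Y[T == 0])
cate_hat = m1.predict(X) - m0.predict(X)
print(cate_hat[X[:, 0] > 0].mean(), cate_hat[X[:, 0] <= 0].mean())
```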
## 4 How ML is used for economic applications
Pre-trained models and transfer learning are recommended, especially when using deep learning models [1]. Deep-learning-based approaches are state-of-the-art for processing non-traditional data; however, there are many difficulties in using them for economic applications. For instance, large datasets and ample computational resources are often necessary to train these models, both of which are scarce in many economic applications. Moreover, these models are notoriously convoluted, and many economics researchers who would benefit from using these methods lack the technical background to implement them from scratch [36]. Therefore, transfer learning, i.e., adapting large models pre-trained on a similar application (for instance, large language or computer vision models), can be a practical route for economic applications.
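As a minimal sketch of this transfer learning workflow, the example below applies a pre-trained sentiment classifier from the Hugging Face transformers library to short policy-style statements; the default model choice and the example sentences are our own assumptions, not taken from any cited study.

```python
from transformers import pipeline

# Loads a default pre-trained sentiment model (downloaded on first use).
classifier = pipeline("sentiment-analysis")

statements = [
    "The outlook for growth has deteriorated sharply.",
    "Labour markets remain remarkably resilient.",
]
print(classifier(statements))   # e.g. [{'label': 'NEGATIVE', 'score': ...}, ...]
```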
Off-the-shelf ensemble learning models are useful when dealing with panel data with strong collinearity or nonlinearity, but it is recommended to adapt these models to suit the task. For instance, in [27], the authors adapt the popular random forests algorithm, which can then be used for nonparametric quantile regression or for estimating average treatment effects. Similarly, the authors of [16] propose an explainable attention network for fraud detection in claims management by adapting a standard neural network and demonstrate that the adapted model performs better than off-the-shelf alternatives. Likewise, in [37], the author proposes the macroeconomic random forest, an algorithm adapting the canonical ML tool. They show that the adapted model forecasts better than off-the-shelf ML algorithms and traditional econometric approaches, and that it can be interpreted.
There are many other examples where the model or the procedure used to train the model is adapted to improve performance. For instance, [34] adapt the standard cross-validation approach, which then enables the construction of valid confidence intervals for treatment effects, even with many covariates relative to the sample size. Similarly, a variation of cross-validation approaches is proposed in [30] to improve the prediction performance of the macroeconomic nowcasting model during economic crisis periods.
A few other practical recommendations discussed in the AEA's course are: 1. Bigger, both in terms of data and model size, is better when using deep learning models for processing non-traditional data. 2. Due to its speed and community support, the _Python_ programming language is preferred over other languages for applied ML. 3. When training large ML models, it is suggested to use _Unix_-based operating systems over Windows systems.
## 5 Other emerging applications
Other types of ML approaches, such as unsupervised learning and reinforcement learning, are yet to make a notable impact in economics research. However, there are some initial applications of these approaches in the literature. For instance, [38] uses an unsupervised dimension reduction model called the autoencoder neural network for asset pricing models. Similarly, in [39], an autoencoder-based unsupervised model is used for anomaly detection in high-value payment systems. Also, [40] uses data on nearly 40 million Google keyword auctions and unsupervised machine learning algorithms to cluster keywords into thematic groups serving as relevant markets.
Reinforcement learning (RL) models could be employed to model complex strategic decisions arising in many economic applications. For instance, in [41], the authors use RL to estimate optimal decision rules of banks interacting in high-value payment systems. Similarly, in [42], deep RL is used to solve dynamic stochastic general equilibrium models for adaptive learning at the interaction of monetary and fiscal policy, and in [43], the authors use RL to optimize monetary policy decisions. Likewise, in [44], the authors use an RL approach to learn dynamic tax policies.
## 6 Limitations
The key limitations of ML for economics research and analysis are outlined below:
* Large data sets and ample computational resources are often necessary to train ML models--especially the deep learning models.
* ML models, owing to their flexibility, are easy to overfit, and their complexity makes them hard to interpret--and interpretability is crucial for many economic applications.
* Most ML models lack standard errors and well-defined asymptotic properties--which could be essential for many economic applications.
* ML models can be biased if the data used to train these models is of low quality and biased.
The literature is evolving to address these challenges; however, some are hard and could take longer to mitigate. For instance, we have limited data in many economic applications, which restricts the applicability of large ML models. This could be potentially mitigated in certain applications as the economy becomes more digital, allowing us to gather more data at a much higher frequency than traditional economics data sets.
The interpretability or explainability of models is another major challenge in using ML in economics. Researchers are making progress toward overcoming these challenges. For instance, one approach recently developed to mitigate interpretability issues is to use Shapley-value-based methodologies such as those developed in [45] and [46]. These methods are used for macroeconomic prediction models in [46, 30, 47, 32]. However, note that although such methods are based on game theory, they do not provide any optimal statistical criterion, and asymptotics for such approaches are not available yet. To overcome that, in the recent papers [48, 49], the authors propose ML-based mixed data sampling and develop the asymptotics in the context of linear regularized regressions. However, much progress needs to be made to extend such asymptotic analysis to popular nonlinear ML approaches.
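For illustration, the sketch below shows how Shapley-value-based attributions can be computed with the shap package for a gradient-boosted model; the simulated data and the model choice are invented for the example and are unrelated to the cited studies.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))      # e.g. simulated macro indicators
y = 0.8 * X[:, 0] - 0.5 * X[:, 2] ** 2 + rng.normal(scale=0.1, size=1000)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # per-observation feature attributions

# Average absolute attribution per feature as a simple global importance measure.
print(np.abs(shap_values).mean(axis=0))
```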
## 7 Conclusions
This concise review highlights that ML is increasingly used for economics research and policy analysis, particularly for analyzing non-traditional data, capturing nonlinearity, and improving prediction accuracy. Importantly, ML can complement traditional econometric tools by identifying complex relationships and patterns in data that can be incorporated into econometric models. As the digital economy and economic data continue to grow in complexity, ML remains a valuable tool for economic analysis. However, a few limitations need to be addressed to improve the utility of ML models, and the literature is progressing toward mitigating those challenges.
Lastly, in Figure 3, we present the word clouds generated from the titles and abstracts of the articles in our dataset. These word clouds illustrate the frequency of certain terms, with larger font sizes indicating more frequent usage. For example, as shown in the word clouds, the terms "machine" and "learning" are prominently featured in both titles and abstracts, highlighting its relevance in those articles. This is followed by words such as "data," "effect," and "decision".
|
2309.16618 | Revisiting Neural Program Smoothing for Fuzzing | Testing with randomly generated inputs (fuzzing) has gained significant
traction due to its capacity to expose program vulnerabilities automatically.
Fuzz testing campaigns generate large amounts of data, making them ideal for
the application of machine learning (ML). Neural program smoothing (NPS), a
specific family of ML-guided fuzzers, aims to use a neural network as a smooth
approximation of the program target for new test case generation.
In this paper, we conduct the most extensive evaluation of NPS fuzzers
against standard gray-box fuzzers (>11 CPU years and >5.5 GPU years), and make
the following contributions: (1) We find that the original performance claims
for NPS fuzzers do not hold; a gap we relate to fundamental, implementation,
and experimental limitations of prior works. (2) We contribute the first
in-depth analysis of the contribution of machine learning and gradient-based
mutations in NPS. (3) We implement Neuzz++, which shows that addressing the
practical limitations of NPS fuzzers improves performance, but that standard
gray-box fuzzers almost always surpass NPS-based fuzzers. (4) As a consequence,
we propose new guidelines targeted at benchmarking fuzzing based on machine
learning, and present MLFuzz, a platform with GPU access for easy and
reproducible evaluation of ML-based fuzzers. Neuzz++, MLFuzz, and all our data
are public. | Maria-Irina Nicolae, Max Eisele, Andreas Zeller | 2023-09-28T17:17:11Z | http://arxiv.org/abs/2309.16618v1 | # Revisiting Neural Program Smoothing for Fuzzing
###### Abstract.
Testing with randomly generated inputs (fuzzing) has gained significant traction due to its capacity to expose program vulnerabilities automatically. Fuzz testing campaigns generate large amounts of data, making them ideal for the application of machine learning (ML). _Neural program smoothing_, a specific family of ML-guided fuzzers, aims to use a neural network as a smooth approximation of the program target for new test case generation.
In this paper, we conduct the most extensive _evaluation_ of neural program smoothing (NPS) fuzzers against standard gray-box fuzzers (>11 CPU years and >5.5 GPU years), and make the following contributions: (1) We find that the original performance claims for NPS fuzzers _do not hold_; a gap we relate to fundamental, implementation, and experimental limitations of prior works. (2) We contribute the first _in-depth analysis_ of the contribution of machine learning and gradient-based mutations in NPS. (3) We implement Neuzz++, which shows that addressing the practical limitations of NPS fuzzers improves performance, but that _standard gray-box fuzzers almost always surpass NPS-based fuzzers_. (4) As a consequence, we propose _new guidelines_ targeted at benchmarking fuzzing based on machine learning, and present MLFuzz, a platform with GPU access for easy and reproducible evaluation of ML-based fuzzers. Neuzz++, MLFuzz, and all our data are public.
fuzzing, machine learning, neural networks, neural program smoothing
the results from the original papers_. We explain this performance gap by outdated or incorrect experimental practices in prior work.
3. We reimplement Neuzz as a custom mutator for AFL++ and show that fixing practical limitations of NPS significantly improves fuzzing performance. Nevertheless, we find that neural program smoothing methods _are outperformed by state-of-the-art gray-box fuzzers_, despite their use of additional computation resources.
4. Based on our findings, we propose _better-suited guidelines_ for evaluating ML-enhanced fuzzing, and present _MLFuzz_, the first fuzzing benchmarking framework with GPU support dedicated to ML-based fuzzing. MLFuzz allows for easy, reproducible evaluation of fuzzers with or without machine learning, similar to standard practices used by FuzzBench (FuzzBench, 2018).
The remainder of the paper is structured as follows. Section 2 introduces prior work on coverage-guided fuzzing and neural program smoothing, before tackling our main analysis on limitations of neural program smoothing in Section 3. Section 4 presents our implementation of NPS fuzzing and the benchmarking platform. Section 5 covers experiments, followed by new experimental guidelines in Section 6. We conclude this work in Section 7. All our results and code are publicly available (Section 8).
## 2. Background
Coverage-guided fuzzing. Coverage-guided fuzzers explore the input space of a program starting from a few sample inputs called seeds. They mutate the seeds into new test cases based on a _fitness criterion_, which rewards reaching new code coverage obtained by gray-box access through binary instrumentation. Test cases that increase coverage are kept in the corpus to be evolved further. Over time, the input corpus and the total code coverage grow. During execution, the fuzzer checks the target program for unwanted behavior, notably crashes and hangs. Popular coverage-guided fuzzers are American Fuzzy Lop (AFL) (Liang et al., 2017), its successor AFL++ (Liang et al., 2018), and libFuzzer (Liang et al., 2019). Alongside basic mutations, most gray-box fuzzers use the _havoc_ mutation strategy, where a fixed number of randomly chosen atomic mutations are chained to a more complex mutation (Liang et al., 2018). Motivated by the success of havoc in modern fuzzers, \(\text{Havoc}_{\text{MAB}}\) (Liang et al., 2019) was designed to implement the havoc strategy as a two-layer multi-armed bandit (Beng et al., 2019). Despite the trivial reward function used by the bandit, \(\text{Havoc}_{\text{MAB}}\) claims to significantly improve code coverage over random havoc in extensive benchmarks.
Fuzzing with machine learning. ML has been applied to various tasks in the fuzzing loop. Neural byte sieve (Srivastava et al., 2017) experiments with multiple types of recurrent neural networks that learn to predict optimal locations in the input files to perform mutations. Angora (Krizhevsky et al., 2014) uses byte-level taint tracking and gradient descent to mutate test cases towards new coverage. FuzzerGym (Fuzzer et al., 2019) and Bortinger et al. (Bortinger et al., 2020) formulate fuzzing as a reinforcement learning problem that optimizes coverage. In parallel to mutation generation, machine learning is naturally fit for generating test cases directly. Skyfire (Srivastava et al., 2017) learns probabilistic grammars for seed generation. Learn&Fuzz (Liang et al., 2018) uses a sequence-to-sequence model (Liang et al., 2019) to implicitly learn a grammar to produce new test cases. GANFuzz (Srivastava et al., 2017) uses generative adversarial networks (GANs) (Krizhevsky et al., 2014) to do the same for protocols. DeepFuzz (Krizhevsky et al., 2014) learns to generate valid C programs based on a sequence-to-sequence model for compiler fuzz testing. The application of ML to fuzzing is covered more extensively in (Srivastava et al., 2017; Bortinger et al., 2020).
Neural program smoothing. Program smoothing (Han et al., 2017; Han et al., 2017) was initially introduced as a way to facilitate program analysis and overcome the challenges introduced by program discontinuities. Among the uses of machine learning in fuzzing, neural program smoothing is one of the most recent and popular methods, due to its great performance in the original studies. Neuzz (Neuzz, 2018) trains a neural network to serve as a smooth approximation of the original program in terms of code coverage (Figure 1). First, all test cases (2) from the corpus (1) are executed on the instrumented program (3) to obtain their individual code coverage (4), i.e. edge coverage from afl-showmap. The respective pairs of test case and coverage are then used to train a neural network (5), which learns to predict the coverage for each test case. Being smooth and differentiable, the neural network can be used for computing _gradients_, the values of derivatives of the program w.r.t. its inputs. These indicate the direction and rate of fastest increase in the function value and can be used to flip specific edges in the bitmap from zero to one (6). Each gradient corresponds to one byte in the input. The locations with the highest gradient values are mutated (7) to propose new test cases (8) that should reach the targeted regions of the code. This idea is inspired by adversarial examples, more precisely FGSM (Krizhevsky et al., 2014), where a change in the input in the direction of the sign of the gradient is sufficient to change the model outcome.
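To make the smoothing idea concrete, the following is a minimal sketch (not the authors' implementation) of the core mechanism: a small network is trained to predict per-edge coverage from input bytes, and the gradient of a chosen edge with respect to the input ranks candidate byte positions for mutation. The input length, bitmap size, layer width, and training data are illustrative assumptions.

```python
import torch
import torch.nn as nn

INPUT_LEN, N_EDGES = 512, 1024          # assumed input length and bitmap size

# Smooth surrogate of the target program: bytes in, per-edge coverage out.
surrogate = nn.Sequential(
    nn.Linear(INPUT_LEN, 4096), nn.ReLU(),
    nn.Linear(4096, N_EDGES), nn.Sigmoid(),
)

def train_step(model, x, y, opt, loss_fn=nn.BCELoss()):
    # x: (batch, INPUT_LEN) byte values scaled to [0, 1]; y: (batch, N_EDGES) 0/1 coverage.
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

def hot_bytes(model, seed_bytes, target_edge, k=8):
    # Gradient of the targeted edge's predicted coverage w.r.t. the input bytes;
    # the k largest-magnitude positions are candidate mutation locations.
    x = seed_bytes.clone().detach().requires_grad_(True)
    model(x.unsqueeze(0))[0, target_edge].backward()
    return torch.topk(x.grad.abs(), k).indices, x.grad

# Illustrative usage with random data.
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-4)
x = torch.rand(64, INPUT_LEN)
y = (torch.rand(64, N_EDGES) < 0.2).float()
train_step(surrogate, x, y, opt)
locations, gradients = hot_bytes(surrogate, torch.rand(INPUT_LEN), target_edge=3)
```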
MTFuzz (Nicolae et al., 2019) extends Neuzz with multitask learning (Nicolae et al., 2019): the neural network is trained against three types of code coverage instead of only edge coverage. _Context-sensitive coverage_(Krizhevsky et al., 2014; Srivastava et al., 2017) distinguishes between distinct caller locations for the same covered edge, while _approach-sensitive coverage_(Beng et al., 2019) introduces a third possible value in the coverage bitmap reflecting when an edge was nearly covered because the execution has reached a neighboring edge. The three types of coverage help learn a joint embedding that is used to determine interesting bytes for mutation in the test case. The bytes are ranked using a saliency score, which is computed as the sum of gradients for that byte in the learned embedding space. Each "hot byte" is mutated by trying out all possible values, without further relying on the gradients.
Figure 1. Neural program smoothing for fuzzing.
PreFuzz (Zhang et al., 2017) attempts to solve some limitations of Neuzz and MTFuzz by extending Neuzz in two ways. The program instrumentation is changed to include all neighboring edges of covered ones in the bitmap. This information is used to probabilistically choose which edge to target next for coverage, with the end goal of encouraging diversity in edge exploration. Additionally, the success of havoc mutations (Neuzz, 2017) is leveraged: after the standard Neuzz mutation, havoc is applied probabilistically to pre-defined segments of bytes in the test case, according to their gradient value.
## 3. Analyzing Neural Program Smoothing
In this section, we provide our main analysis of neural program smoothing, covering both the concepts behind NPS, as well as existing fuzzer implementations. We tackle three orthogonal perspectives: (i) conceptual or fundamental, (ii) implementation and usability, and (iii) experimental considerations.
### Conceptual Limitations
**(C1) Approximation errors of the neural network.** Being an empirical process, neural network training can suffer from errors introduced in the training process by, e.g., limited training data and training time, or sensitivity to hyperparameters. Even in the ideal case, _being a smooth approximation, the NPS model will always differ from the actual program exactly at the most interesting points_, i.e., discontinuities, branches, and jumps. This approximation error is intrinsic to a smoothing approach and, at the same time, what allows NPS methods to use gradients and numeric optimization towards producing new inputs.
**(C2) Capacity to reach targeted edges.** Arguably, the most salient research question to elucidate about neural program smoothing is whether the gradient-guided mutation can indeed reach the targeted edges. As NPS is based on multiple components (Figure 1), the overall performance of the fuzzer critically depends on the effectiveness of its individual components:
1. The prediction accuracy of the neural network (5);
2. The capacity of the gradient-based mutations (7) to achieve the expected new coverage on the target program.
The experiments we perform later in the paper show that the machine learning component as used by neural program smoothing has impaired performance. To the best of our knowledge, prior NPS studies have not assessed what the model was learning and whether it was reaching its objective.
**(C3) Incomplete coverage bitmaps.** Another central limitation of neural program smoothing that we uncover relates to the incompleteness of the coverage bitmaps that the neural network receives. All NPS fuzzers retrieve covered edges through afl-showmap, which only reports the edge IDs that are reached. When the coverage information from all seeds is put together for the overall bitmap used for training the neural network, it thus only contains edges that were reached at least once by any of the seeds. As such, unseen edges are not part of the bitmap and cannot be explicitly targeted and discovered by the model. In practice, if the neural network does discover new edges, it is rather inadvertently due to randomness. While having access to only an incomplete coverage bitmap is a conceptual limitation, it can be addressed on an implementation level. It is sufficient to change the instrumentation of the program to include uncovered edges to overcome this issue. Among existing NPS fuzzers, PreFuzz is the only one that considers information about neighbors of reached edges in the coverage bitmap, albeit not motivated by the limitation we uncover. Their goal is rather to be able to choose the next edge to target in a probabilistic fashion, depending on the degree of coverage of each edge and its neighbors.
The fundamental limitations uncovered in this section, while some easier to solve than others, are what we see as main obstacle in the adoption of NPS-based fuzzing in practice. As will be confirmed in Section 5, the experiments are consistent with these limitations.
### Implementation and Usability Limitations
We now turn to practical aspects that make existing approaches to neural program smoothing inconvenient to use, such that an independent evaluation requires major effort and code rewriting.
**(I1) Use of outdated components.** Existing implementations of neural program smoothing (Zhu et al., 2017; Wang et al., 2017; Zhang et al., 2017), along with HavocMAB (Zhang et al., 2017) are implemented as extensions of AFL instead of using the more recent, more performant AFL++ as base. Moreover, their dependency on outdated Python, TensorFlow and PyTorch versions impacts usability. For the purpose of experiments, we have patched the code and updated the dependencies of all these fuzzers, as even for the most recent ones, some of their used libraries were already not available at the time of their publication.
**(I2) Difficulty in building targets.** Prior NPS studies provided the binaries used in their own research, ensuring reproducibility. However, for a fuzzer to be practical, it is advisable to rather provide instructions on how to build new programs for its use. This is especially important when the fuzzer uses custom target instrumentation. MTFuzz (Zhu et al., 2017), for instance, compiles a target program in five different ways due to the introduction of three additional types of instrumentation. For this reason, we exclude MTFuzz from our empirical study as not being practical for real-world fuzzing. Moreover, we argue that the three types of coverage used by MTFuzz are to a large extent redundant (conceptual limitation) and could be grouped into a unified coverage, thus reducing the build effort for this fuzzer.
**(I3) Use of magic numbers.** The magic numbers programming antipattern (Mikolov et al., 2015) is frequently encountered in the implementations of neural program smoothing-based fuzzers. These values and other algorithmic changes are not mentioned in the original papers where each NPS fuzzer is introduced. It is thus difficult to establish whether the performance of each method is strictly linked to its proposed algorithm or rather to the implementation tweaks. E.g., the maximum number of mutation guiding gradients per seed is set to 500; this value is not a parameter of the algorithm presented in the paper.
Our findings above show that the effort to set up existing NPS fuzzers and build targets for them is significantly higher than for standard gray-box fuzzers, such as AFL and its variants, or libFuzzer.
### Evaluation Limitations
In this section, we highlight flaws and limitations of previous experimental evaluations of NPS fuzzers and HavocMAB, which have led to unrealistic performance claims.
**(E1) Experimental protocol.** The more recent NPS publications (Wang et al., 2017; Zhang et al., 2018) _lack comparisons with recent gray-box fuzzers_, such as AFL++ and libFuzzer--fuzzers that were available and confirmed as state-of-the-art long before their publication. HavocMAB (Zhang et al., 2018) has included Neuzz and MTFuzz in their evaluation alongside AFL++. However, we find that they use the same binary target for both AFL and AFL++, instead of building the program separately for AFL++. AFL++ runs on AFL instrumented binaries, but not efficiently. Moreover, the size of the coverage bitmap is usually larger for AFL++ than with AFL instrumentation; hence, code coverage as measured by the fuzzers is not directly comparable. This makes the conclusions in the HavocMAB evaluation (Zhang et al., 2018) questionable.
**(E2) Fuzzer configuration for speed.** We note that prior studies benchmarking NPS methods compile their targets using afl-gcc, which results in slower targets and thus impacts fuzzing speed. The AFL++ documentation recommends using preferably afl-clang-fast or afl-clang-lto (Fang et al., 2018). Additionally, AFL-based fuzzers have multiple options for transferring fuzz data to the program. The most basic is to have AFL write test cases to file, and the target program executed with command line options to process the file as input. The more sophisticated and recommended _persistent_ mode uses a fuzzing harness that repeatedly fetches fuzz data from AFL via shared memory and executes the function with the test data as input without restarting the whole program. "_All professional fuzzing uses this mode_", according to the AFL++ manual (Beng et al., 2018). Depending on the target, the persistent mode can increase the throughput by 2-20\(\times\)(Fang et al., 2018). Previous neural smoothing papers seem to run all experiments by feeding inputs via files, which should considerably slow down all fuzzers. This is consistent with their results, where the more modern AFL++ consistently performs worse than AFL in the HavocMAB study (Zhang et al., 2018), and the targets are invoked with command line arguments in the original Neuzz paper (Wang et al., 2018). We conjecture that this tips the scale in favor of ML-based fuzzers, which are themselves orders of magnitude slower than modern fuzzers (Zhang et al., 2018). This statement is validated experimentally in Section 5.7.
## 4. Implementing Neuzz++ and MLFuzz
In this section, we introduce _Neuzz++_, our implementation of neural program smoothing that aims to solve some limitations identified in Section 3, as well as the new experimental platform for evaluating ML-based fuzzers.
_Neuzz++._ We implement a variation of Neuzz as a custom mutator for AFL++, which we name Neuzz++ (see Figure 2). This allows our method to leverage most AFL++ features, like its standard mutations and power schedule. More importantly, it allows for machine learning-produced test cases and randomly mutated ones to evolve from each other. We choose AFL++ as base for our implementation for its state-of-the-art performance, thus addressing Issue I1. Being a custom mutator, Neuzz++ is modular, easy to build, and integrated with a default AFL++ installation.
In practice, Neuzz++ consists of two parts: the main AFL++ process with the custom mutator implemented in C, and a Python extension that is called for machine learning operations. The two processes communicate using named pipes. We set a minimum requirement of \(T\) test cases in the corpus for the custom mutator to run. These are used to train the neural network for the first time; the model is retrained at most every hour if at least ten new test cases have been added to the corpus1. This allows the model to be refined over time with new coverage information from recent test cases. In practice, we use \(T=200\); this value is tuned experimentally and aims to strike the balance across all targets between fuzzing with machine learning as early as possible, while waiting for enough data to be available for model training. Intuitively, a larger dataset produces a better performing model. afl-showmap is used to extract the coverage bitmap. We introduce a coverage caching mechanism for model retraining which ensures that coverage is computed only for new test cases that were produced since last model training. Each time the C custom mutator is called by AFL++, it waits for the Python component to compute and send the gradients of the test case. Based on these, the mutations are computed by the C mutator and returned to AFL++. In contrast to Neuzz, the gradients are not precomputed per test case, they are not saved to disk, the neural network is kept in memory, and the gradients are computed only on demand. These optimizations minimize the time spent on ML-related computations, keeping more time for fuzzing.
Footnote 1: Neuzz and PreFuzz solve this issue by running AFL for the first hour of fuzzing, then use the collected data for model training (Figure 2).
The neural network is a multi-layer perceptron (MLP) with the same structure as Neuzz (one hidden layer, 4096 neurons). As shown in the PreFuzz paper (Zhang et al., 2018), we also found that different neural network architectures do not improve fuzzing performance. In contrast to NPS fuzzers, we keep 10% of the test cases as validation set for evaluating the performance of the model. We use the Adam optimizer (Kingma et al., 2014), a learning rate of \(10^{-4}\), and cosine decay with restarts.
It is easy to parallelize model training and the main AFL++ routine for improved fuzzing effectiveness when testing real targets. However, for experimental evaluation, we choose to have AFL++ wait for the neural network to train, similarly to previous implementations of neural program smoothing fuzzers. This allows for fair experimental comparison and computation resource allocation.
The original Neuzz implementation applies four different mutation patterns on each byte selected according to the highest ranking gradients: incrementing the byte value until 255, decrementing the byte value down to 0, inserting a randomly sized chunk at the byte location, and deleting a randomly sized chunk starting at the given byte location. We apply the same mutation pattern for Neuzz++.
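A toy re-creation of these four patterns is sketched below; the actual Neuzz++ mutator is written in C inside AFL++, and the chunk-size bound and the way gradient-ranked byte locations are supplied here are illustrative assumptions.

```python
import random

def gradient_guided_mutations(seed: bytes, locations, max_chunk=64):
    """Apply the four Neuzz-style mutation patterns at each selected byte location."""
    out = []
    for loc in locations:
        buf = bytearray(seed)
        # Pattern 1: increment the byte value up to 255.
        for v in range(buf[loc] + 1, 256):
            m = bytearray(buf)
            m[loc] = v
            out.append(bytes(m))
        # Pattern 2: decrement the byte value down to 0.
        for v in range(buf[loc] - 1, -1, -1):
            m = bytearray(buf)
            m[loc] = v
            out.append(bytes(m))
        # Pattern 3: insert a randomly sized chunk at the location.
        chunk = bytes(random.getrandbits(8) for _ in range(random.randint(1, max_chunk)))
        out.append(bytes(buf[:loc]) + chunk + bytes(buf[loc:]))
        # Pattern 4: delete a randomly sized chunk starting at the location.
        out.append(bytes(buf[:loc]) + bytes(buf[loc + random.randint(1, max_chunk):]))
    return out

# Example: mutate a seed at two (assumed) high-gradient byte positions.
candidates = gradient_guided_mutations(b"PNG_test_case_bytes", locations=[1, 5])
```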
Figure 2. Operation mode of previous NPS-guided fuzzers and our Neuzz++.
_MLFuzz._ MLFuzz serves as a benchmarking framework for building test targets, running fuzzing trials in an isolated environment, and analyzing the findings. Its main features are:
* Test targets from Google Fuzzer Test Suite (Fuzzer, 2017) are compiled with the recommended and most recent compiler of the appropriate fuzzer; the build scripts are made available (addressing Issues I2 and E2).
* Targets are compiled with AddressSanitizer (Kumar et al., 2019) to detect memory errors.
* Six fuzzers are currently included in MLFuzz: AFL v2.57b, AFL++ v3.15a, HavocMAB, Neuzz, PreFuzz and our Neuzz++.
* The implementation is containerized via Docker (Docker, 2017). Python dependency specification is handled via virtual environments and Poetry (Docker, 2018).
* Each fuzzing trial runs on one dedicated CPU and optionally one GPU for fuzzers that support it.
* All supported fuzzers have been modified to accept seeding of their random number generator for reproducible results.
* For all fuzzers, coverage is measured by replaying the corpus at the end of a run. We use binaries instrumented with AFL to ensure we do not disadvantage the AFL-based fuzzers, and afl-showmap from AFL++, since it has a larger bitmap with fewer hash collisions.
* Test cases are transmitted to fuzzers via shared memory, with the option to switch to slow transmission of test cases via the file system (addresses Issue E2).
## 5. Experiments
This section introduces our experiments and practical analysis, complementing the main findings from previous sections. After presenting our setup (Section 5.1), we assess the performance of the components of NPS-based fuzzers in Section 5.2. We compare our Neuzz++ to prior neural program smoothing fuzzers and standard gray-box fuzzers in an extensive benchmark in Section 5.3. Sections 5.4 to 5.6 explore the added benefit of machine learning to NPS fuzzers, while Section 5.7 sheds light on experimental protocol differences with previous NPS publications and their impact on fuzzing results. Finally, we report bugs found in Section 5.8.
### Experimental Setup
All experiments are performed on a server running Ubuntu 20.04 with four Nvidia Titan Xp GPUs. Our study includes the six fuzzers from MLFuzz: AFL and AFL++ as standard gray-box fuzzers, HavocMAB as a recent fuzzer claiming state-of-the-art performance, and NPS fuzzers Neuzz, PreFuzz, and our own Neuzz++. We use the original implementation and parameters provided by the authors for all baselines, except when stated otherwise. We patch the code of Neuzz and PreFuzz to port them to Python 3.8.1, CUDA 11.5, TensorFlow 2.9.1 (Abadi et al., 2017) and PyTorch 1.4 (Krizhevsky et al., 2017), as the original implementations are based on outdated libraries that are not available anymore or incompatible with our hardware.
We choose Google Fuzzer Test Suite (Fuzzer, 2017) and FuzzBench (Krizhevsky et al., 2017) as standard, extensive benchmarks for our experimental evaluation. We make use of 23 targets, summarized in Table 1. These are selected for being accessible, having dependencies available on Ubuntu 20.04, and being non-trivial to cover through fuzz testing. Note that we only include targets from FuzzBench if they are not already included in Fuzzer Test Suite. All results are reported for 24 hours of fuzzing. We repeat each experiment 30 times to account for randomness, unless stated otherwise. Each standard gray-box fuzzer is bound to one CPU core, while NPS fuzzers are allotted one CPU and one GPU per trial. The main metrics used for evaluation are code coverage and number of bugs found. For code coverage, we use edge coverage as defined by the AFL family of fuzzers. However, we emphasize that AFL and AFL++ compute edge coverage differently. In order to avoid the measuring errors introduced when ignoring this aspect, we count coverage by replaying the corpus using afl-showmap from AFL++ on the same binary, independently of which fuzzer was used in the experiment. The setup we use fixes all experimental limitations we highlighted in Section 3.3 (Issues E1 and E2).
### Performance of Machine Learning Models
We now investigate the quality of coverage predictions by the neural network and gradient-based mutations, in relation to concerns about the fundamental principle of neural program smoothing (Section 3.1). We tackle the following questions:
* Can the neural network learn to predict edge coverage?
* Can gradient-based mutations reach targeted edges?
To this end, we propose quantitative and qualitative analyses of the performance of the neural network in neural program smoothing fuzzers. Without loss of generality, we investigate these based on Neuzz++ as a proxy for all neural program smoothing fuzzers
| Target | Format | Seeds (a) | LOC (b) |
| --- | --- | --- | --- |
| **Source: Fuzzer Test Suite** | | | |
| boringssl-2016-02-12 | SSL private key | 107 | 102793 |
| freetype2-2017 | TTF, OTF, WOFF | 2 | 95576 |
| guetzli-2017-3-30 | JPEG | 2 | 6045 |
| harfbuzz-1.3 | TTF, OTF, TTC | 58 | 21413 |
| json-2017-02-12 | JSON | 1 | 23328 |
| lcms-2017-03-21 | ICC profile | 1 | 33920 |
| libarchive-2017-01-04 | archive formats | 1 | 141563 |
| libjpeg-turbo-07-2017 | JPEG | 1 | 35922 |
| libpng-1.2.56 | PNG | 1 | 24621 |
| libxml2-v2.9.2 | XML | 0 | 203166 |
| openssl-1.0.2d | DER certificate | 0 | 262547 |
| pcre2-10.00 | PERL regex | 0 | 67333 |
| proj4-2017-08-14 | custom | 44 | 6156 |
| re2-2014-12-09 | custom | 0 | 21398 |
| sqlite-2016-11-14 | custom | 0 | 122271 |
| vorbis-2017-12-11 | OGG | 1 | 17584 |
| woff2-2016-05-06 | WOFF | 62 | 2948 |
| **Source: FuzzBench** | | | |
| bloaty | ELF, Mach-O, etc. | 94 | 690642 |
| curl | comms. formats | 41 | 153882 |
| libpcap | PCAP | 1287 | 56663 |
| openh264 | H.264 | 174 | 97352 |
| stb | image formats | 467 | 71707 |
| zlib | zlib compressed | 1 | 30860 |

(a) Targets that do not have seeds use the default from FuzzBench.
(b) Retrieved with cloc (Kumar et al., 2019).

Table 1. Target programs from Google Fuzzer Test Suite (Fuzzer, 2017) and FuzzBench (Krizhevsky et al., 2017).
included in our study. As all these methods use the same neural network architecture, loss function, method of training, etc., it is to be expected that their models will achieve the same performance when trained on the same dataset. The results of the analyses can be summarized as follows and are detailed subsequently:
* Table 2 quantifies the model performance for all targets in terms of standard machine learning metrics;
* Figure 3 provides a qualitative analysis of model predictions for a given target, opposing them to correct labels.
* Lastly, Figure 3 also assesses the capacity of the neural network to reach edges through gradient-based mutations.
_ML performance metrics._ To assess one factor of difficulty of the machine learning task, we evaluate dataset imbalance for the training corpus. This measures the percentage of the positive class (covered edges, in our case the minority) in the coverage bitmap of the training set. Recall that the bitmap is produced by af1-showmap and accounts for the coverage obtained by the corpus before training; the coverage was not necessarily achieved based on a neural network, but rather by AFL++ mutations. Note that this value is averaged across test cases and edges; rare edges might have much smaller coverage ratios, resulting in more difficulty in training an accurate model for those edges. When facing class imbalance, the model tends to prefer the majority class, thus making wrong predictions. For this reason, the performance of the neural network is assessed using precision, recall, F1-score, and precision-recall (PR) trade-off as performance metrics for the neural network. Accuracy is also computed for completeness, but keep in mind that this metric is misleading for imbalanced datasets2. We measure the area-under-the-curve (AUC) of the PR metric to evaluate all the operational points of the neural network. Similar to accuracy, PR-AUC saturates at one, but is more sensitive to wrong predictions in the positive class. The learning setup of neural program smoothing is a multi-label binary classification task, i.e., for each
| Target | %covered edges | Acc | Prec | Recall | F1 | PR-AUC |
| --- | --- | --- | --- | --- | --- | --- |
| bloaty | 17.1% | 0.53 | 0.17 | 0.18 | 0.17 | 0.15 |
| boringssl | 19.3% | 0.90 | 0.18 | 0.17 | 0.17 | 0.20 |
| curl | 15.2% | 0.89 | 0.15 | 0.15 | 0.15 | 0.23 |
| freetype2 | 8.6% | 0.89 | 0.09 | 0.09 | 0.09 | 0.10 |
| guetzli | 18.5% | 0.84 | 0.18 | 0.18 | 0.18 | 0.19 |
| harfbuzz | 6.9% | 0.93 | 0.07 | 0.07 | 0.07 | 0.07 |
| json | 12.7% | 0.88 | 0.11 | 0.08 | 0.09 | 0.10 |
| lcms | 20.9% | 0.84 | 0.19 | 0.19 | 0.19 | 0.21 |
| libarchive | 6.9% | 0.94 | 0.07 | 0.06 | 0.06 | 0.07 |
| libjpeg | 17.8% | 0.84 | 0.17 | 0.09 | 0.17 | 0.18 |
| libpcap | 6.4% | 0.92 | 0.06 | 0.06 | 0.06 | 0.07 |
| libpng | 28.8% | 0.86 | 0.28 | 0.27 | 0.27 | 0.29 |
| libxml2 | 10.5% | 0.92 | 0.10 | 0.09 | 0.09 | 0.11 |
| openh264 | 21.4% | 0.81 | 0.22 | 0.30 | 0.21 | 0.22 |
| openssl | 31.2% | 0.79 | 0.30 | 0.30 | 0.29 | 0.31 |
| pcre2 | 4.3% | 0.96 | 0.04 | 0.03 | 0.03 | 0.04 |
| proj4 | 8.2% | 0.95 | 0.08 | 0.07 | 0.07 | 0.08 |
| re2 | 16.2% | 0.87 | 0.15 | 0.13 | 0.13 | 0.16 |
| sqlite | 16.3% | 0.91 | 0.12 | 0.12 | 0.12 | 0.17 |
| stb | 6.0% | 0.92 | 0.06 | 0.05 | 0.05 | 0.06 |
| vorbis | 29.6% | 0.81 | 0.30 | 0.30 | 0.30 | 0.30 |
| woff2 | 22.8% | 0.85 | 0.22 | 0.22 | 0.21 | 0.13 |
| zlib | 16.1% | 0.85 | 0.14 | 0.10 | 0.11 | 0.16 |

Table 2. Dataset properties and neural network evaluation.
Figure 3. Predicted and actual edge coverage on _libpng_ for the entire corpus. Top: ML-predicted coverage (pink) is trivial and almost constant over test cases. When each edge is targeted by mutations, predicted coverage (orange) increases for certain edges, but many code edges remain unattainable. Bottom: Coverage extracted with afl-showmap shows that all edges present have been covered at least once by the corpus.
test case, multiple binary predictions are made, one per edge; in consequence, the metrics are computed for each edge in the bitmap independently, then averaged over all edges, and finally averaged over trial repetitions.
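For reference, the following is a sketch of this per-edge evaluation using scikit-learn, assuming the true and predicted coverage are arrays of shape (number of test cases, number of edges); the decision threshold and the illustrative random data are assumptions of the sketch.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support, average_precision_score

def edge_metrics(y_true, y_prob, threshold=0.5):
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    # Per-edge (per-label) metrics, macro-averaged over all edges.
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    pr_auc = average_precision_score(y_true, y_prob, average="macro")
    return {
        "%covered edges": y_true.mean(),        # dataset imbalance
        "accuracy": (y_pred == y_true).mean(),
        "precision": prec, "recall": rec, "f1": f1, "pr_auc": pr_auc,
    }

# Illustrative call with random data of shape (test cases, edges).
rng = np.random.default_rng(0)
print(edge_metrics(rng.integers(0, 2, (100, 50)), rng.random((100, 50))))
```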
Table 2 reports the model performance metrics, along with the percentage of the positive class in the dataset as imbalance metric. All model metrics are computed on a 10% holdout set of test cases that were not used for training. As Neuzz++ retrains the model multiple times, all measurements are performed on the last trained neural network using the state of the corpus at that time. The precision, recall, F1-score, and PR-AUC values in Table 2 indicate that the neural network has low performance. These metrics are particularly low when the class imbalance is stronger, i.e., for small values of "%covered edges". The dataset imbalance is quite extreme for seven targets, where the positive class represents less than 10% of the dataset, making predictions particularly difficult.
To provide an intuition into what the neural network learns, we design a qualitative evaluation of its predicted coverage. This experiment uses the target _libpng_ and the test cases generated in a 24-hour run of Neuzz++. Figure 3 shows two coverage plots for this target for the entire corpus, where each "column" in the plot represents one test case, while each "row" is a program edge. We compare the coverage predicted by a trained ML model for the same test cases and edges (Figure 3 top) to the true coverage extracted with afl-showmap (bottom). The bottom plot is the coverage bitmap extracted with afl-showmap for the corpus and used for model training by Neuzz, PreFuzz, and Neuzz++. A reduction (deduplication) operation is applied to it, which for _libpng_ reduces the number of edges from 900 to the 293 present in the plot; this operation also explains any visual artifacts present in the image, as the edges are reordered. The pink areas of the two plots differ significantly, with the model predictions being almost constant over all test cases: the model only predicts trivial coverage and fails to capture rare edges. While this is a consequence of the difficulty of the machine learning tasks (small dataset, class imbalance, too few samples w.r.t. the size of the test cases and bitmaps, see Table 2), it results in large approximation errors in the neural network, as outlined in Issue C1. Moreover, recall that Neuzz, PreFuzz and Neuzz++ use the same ML model type and structure, with minor differences in the training procedure and similar model performance. Our findings thus extend to all NPS methods.
Finally, we investigate the effectiveness of gradient-based mutations as an essential component of NPS fuzzers. In the same setup on _libpng_ from the previous section, we apply Neuzz++ mutations to the corpus generated by a 24-hour fuzzing run as follows. For each edge in the bitmap, we consider the case when it is explicitly targeted and generate all mutations with a maximum number of iterations in the mutation strategy. Figure 3 (top) plots the predicted coverage for each test case and edge before the mutations, as well as the increment of coverage after mutation. Each edge (row) is considered covered by one test case (column) if at least one of the few thousand mutations generated to target it reaches the code location. The results represent coverage estimated by the ML model, not measured by running the program. However, the coverage the model predicts is an optimistic estimate of the one actually achieved on the target, as the model dictated the mutations. Note that the mutations are generated in the same way for Neuzz, PreFuzz and Neuzz++; our analysis thus applies to all methods and targets.
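The gradient-guided mutation idea evaluated here can be illustrated with a minimal PyTorch sketch. The model, dimensions, and mutation step sizes below are hypothetical placeholders, not the actual Neuzz/PreFuzz/Neuzz++ implementation: the point is only that, for a chosen target edge, the gradient of the predicted coverage probability with respect to the input bytes selects which byte positions to mutate.

```python
import torch
import torch.nn as nn

# Toy surrogate model: normalized input bytes -> per-edge coverage probabilities.
n_bytes, n_edges = 512, 293
model = nn.Sequential(nn.Linear(n_bytes, 128), nn.ReLU(),
                      nn.Linear(128, n_edges), nn.Sigmoid())

def gradient_mutations(seed_bytes, target_edge, top_k=8, steps=(-32, 32)):
    """Mutate the byte positions with the largest gradient magnitude for one
    target edge (a sketch of the NPS mutation idea, not the exact tool code)."""
    x = torch.tensor([b / 255.0 for b in seed_bytes],
                     dtype=torch.float32, requires_grad=True)
    model(x)[target_edge].backward()            # d(predicted coverage)/d(bytes)
    hot = torch.argsort(x.grad.abs(), descending=True)[:top_k]
    mutants = []
    for step in steps:                          # walk the hot bytes up and down
        m = bytearray(seed_bytes)
        for i in hot.tolist():
            m[i] = max(0, min(255, m[i] + step))
        mutants.append(bytes(m))
    return mutants

seed = bytes(range(256)) * 2                    # 512-byte toy seed
print(len(gradient_mutations(seed, target_edge=42)), "mutants generated")
```

Crucially, the target edge index must already exist in the training bitmap, which is the structural limitation behind Issue C3.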
Figure 3 (top) indicates that some locations are more readily reachable through mutations. The harder to reach edges overall match the rarer edges of the corpus, as measured by afl-showmap in the bottom plot. Most importantly, _none of the edges targeted or covered by the mutations in the top plot represent new coverage_. Recall that, by NPS methods' design, a code edge is present in the bitmap only if it has already been covered by the initial corpus used for training (Issue C3). This becomes evident in the bottom plot of Figure 3: all edges have been covered by at least one test case. As will be shown later, this fundamental flaw of NPS methods translates to a limited practical capacity of reaching new coverage.
_The model predicts trivial edge coverage (Issue C1), and gradient mutations cannot target new edges (Issue C3)._
### Comparing Code Coverage
We present the main experiment comparing the achieved code coverage of available neural program smoothing approaches to AFL, AFL++ and the recent HavocMAB in Table 3 (average coverage) and Figure 4 (coverage over time). This experiment alone requires a total computation time of over 11 CPU years and 5.5 GPU years.
Overall, AFL++ obtains the best performance on ten targets, followed by HavocMAB with eight targets, and Neuzz++ on par with AFL, winning two targets each. In view of AFL++'s performance w.r.t. AFL, it is clear that not including AFL++ as a baseline in all prior neural program smoothing works leads to overly optimistic conclusions about their capacities. After AFL++, HavocMAB is the second most performant fuzzer in terms of code coverage. However, we find that it does not reach the expected ranking advertised in the HavocMAB paper (HavocMAB, 2018).
We observe that Neuzz and PreFuzz are never in the top two fuzzers. Moreover, although they were designed to improve AFL performance, their coverage is in most cases lower than that of AFL. AFL wins on 20 out of 23 targets over Neuzz, and 18 out of 23 over PreFuzz. PreFuzz outperforms Neuzz on most targets; however, this difference is significant only on six targets (see confidence intervals in Figure 4). This finding is also at odds with the original PreFuzz results (HavocMAB, 2018), where the performance gap is significantly wider. Section 5.7 is dedicated to further explaining the difference in performance with respect to the initial papers. Neuzz++ obtains higher coverage than Neuzz and PreFuzz on 21 programs, proving that our improvements over these methods are effective.
Targets _libarchive, libxml2, proj4_, and _woff2_ exhibit the most variability among fuzzers. Neuzz and PreFuzz exhibit a large standard deviation on _woff2_, where coverage varies depending on whether the fuzzers reach a plateau or not. For the other targets, it seems AFL-based fuzzers do not perform as well as AFL++-based ones.
_Overall, AFL++ achieves the highest code coverage. Among NPS fuzzers, Neuzz++ achieves the highest code coverage._
### Code Coverage from Machine Learning
After presenting total coverage for 24-hour runs in Table 3, we now measure how much of the total coverage can be attributed to the machine learning component for each NPS fuzzer. On one hand, the goal is to discount the coverage produced strictly by AFL in the first hour of Neuzz and PreFuzz runs (recall that they use AFL for data collection, see Figure 2) and only measure the NPS fuzzers' contribution. On the other hand, we wish to do the same for Neuzz++, and separate its contribution from that of the base fuzzer AFL++. As Neuzz++ is a custom mutator for AFL++, its seeds usually alternate with regular AFL++ seeds. To this end, we measure edge coverage by corpus replaying, this time only taking into account the seeds obtained by Neuzz, PreFuzz and Neuzz++, respectively. For Neuzz and PreFuzz, this is equivalent to excluding the first hour of coverage, as done by the original authors. In practice, this will include ML-based mutations, but also other hard-coded mutations that the methods apply, such as havoc in the case of PreFuzz. Table 4 summarizes the comparison of edge coverage obtained by the ML components of Neuzz, PreFuzz, and Neuzz++. Program names are aligned with Table 3.
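The attribution step can be sketched as follows, assuming per-seed coverage has already been extracted with afl-showmap into one text file per test case (lines in the usual "edge:count" format) and that ML-generated seeds can be recognized from their file names; both the directory layout and the "op:ml" name tag below are illustrative assumptions, not the actual tooling.

```python
from pathlib import Path

def edge_union(coverage_dir, seed_filter=lambda name: True):
    """Union of edge IDs over all seeds accepted by seed_filter."""
    edges = set()
    for f in Path(coverage_dir).glob("*"):
        if f.is_file() and seed_filter(f.name):
            for line in f.read_text().splitlines():
                edges.add(int(line.split(":")[0]))
    return edges

total = edge_union("showmap_out")
ml_only = edge_union("showmap_out", lambda n: "op:ml" in n)  # hypothetical tag
print(len(ml_only), "of", len(total), "edges come from ML-generated seeds")
```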
The second analysis studies whether NPS fuzzers explore code areas that are harder to reach by standard fuzzers. In that case, neural program smoothing fuzzers could be used in an ensemble of diverse fuzzers, opening the path for all fuzzers to rare parts of the code (Fang et al., 2018). To measure the rarity of edges reached by Neuzz++, we compare the edge IDs that Neuzz++ and AFL++ reach on each program, all trials joint. The edge IDs are obtained by replaying all the test cases with afl-showmap.
We summarize the results in Table 6 as follows: Neuzz++ (denoted N+) reveals less than 0.5% additional edges than AFL++ (denoted A+) in 16 out of 23 targets. Neuzz++ does not find any such exclusive edges for eight programs; it is most successful on _lcms_, with 8.2% exclusive edges. On the other hand, AFL++ finds up to 16.4% exclusive edges, lacking exclusive edges on only two programs (_json_ and _sqlite_). We can therefore conclude that NPS-guided fuzzers explore essentially the same code areas as traditional fuzzers.
### NPS-based Fuzzing without GPUs
Due to their increased performance for linear algebra and data throughput, GPUs are the _de facto_ standard for machine learning. All NPS methods studied in this paper leverage GPU access to train machine learning models and compute gradients for test case mutations. In practice, this means that they use more computational resources than state-of-the-art gray-box fuzzers, and that practitioners are required to invest in additional hardware. In this section, we wish to assess the performance of NPS methods in the absence of GPUs. Model training with only CPU access should be slower,
Figure 4. Average edge coverage over time with 95% confidence interval.
\begin{table}
\begin{tabular}{l r r r} \hline \hline Target & \%ML seeds & \%MLcov+ & \%derived \\ \hline bloaty & 4.72\% & 28.3\% & 8.78\% \\ boringssl & 27.7\% & 3.3\% & 27.5\% \\ curl & 18.6\% & 26.1\% & 33.1\% \\ freetype2 & 2.2\% & 31.9\% & 3.8\% \\ guetzli & 9.9\% & 9.8\% & 13.8\% \\ harfbuzz & 6.6\% & 30.2\% & 15.2\% \\ json & 13.7\% & 37.3\% & 25.4\% \\ lcms & 1.6\% & 57.1\% & 1.1\% \\ libarchive & 18.3\% & 30.2\% & 34.9\% \\ libjpeg & 11.8\% & 10.7\% & 15.9\% \\ libpcap & 13.8\% & 40.0\% & 20.8\% \\ libpng & 19.6\% & 13.8\% & 41.0\% \\ libxml2 & 15.1\% & 23.9\% & 30.1\% \\ openh264 & 10.2\% & 8.0\% & 8.5\% \\ openssl & 30.6\% & 5.6\% & 28.7\% \\ pcre2 & 18.3\% & 17.4\% & 29.88\% \\ proj4 & 5.5\% & 49.7\% & 7.4\% \\ re2 & 23.3\% & 22.8\% & 34.7\% \\ sqlite & 8.1\% & 20.2\% & 6.2\% \\ stb & 14.8\% & 15.3\% & 19.9\% \\ vorbis & 6.0\% & 8.3\% & 7.9\% \\ woff2 & 3.8\% & 19.8\% & 5.2\% \\ zlib & 16.9\% & 20.7\% & 18.1\% \\ \hline \hline \end{tabular}
\end{table}
Table 5. Statistics for ML-generated test cases of Neuzz++. “%ML seeds” and “%derived” are computed over the total size of the corpus. “%MLcov+” is relative to “%ML seeds”.
but it should not impact the performance of the trained model. As such, any loss in fuzzing performance comes from spending more time training and less fuzzing. For this small experiment, we select four targets that operate on a varied range of input formats for diversity. We perform ten trials of all NPS fuzzers with and without GPU access (Table 7).
## 6. Benchmarking ML-based Fuzzers
Fuzzer evaluation is an open research topic abundantly studied in recent works (Fuzzer et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). A common guideline is that each fuzzer must be tested on multiple programs, using multiple repetitions to account for randomness. The recommended number of repetitions revolves around 10-20 trials. Besides the average performance, indicators of variability (i.e., confidence intervals, statistical tests) are necessary to assess the significance of the results. The main goal of fuzzers is to find bugs, which suggests that unique bugs found in fixed time should be the evaluation metric. However, since bugs are rather rare, the performance of fuzzers is often measured in code coverage over time. This may be justified by observations that more code coverage correlates with more bugs found (Fuzzer et al., 2017). To complement these principles, we propose the following practices when evaluating novel machine learning-based fuzzing methods:
1. **Analyze each new component in the fuzzing loop**. Both performance evaluations and ablation studies of ML models are critical. Metrics specific to the task solved should be used (e.g., accuracy, or precision and recall for classification, mean absolute error or mean squared error for regression, etc.). These complement the view on the overall system performance, i.e., coverage or bugs found in the case of fuzzing. ML evaluation should employ a validation set distinct from the training data to avoid overly optimistic estimates (Zhu et al., 2018).
2. **Use state-of-the-art fuzzers and configurations as baselines**. Lacking strong baselines prevents one from claiming novel state-of-the-art accomplishments in terms of code coverage and bugs found. All fuzzers in an experiment should be configured for performance (e.g., appropriate compiler, compilation options, harness, input feeding mode). We also recommend introducing new scientific or technical contributions based on recent fuzzers and evaluation platforms, as opposed to their older counterparts.
3. **Use comparable metrics for fuzzing performance.** As not all fuzzers measure the same type of coverage, we encourage the use of one common evaluation metric between multiple fuzzers. In practice, this is easiest done by replaying the corpus at the end of a fuzzing trial, as implemented by FuzzBench (Fuzzer et al., 2017; Wang et al., 2018) and MLFuzz.
4. **Repeat trials often enough to account for variance.** We propose to use 30 trials for fuzzing evaluation, resulting in tight confidence intervals. This sample size is commonly used in statistics and deemed sufficient for the central limit theorem (Fuzzer et al., 2017) to hold. As shown in Figure 4, ML-based fuzzers can have higher coverage variability than gray-box fuzzers, thus requiring more trials for stable baselining. A minimal statistical sketch of such an evaluation is given after this list.
5. **Ensure reproducible results by fixing and serializing parameters.** While it is difficult to control all sources of randomness when training ML models on GPUs, it remains a good practice in both machine learning and software testing to control possible sources of randomness by seeding random number generators and reusing the same seeds. Experimental configurations and, in the case of ML, hyperparameters should be documented for reproducibility.
6. **Ensure usability of proposed fuzzers.** It should be possible to run a newly proposed fuzzer on programs outside the original publication study. Providing a containerized environment can sustainably decrease setup efforts. We also support integration of new fuzzers with existing benchmarking platforms, such as FuzzBench and now MLFuzz.
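As a concrete illustration of guideline (4), the sketch below (synthetic coverage numbers, hypothetical fuzzer names) computes the mean final coverage, a 95% confidence interval over 30 trials, and a non-parametric significance test between two fuzzers.

```python
import numpy as np
from scipy import stats

def summarize(trials):
    """Mean and 95% confidence interval of final edge coverage over trials."""
    trials = np.asarray(trials, dtype=float)
    mean = trials.mean()
    # Normal-approximation CI; with 30 trials this is a common choice.
    half = 1.96 * trials.std(ddof=1) / np.sqrt(len(trials))
    return mean, (mean - half, mean + half)

rng = np.random.default_rng(7)
fuzzer_a = rng.normal(2500, 120, size=30)   # synthetic coverage of 30 trials
fuzzer_b = rng.normal(2350, 200, size=30)

print("A:", summarize(fuzzer_a))
print("B:", summarize(fuzzer_b))
# Non-parametric test, robust to non-normal coverage distributions.
print("Mann-Whitney U p-value:", stats.mannwhitneyu(fuzzer_a, fuzzer_b).pvalue)
```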
## 7. Conclusion and Consequences
Neural program smoothing for fuzzing neither reaches its advertised performance, nor does it surpass older fuzzing techniques that are still state-of-the-art. In our in-depth analysis of NPS fuzzers, we analyzed conceptual limitations of previously published approaches, as well as implementation and evaluation issues. Our comprehensive benchmark showed that NPS-guided fuzzers were by far unable to reach their stated performance. Addressing the implementation issues did not suffice to outperform state-of-the-art gray-box fuzzers. The reason for the limited fuzzing performance lies in the difficulty of the machine learning task, which yields trivial models on the data available during fuzzing.
To guide future fuzzing research and practical validation, we developed improved experimental guidelines targeting fuzzing with machine learning. Our MLFuzz framework for ML-based fuzzers includes patched and containerized versions of the investigated fuzzers to help with additional benchmarking. We encourage researchers to perform ablation studies and provide deeper insights into the components they introduce in fuzzing.
While we highlight fundamental limitations of neural program smoothing, whether and how much this technique can enhance fuzzing remains an open topic for future research. We hope that this work contributes to fair and comprehensive evaluations of future fuzzers, be they ML-based or not.
## 8. Data Availability
The open-source implementation of Neuzz++ and MLFuzz, the evaluation setup, and raw results are available at
[https://github.com/boschresearch/mlfuzz](https://github.com/boschresearch/mlfuzz)
[https://github.com/boschresearch/neuzzplusplus](https://github.com/boschresearch/neuzzplusplus).
## Acknowledgements
Thanks are due to Josselin Feist and the anonymous reviewers for valuable discussions that have led to paper improvements. This work was supported by the German Federal Ministry of Education and Research (BMBF, project CPSec - 16KIS1565 and 16KIS1564K).
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline Target & AFL & AFL++ & HavocMAB & Neuzz & PreFuzz & **Neuzz++** \\ \hline bloaty & 1 & 1 & 2 & 0 & 0 & 1 \\ guetzli & 8 & 264 & 5 & 0 & 0 & 170 \\ harfbuzz & 0 & 355 & 1 & 0 & 0 & 12 \\ json & 20 & 11 & **22** & 18 & 16 & 10 \\ lcms & 0 & **16** & 0 & 0 & 0 & 8 \\ libarchive & 0 & **1** & 0 & 0 & 0 & 0 \\ libxml2 & 0 & **648** & 1 & 0 & 0 & 289 \\ openssl & 138 & **1324** & 409 & 37 & 40 & 721 \\ pcre2 & 87 & **4174** & 262 & 40 & 35 & 1371 \\ re2 & 0 & **172** & 1 & 0 & 0 & 2 \\ vorbis & 0 & 2 & 1 & 0 & 0 & 0 \\ woff2 & 20 & 361 & **671** & 1 & 1 & 172 \\ \hline \hline \end{tabular}
\end{table}
Table 9. Bugs found after stack trace deduplication. |
2305.19826 | Hidden covalent insulator and spin excitations in SrRu$_2$O$_6$ | The density functional plus dynamical mean-field theory is used to study the
spin excitation spectra of SrRu$_2$O$_6$. A good quantitative agreement with
experimental spin excitation spectra is found. Depending on the size of the
Hund's coupling $J_H$ the systems chooses either Mott insulator or covalent
insulator state when magnetic ordering is not allowed. We find that the nature
of the paramagnetic state has negligible influence on the charge and spin
excitation spectra. We find that antiferromagnetic correlations hide the
covalent insulator state for realistic choices of the interaction parameters. | D. Csontosová, J. Chaloupka, H. Shinaoka, A. Hariki, J. Kuneš | 2023-05-31T13:07:35Z | http://arxiv.org/abs/2305.19826v2 | # Hidden covalent insulator and spin excitations in SrRu\({}_{2}\)O\({}_{6}\)
###### Abstract
The density functional plus dynamical mean-field theory is used to study the spin excitation spectra of SrRu\({}_{2}\)O\({}_{6}\). A good quantitative agreement with experimental spin excitation spectra is found. Depending on the size of the Hund's coupling \(J_{\rm H}\) the system chooses either a Mott insulator or a covalent insulator state when magnetic ordering is not allowed. We find that the nature of the paramagnetic state has negligible influence on the charge and spin excitation spectra. We find that antiferromagnetic correlations hide the covalent insulator state for realistic choices of the interaction parameters.
Competition between kinetic and interaction energy is the cornerstone of correlated-electron physics. In the paradigmatic bandwidth-control scenario of the Hubbard model at half filling, increasing the interaction-to-bandwidth ratio suppresses the charge fluctuations and eventually drives the system to a Mott insulator (MI) state [1]. Real materials provide variations on this theme [2; 3], but also alternative mechanisms of correlation-driven metal-insulator transition (MIT) such as the site-selective Mott transition [4], spin-state crossover [5; 6], Kondo insulator [7], or gapping of the ligand bands [8], to name a few. Often the paramagnetic (PM) MIT is hidden by magnetic long-range order, which raises the question of how much can be learned about the nature of the PM phase from the properties of the ordered phase. Studies of the single-band Hubbard model [9; 10] found rather subtle differences in the antiferromagnetic (AFM) phase on the two sides of the Mott transition, which can be difficult or even impossible to identify in the multi-orbital setting of real materials.
A weakly correlated state does not have to be metallic in order to exhibit charge fluctuations. A covalent insulator (CI) [11], with a gap between bonding and anti-bonding states does as well. As pointed out by Mazin _et al._[12] CI can be realized in layered transition metal oxides with honeycomb lattice structure such as Na\({}_{2}\)IrO\({}_{3}\), \(\alpha\)-RuCl\({}_{3}\), Li\({}_{2}\)RuO\({}_{3}\) or SrRu\({}_{2}\)O\({}_{6}\). The pattern of dominant hopping amplitudes, which traps the \(t_{2g}\) electrons in the hexagonal structural units, gives rise to molecular orbitals clearly visible in the electronic spectra. At half filling the Fermi level falls into the band gap between the molecular peaks [13], which stabilizes the CI state. On the other hand, the tendency to form a high-spin MI is maximal also at half filling [14], which leads to a competition without an _a priori_ winner.
This scenario is realized in SrRu\({}_{2}\)O\({}_{6}\) with nominally \(t_{2g}^{3}\) configuration. An antiferromagnetic insulator with high Neel temperature \(T_{\rm N}\) of 563 K [16], it does not exhibit the Curie-Weiss susceptibility in the PM phase. Instead, the susceptibility increases up to the highest reported temperature of about 730 K [17]. Classification of SrRu\({}_{2}\)O\({}_{6}\) based on numerical studies has been controversial. Streltsov _et al._[13] performed density functional plus dynamical mean-field theory (DFT+DMFT) calculations for Hund's coupling \(J_{\rm H}=0.3\) eV. Pointing out the discrepancy between the theoretical ionic moment of 3 \(\mu_{\rm B}\), a value essentially reproduced by their DFT+DMFT, and the observed ordered moment of 1.4 \(\mu_{\rm B}\), they argued that the electronic structure of SrRu\({}_{2}\)O\({}_{6}\) is dominated by molecular orbitals. Hariki _et al._[18] using a similar DFT+DMFT approach found a crossover between CI and MI in the PM phase for \(J_{\rm H}\) between \(0.16-0.19\) eV, depending on temperature. They also found that in the AFM phase the size of the ordered moment is essentially the same on both sides of the CI/MI crossover and agrees well with experimental as well as the DFT value, when the overlaps of Wannier orbitals are properly accounted for. The uncertainty in the value of the Hund's exchange \(J_{\rm H}\) thus left the question of the electronic structure of SrRu\({}_{2}\)O\({}_{6}\) open.
Figure 1: The unit cell of SrRu\({}_{2}\)O\({}_{6}\): Ru (blue), O (red) and Sr (green) atoms, visualized using VESTA3 [15]. The arrows mark the local orbital coordinates. Path in the reciprocal space used for plotting magnon dispersions.
Using resonant inelastic x-ray scattering (RIXS) to map out the magnon dispersion, Suzuki _et al._[19] concluded that SrRu\({}_{2}\)O\({}_{6}\) is a Mott insulator because the magnon spectrum can be well described by an \(S=3/2\) Heisenberg model with parameters obtained by strong-coupling expansion with first-principles hopping parameters. They pointed out the difference between a large paramagnetic Neel temperature \(\Theta\), proportional to the inter-atomic exchange \(J\) and reflected in the magnon bandwidth, and the smaller ordering temperature \(T_{N}\), determined by the spin gap. They argued that the observed absence of Curie-Weiss behavior above \(T_{N}\) is consistent with the behavior of a 2D Heisenberg model, for which it is expected first for \(T>\Theta\).
We compute the spin excitation spectra [20; 21] using DFT+DMFT [22]. We pursue two objectives (i) apply the DMFT approach to dynamical susceptibilities based of Bethe-Salpeter equation (BSE) [23; 24] to an ordered state of a real material and assess its quantitative accuracy, (ii) analyze the connection between the character of the PM phase, MI vs CI, and the properties of the AFM phase. The DMFT BSE approach has been successfully applied to antiferromagnetic magnons in up to 3-orbital model [24]. Here we focus on quantitative comparison with experiment, the role of spin-orbit coupling (SOC), the relationship between single-ion anisotropy and the spin gap, and other spin excitations beyond magnon. In order to address (ii), we vary \(J_{\rm H}\) across the CI-MI crossover.
_The method._ We study the '\(t_{2g}\)-only' model of Ref. [18] with Slater-Kanamori interaction obtained by wannierization [25; 26] from density functional calculation [27]. Unlike in Ref. [18] we use the basis of \(xy\), \(yz\) and \(xz\) Wannier orbitals in the coordinates shown in Fig. 1, see Supplemental Material (SM) for details [28]. In order to reduce the computational effort, the calculations were done for C-type (2 atoms) rather than the experimental G-type (4 atoms) structure. This approach is justified by the minuscule inter-layer coupling [17]. Throughout this study we keep the interaction parameter \(U=2.7\) eV fixed and vary \(J_{\rm H}=0.16-0.22\) eV as well as temperature. In the PM calculations we enforce the spin symmetry of the self-energy in each DMFT iteration.
The DMFT [29] calculations were performed with a multiorbital implementation [30] of the continuous-time hybridization expansion Monte Carlo method [31] based on ALPS core libraries [32]. Some of the DMFT calculations were benchmarked against results obtained with DCore [33]. The BSE with local particle-hole irreducible vertex [34] was solved for the lowest 10 bosonic Matsubara frequencies in the Legendre representation [35]. The desired dynamical susceptibilities \(\langle O_{-\mathbf{q}}O_{\mathbf{q}}\rangle_{\omega}\) were obtained by sandwiching the general 2-particle susceptibility with the corresponding vertices followed by analytic continuation [36; 37], see SM [28] for details. The reciprocal space operators are related to local observable by the Fourier transform
\[O_{\mathbf{q}}=\sum_{\mathbf{R},s}e^{-i\mathbf{q}\cdot(\mathbf{R}+\mathbf{r}_ {s})}O_{\mathbf{R}s}\quad\mathbf{r}_{s}=\begin{cases}\left(\frac{2}{3},\frac{ 1}{3},0\right)\,s\!=\!\mathrm{A}\\ \left(\frac{1}{3},\frac{2}{3},0\right)\,s\!=\!\mathrm{B},\end{cases} \tag{1}\]
where the index \(s\) refers to the two Ru sites in the unit cell. In the following we study the transverse spin susceptibility with \(O\equiv S^{x}\), and \(S=3/2\to 1/2\) excitations, for which we choose a representative operator \(O\equiv X\) below, generating \(\Delta S^{z}=\pm 1\) transitions between \(S=3/2\) and \(S=1/2\) manifolds
\[S^{x}_{\mathbf{R}s} =\sum_{\alpha=1}^{3}\mathrm{d}^{\dagger}_{\mathbf{R}s\alpha\uparrow }\,\mathrm{d}_{\mathbf{R}s\alpha\downarrow}\!+\!H.c. \tag{2}\] \[X_{\mathbf{R}s} =\left(\mathrm{d}^{\dagger}_{\mathbf{R}s1\uparrow}\,\mathrm{d}_{ \mathbf{R}s1\downarrow}-\mathrm{d}^{\dagger}_{\mathbf{R}s2\uparrow}\, \mathrm{d}_{\mathbf{R}s2\downarrow}\right)+H.c. \tag{3}\]
Figure 2: Imaginary part of the dynamical susceptibility along the \(\Gamma^{\prime}\) - \(\Gamma\) - \(M\) path shown in Fig. 1 for \(J_{\rm H}=0.16\) eV at \(T=464\) K. Grey dots denote the maxima of the corresponding RIXS features [19]. Top (linear color scale): \(\langle X_{-\mathbf{q}}X_{\mathbf{q}}\rangle_{\omega}\) representing the \(S=3/2\to 1/2\) transitions. Bottom (logarithmic color scale): \(\langle S^{x}_{-\mathbf{q}}S^{x}_{\mathbf{q}}\rangle_{\omega}\) corresponding to magnon.
The operator \(X\) is chosen to be representative of a set of closely spaced transitions, see SM [28].
_Magnon dispersion._ The DMFT calculations lead to AFM with out of plane orientation of the local moment for temperatures below 1500 K. Since the magnetism of SrRu\({}_{2}\)O\({}_{6}\) is essentially 2D [17; 19] this overestimation by DMFT is expected. The DMFT does not obey the Mermin-Wagner theorem and the calculated ordering temperature represents \(\Theta\) rather than \(T_{N}\). This does not mean that the DMFT AFM solution should not be able to capture the ordered state of the real material. Fig. 2 shows a comparison of the dynamical susceptibilities \(\langle X_{-\mathbf{q}}X_{\mathbf{q}}\rangle_{\omega}\) and \(\langle S_{-\mathbf{q}}^{x}S_{\mathbf{q}}^{x}\rangle_{\omega}\) calculated in the AFM phase at 464 K to the experimental RIXS data [19]. The magnetic moments at this temperature are essentially saturated [28; 18] and thus no significant change in the computed spectra is expected upon further cooling. Rather than computing the full RIXS spectra, calculation of which would require evaluation of transition amplitudes [38; 39] with the possibility of multi-particle excitations [40; 41] and is not possible with the present methods, we compare the dispersions of specific spectral features. We find a very good match of the magnon dispersion including the bandwidth, the spin gap and the distribution of spectral weight. The magnon bandwidth of 183 meV corresponds to the effective nearest-neighbor exchange \(JS=61\,\mathrm{meV}\) between \(S=3/2\) local moments.
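For orientation, the relation between the magnon bandwidth and \(JS\) can be checked within a minimal linear spin-wave treatment of an ideal nearest-neighbor honeycomb Heisenberg antiferromagnet (single-ion anisotropy, SOC and longer-range couplings neglected; an illustrative estimate rather than a substitute for the DMFT result): \[\omega_{\mathbf{q}}=3JS\sqrt{1-|\gamma_{\mathbf{q}}|^{2}},\qquad\gamma_{\mathbf{q}}=\frac{1}{3}\sum_{\boldsymbol{\delta}}e^{i\mathbf{q}\cdot\boldsymbol{\delta}},\] where the sum runs over the three nearest-neighbor vectors \(\boldsymbol{\delta}\). The maximum magnon energy, reached where \(\gamma_{\mathbf{q}}=0\), equals \(3JS\), so a bandwidth of 183 meV translates into \(JS=61\) meV.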
A straightforward strong-coupling calculation with the same parameter setup yields a remarkably similar value \(JS\approx 66\,\mathrm{meV}\)[28], essentially unaffected by SOC. However, by inspecting the exact solution of our Hubbard model on a single bond [28], we found the spin \(S=3/2\) picture to be significantly disturbed by a large involvement of higher multiplet states at energies \(\gtrsim 3J_{\mathrm{H}}\)[28]. In such situation, the DMFT approach covering the entire spectrum of multiplet states is highly advantageous.
The spin gap of approximately 45 meV is related to the single-ion anisotropy \(\Delta_{\mathrm{SIA}}=E_{\pm 1/2}-E_{\pm 3/2}=6.6\,\mathrm{meV}\), defined as the difference between the atomic states belonging to the \(S=3/2\) multiplet [42]. The strong-coupling evaluation of SIA suggests that the above ionic value is actually strongly renormalized by exchange processes [28]. Within the linear spin-wave theory of Heisenberg antiferromagnet, the large gap is easily explained even for small SIA, as it is given by \(S\sqrt{6J\Delta_{\mathrm{SIA}}}\)[19]. Nevertheless, it is not self-evident that the present numerical approach must capture it accurately. We have also carefully checked the out-of-plane orientation of the ordered moments, see SM [28], and verified its origin in SOC by performing calculations with SU(2)-symmetric Hamiltonian without SOC. As expected we find two gapless linear Goldstone modes with divergent spectral weights in this case, see Fig. 3.
The experimental RIXS spectra [19] exhibit a prominent low-energy feature associated with \(S=3/2\to 1/2\) transitions. Our calculations, Fig. 2, reproduce the position of this feature fairly well, although the SOC induced mixing with the low energy magnon limits the resolution
Figure 4: Spectral functions and corresponding imaginary parts of self-energies on the real axis in the constrained PM solution (left half) and AFM solution (right half). The calculations were performed for \(T=290\,\mathrm{K}\). Red and blue color in the figures with AFM self-energies distinguish between spin up and spin down component. The grey lines in the spectral function with \(J_{\mathrm{H}}=0.16\,\mathrm{eV}\) show DFT band structure squeezed by factor 2.2.
Figure 5: Uniform susceptibility for \(J_{\mathrm{H}}=0.16\,\mathrm{eV}\) and \(J_{\mathrm{H}}=0.19\,\mathrm{eV}\) in the PM state. The dashed line shows the Curie-Weiss susceptibility \(\chi\propto(T+\Theta)^{-1}\) with \(\Theta=1480\,\mathrm{K}\). The magnitude of the calculated \(\chi(T)\) is about 30% smaller than the experimental one [17]. Inset: a cartoon picture of different temperature scales in SrRu\({}_{2}\)O\({}_{6}\).
of the higher energy structures.
_Mott vs covalent insulator._ In calculations performed in the PM state, the authors of Ref. [18] observed a crossover between the low-temperature CI and high-temperature MI at a scale \(T^{\star}\), which strongly depends on \(J_{\rm H}\). For \(J_{\rm H}=0.16\) eV the scale \(T^{\star}\) lies in the 600-800 K range, while for \(J_{\rm H}\gtrsim 0.19\) eV only MI was observed. SrRu\({}_{2}\)O\({}_{6}\) exists in the PM phase below 800 K; however, since DMFT exaggerates its ordering temperature [43], we enforce the PM solution by constraint in order to study it at lower temperatures.
The different temperature scales discussed below are summarized in the inset of Fig. 5. The paramagnetic Neel temperature \(\Theta\), which we identify with the DMFT ordering temperature, is estimated from the present study [28] and Ref. [18]. The CI/MI crossover temperature \(T^{\star}\) is estimated from Ref. [18] and from the uniform susceptibility for \(J_{\rm H}=0.16\) eV of this study. Finally, \(T_{N}\) is the experimental ordering temperature, the weak \(J_{\rm H}\)-dependence of which may be deduced from the behavior of the spin gap as a function of \(J_{\rm H}\).
Next, we discuss the properties of the constrained PM solutions. At high temperatures (\(T>T^{\star}\)) CI and MI behave similarly. At low temperatures (\(T<T^{\star}\)) they are distinguished by several characteristics. The self-energy of CI has a Fermi liquid character with vanishing imaginary part at the chemical potential. The self-energy of MI possesses a peak in the vicinity of the chemical potential. This gives rise to distinct band structures shown in Fig. 4. For the evolution of the self-energy with temperature see SM [28]. The CI and MI respond differently to a magnetic field. The magnetic susceptibility \(\chi(T)\) of MI, in Fig. 5, exhibits the usual Curie-Weiss decrease with increasing temperature. The high-temperature susceptibility of CI follows the same trend. However, once the Fermi liquid behavior sets in below \(T^{\star}\)[44] the susceptibility starts to drop, which gives rise to a broad maximum. A positive slope of the experimental \(\chi(T)\) above the transition temperature was pointed out by the authors of Ref. [17].
How is the different character of PM phase reflected in the AFM phase? Upon magnetic ordering the self-energy is dominated by the spin-dependent Hartree shift and electronic spectra for large and small \(J_{\rm H}\) in Fig 4 resemble one another. In Fig. 6 we compare the magnon spectra obtained at 464 K for \(J_{\rm H}\) values on both sides of CI/MI crossover. A difference is hardly noticeable. There is a discernible trend of decreasing spin gap with \(J_{\rm H}\), which follows from the behavior of the single-ion anisotropy. Overall the parameters extracted using strong-coupling theory describe the magnons equally well on CI and MI side in the parameter space.
Can the behavior of the CI susceptibility explain the experimentally observed behavior of \(\chi(T)\) in the PM phase? Is it plausible that an improved theory, which pushes the calculated \(T_{N}\) to its experimental value below \(T^{\star}\), uncovers the CI susceptibility? We argue that it is not. The key problem of the DMFT description in the present context is, rather than quantitatively overestimating the magnitude of \(T_{N}\), that it does not qualitatively distinguish between the paramagnetic Neel temperature \(\Theta\) and the ordering temperature \(T_{N}\). In fact, the size of \(\Theta\), which describes the onset of strong AFM correlations, is likely to be correctly given by DFT+DMFT, as suggested by the correct magnon bandwidth obtained in the calculation. Therefore, the issue is not that DMFT exaggerates the temperature at which the AFM correlations set in, but that it describes them as static (AFM order), while in the 2D reality they remain dynamical down to the much lower temperature \(T_{N}\) determined by a spin gap or an inter-layer coupling. The CI physics can be realized if the crossover temperature \(T^{\star}\) is above the onset of AFM correlations \(\Theta\). In the present case, for smaller \(J_{\rm H}\) we get \(T_{N}<T^{\star}<\Theta\), and thus the increase of \(\chi(T)\) above \(T_{N}\) represents the physics of a 2D Heisenberg magnet rather than that of a CI.
We would like to point out the analogy of the present physics with the Kondo lattice model [45]. In both cases a local moment disappears below a certain temperature, \(T^{\star}\) in the CI or the Kondo temperature in the case of the Kondo lattice, if not correlated to other moments on the lattice. In both cases, inter-site correlations between the local moments can preclude their disappearance if sufficiently strong, which we conjecture to mean \(T^{\star}<\Theta\) in the present case. These are examples of a situation when inter-site interaction between the local excited states (carrying the local
Figure 6: Comparison of magnon spectra in covalent insulator (\(J_{\rm H}=0.16\) eV) and Mott insulator (\(J_{\rm H}=0.19\) eV and \(J_{\rm H}=0.22\) eV) phases. The calculations were performed for \(T=464\) K. The spin gaps for \(J_{\rm H}=\{0.16,0.19,0.22\}\) eV are \(\Delta_{\rm m}=\{45,36,35\}\) meV, respectively. The white line is the spectral weight \(\Omega_{\bf q}\). (Same color scale as Fig. 2b.)
moments), eliminates the (non-magnetic) local ground states from the set of global low-energy states.
_Conclusions._ We have calculated the spin excitation spectra of SrRu\({}_{2}\)O\({}_{6}\) using DFT+DMFT approach and found a quantitative match with the experimental observations [19], notably for the spin gap due to the spin-orbit coupling. The paramagnetic state of SrRu\({}_{2}\)O\({}_{6}\), depending on the strength of the Hund's coupling \(J_{\rm H}\), exhibits either covalent insulator or Mott insulator characteristics below \(T^{\star}\approx 580\) K. Once in the AFM ordered state the magnon and electron excitation spectra are essentially the same for \(J_{\rm H}\) on both sides of the covalent insulator / Mott insulator crossover. Our calculations for realistic \(J_{\rm H}\) on both sides of the CI/MI crossover lead to the conclusion that \(T^{\star}\) is substantially below the temperature \(\Theta\) at which the AFM correlations set in and therefore the covalent insulator state remains always 'hidden'.
The authors thank H. Suzuki for providing the experimental data of Fig. 2, A. Kauch for critical reading of the manuscript, and K.-H. Ahn for valued discussions in the early stage of this work. This work has received funding from QUAST-FOR5249 project I 5868-N (D.C., J.K.) of the Austrian Science Fund (FWF), Czech Science Foundation (GACR) project No. GA22-28797S (D.C., J.C.), JSPS KAKENHI Grant Numbers 21K13884, 23K03324 (A.H.), 21H01003, 23H03816, 23H03817 (H.S., A.H.), Austrian Federal Ministry of Science, Research and Economy through the Vienna Scientific Cluster (VSC) Research Center and by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90254). H.S was supported by JSPS KAKENHI Grant Number 21H01041. H. S. thanks the Supercomputer Center, the Institute for Solid State Physics, and the University of Tokyo for the use of their facilities.
|
2301.03351 | Classifying Mental-Disorders through Clinicians Subjective Approach
based on Three-way Decision | In psychiatric diagnosis, a contemporary data-driven, manual-based method for
mental disorders classification is the most popular technique; however, it has
several inevitable flaws. Using the three-way decision as a framework, we
propose a unified model that stands for clinicians' subjective approach (CSA)
analysis consisting of three parts: qualitative analysis, quantitative
analysis, and evaluation-based analysis. A ranking list and a set of numerical
weights based on illness magnitude levels according to the clinician's greatest
degree of assumptions are the findings of the qualitative and quantitative
investigation. We further create a comparative classification of illnesses into
three groups with varying important levels; a three-way evaluation-based model
is utilized in this study for the aim of understanding and portraying these
results in a more clear way. This proposed method might be integrated with the
manual-based process as a complementary tool to improve precision while
diagnosing mental disorders | Huidong Wang, Md Sakib Ullah Sourav, Mengdi Yang, Jiaping Zhang | 2022-12-21T17:22:02Z | http://arxiv.org/abs/2301.03351v4 | # Classifying Mental-Disorders through Clinicians' Subjective Approach based on Three-way Decisions
###### Abstract
In psychiatric diagnosis, a contemporary data-driven, manual-based method for mental disorders classification is the most popular technique. However, it has several inevitable flaws, namely, misdiagnosis of a complex phenomenon, comorbidities etc. Using the three-way decisions (3WD) as a framework, we propose a unified model that stands for clinicians' subjective approach (CSA) consisting of three parts: qualitative analysis, quantitative analysis, and evaluation-based analysis. A ranking list and a set of numerical weights based on illness magnitude levels according to the clinician's greatest degree of assumptions are the findings of the qualitative and quantitative investigation. We further create a comparative classification of illnesses into three groups with varying importance levels; a three-way evaluation-based model is utilized in this study for the aim of understanding and portraying these results in a clearer way. This proposed method might be integrated with the manual-based process as a complementary tool to improve precision while diagnosing mental disorders.
Mental disorder classification Psychiatric diagnosis Three-way decisions
## 1 Introduction
In this modern era, where technology is at its peak and offers endless scope for amusement and entertainment, a substantial number of people, mostly young adults, are still suffering from depression and other mental disorders [1]. A lack of motivation to live and a loss of interest in everything are increasingly prevalent among the general population. Hence, people are seeking psychiatric diagnosis more frequently than in the past. Improper diagnosis of mental health disorders may therefore lead to severe consequences, both for the individual and for society [38]. The traditional form of psychiatric diagnosis has become highly contentious nowadays, as several recent studies demonstrate shortcomings within the widely established systems used for classifying mental disorders such as bipolar disorder, anxiety disorders, phobias, substance use disorder, mood disorders, and many others [2, 3]. These recognized tools, such as DSM-5 [7] and ICD-11 [8], often fail to arrive at the proper and correct diagnosis of a complex phenomenon in individual cases. Patients with the same disorder exhibit diverse symptom profiles during diagnosis [35], and
comorbidities or co-occurring conditions create numerous clinical and research challenges as well [36]. In such a situation, the pragmatic and expertise-oriented judgment of clinicians (psychiatrists and psychologists) should be reinforced to avoid an improper diagnosis of a mental disorder and to restrict its consequences. Since three-way classification has emerged as a prominent problem-solving and decision-making paradigm, we intend to integrate its theory into the classification process of mental disorders in order to make the clinicians' diagnosis process more accurate and confident.
"Psychiatric nosology" or "psychiatric taxonomy" is terms used to describe how mental diseases are classified. There are presently two commonly used instruments or methods for defining mental disorders: the World Health Organization's (WHO) International Classification of Diseases (ICD-11) and the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM-5). In contrast to the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM), Research Domain Criteria (RDoC) was launched in 2009 with the goal of addressing the heterogeneity in current nosology by providing a biologically-based, rather than symptom-based, a framework for understanding mental disorders [33]. The Chinese Society of Psychiatry (CSP) [9] produced the Chinese Classification of Mental Diseases (CCMD), a clinical reference for diagnosing mental disorders in China. It is now working on the CCMD-3, a third edition was written in both Chinese and English. It is designed to be structurally and categorically identical to the International Classification of Diseases (ICD) and the Diagnostic and Statistical Manual (DSM).
One of the most fundamental flaws of the DSM-5 and other manuals is that they lack culture-specific meaning and do not include the cultural context of a particular nation (for example, Bangladesh). Common people's habits, tastes, life expectations, and social behavior are much more distinct and unique in different parts of the world, and these change rapidly. After the emergence of COVID-19 and amidst the imposed restrictions of various sorts, mental health is under serious threat; symptoms are relapsing in the normal population, university students, clinical workers, patients with pre-existing psychiatric disorders, and others in a way that makes the situation more complex [39, 40, 41]. In addition, these taxonomies or guides are mostly based on various statistical analyses and information theory, with some cultural representations included, yet they fail to provide a timely, holistic, and unified view on a deeper scale. On the other hand, the broadening of diagnostic criteria in DSM-5, according to critics, may increase the number of 'mentally ill' people and/or pathologize 'normal behavior', thus exposing millions of additional patients to pharmaceuticals that may do more damage than good [4]. What is more, the different manual-guided psychiatric diagnoses follow approaches such as categorical, dimensional, and others, which are also controversial in terms of their validity in many cases [5, 6].
Prior to the introduction of manual-based diagnostic systems (about 1980), the clinician's subjective experience was highly regarded. Although the effectiveness of diagnosis may have increased since the DSM/ICD was introduced, the limitations of this technique are now evident [2, 3, 5, 6, 42, 43]. A study [44] on the clinician's subjective experience supports the renewed and growing interest in the clinician's subjectivity and its promising role in the diagnostic process. Other recent studies [39, 40] show evidence that the clinician's subjective experience might play a useful role in the diagnostic process as well.
The term "diagnosis" refers to both a phrase and a procedure that is closely linked to concerns of classification [45]. In conventional psychiatric diagnosis process, a doctor or clinician classify among listed mental disorders by referring to the outlined and data-driven manuals (DSM-5/ ICD-11) that include descriptions, symptoms, and so forth; and by following other diagnostic criteria. This is an objective approach that implies internal information-based analysis and it has been put much of the importance comparatively. Simultaneously, similar importance should be imposed on practitioners' external analysis, namely, culture-specific knowledge along with domain knowledge, through attained expertise and experience with a subjective approach during diagnosis process that has been focused on in the current study, this is shown in Fig.1.
**Fig.1.** A unified framework of mental disorder classification consisting clinicians' subjective approach (CSA).
The primary purpose of this paper is to provide a general framework and adopt concrete methods to analyze clinicians' subjective approach (CSA) in mental disorder classification or psychiatric nosology. The content of this paper is generally arranged into three parts: qualitative analysis of CSA using binary relations, quantitative analysis of CSA using the eigenvector method, and evaluation-based analysis.
## 2 Three-Way Decision in Clinicians' Subjective Approach (CSA)
Yao [10] created the three-way decisions (3WD) theory, which tries to provide a cohesive framework for thinking, problem-solving, and information processing in three dimensions. It gives us a useful foundation for simulating real-world challenges. 3WD has been applied to a variety of fields, including three-way conflict analysis [11, 12], three-way clustering [13, 14, 15, 16], three-way recommender systems [17, 18, 19], three-way concept analysis [20, 21], three-way granular computing [10, 22], three-way face recognition [24], and so on.
We use 3WD theory as our fundamental framework to examine clinicians' subjective approach (CSA) to classify mental disorders in psychiatric diagnosis. CSA is studied using three models: the Trisecting-Acting-Outcome (TAO) model, the three-level computing model, and the evaluation-based approach. As shown in Fig. 1, we undertake a qualitative and quantitative investigation of CSA.
We use CSA to rate a group of mental disorders in the qualitative study. The ranking is based on the order relationships between illness pairings. To model the structure and assess CSA, we apply the TAO model of three-way decisions. From the standpoint of a clinician, we first compare and assess the relative preference of all disorders in pairs. Following that, we divide all of these pairs into three categories: preferred, neutral, and less favored. Finally, we rank illnesses/disorders by using a binary relation. The eigenvector approach is used in the quantitative analysis to calculate disease weights. A disadvantage of the eigenvector technique is that when the number of items exceeds 9, a
large mistake in the computation might occur [25]. By constructing a three-level structure, the three-level computing paradigm is employed to solve this problem. The eigenvector approach is then used to calculate weights from top to bottom numerous times, allowing us to get a large number of disease weights without sacrificing too much precision.
The findings of the qualitative and quantitative study are a ranking list and a set of numerical weights reflecting the magnitude levels of illnesses according to the clinician's most extreme assumptions. We also build a comparative classification of illnesses into three groups with varying importance levels; a three-way evaluation-based model is utilized in this study for the aim of comprehending and portraying these results in a more straightforward way.
## 3 Three-Way Qualitative Clinicians' Subjective Approach (CSA) Analysis
Order relations, which capture an intuitive sense of ordering things against one another, are a significant type of binary relation. For example, given an ordered pair (x, y) of two elements, we may derive order relations between x and y, such as x being greater than y, x being worse than y, or x being a component of y in various instances. Order relations are a frequent representation of user preference in decision theory; we write an order relation as \(\;\geq\;\) or \(>\). If x \(\;\geq\;\) y, we say x is at least as good as y; if x \(>\) y, we say x is strictly better than y. We solely focus on the strict order relation "\(>\)" in this study to develop a clinician's preference-based approach (CPA) later on in a clearer way, based on the property of trichotomy.
### Clinicians' Subjective Approach (CSA) and the Property of Trichotomy
The idea of user preference has been intensively investigated in several user-oriented research domains, such as information retrieval [26; 27], economics [28], and social sciences [29]. In qualitative CSA analysis, the concept of user preference theory may be employed, and the feature of trichotomy is crucial. This trait makes order relations useful for modeling a CSA towards a set of illnesses.
Humans are skilled at establishing relative comparisons between numbers, goods, methods, and other things in our daily lives. Given two arbitrary real numbers n and m, we may easily argue that exactly one of n \(<\) m, n = m, or n \(>\) m must hold; this is known in number theory as the trichotomy property of real numbers. Similarly, by comparing a pair of things x and y under a specified criterion, a person can identify the ordering relation between x and y as one of the following: x is preferred over y, x is indifferent to y, or x is less favored than y. Obviously, the possible preference attitudes of a person towards a pair of things are three. This concept can easily be expanded to order relations.
If we use an order relation \(>\) to represent the meaning "preferred", the indifference relation \(\sim\) is defined as an absence of \(>\), which is defined as:
\[\mathrm{x}\sim\mathrm{y}\;\Leftrightarrow\;\neg\,(\mathrm{x}>\mathrm{y})\;\wedge\;\neg\,(\mathrm{y}>\mathrm{x}) \tag{1}\]
Given an ordered pair (x, y), suppose an order relation \(>\) expresses that the first element is preferred over the second element. Its converse relation, written as \(\prec\), is called a less preferred relation, which is defined as:
\[\mathrm{x}\prec\mathrm{y}\;\Leftrightarrow\;(\mathrm{y}>\mathrm{x}) \tag{2}\]
We usually write \(\prec\) as \(<\) if it does not cause any ambiguity.
**Definition 1.** _An order relation \(>\) on a disorder set \(D\) is called trichotomous if \(\forall\)(x, y), x, y \(\in\) D, exactly one of x \(>\) y, x \(\sim\) y, or x \(<\) y holds._
The purpose of user preference-related research, from the perspective of a decision-maker, is to identify optimum options by examining the order relations among members of a nonempty set, which is characterized as a preference relation. The method of user preference theory may be described as first establishing reasonable axioms based on the decision maker's preferences, and then assessing a user's preference behavior based on those preferences [28]. The mathematical properties of trichotomy and transitivity are used to construct a preference relation.
**Definition 2.**A preference relation, denoted as \(>\), is a special type of binary relation on the set of elements D that satisfies the following two rationality properties. \(\forall\)x, y, z \(\in\) D,
\[\text{Trichotomous: }(x>y)\vee(x\sim y)\vee(x<y),\] \[\text{Transitive: }x>y\wedge y>z\Rightarrow x>z \tag{3}\]
If we use an order relation \(>\) as a preference relation, user preference is represented as:
\[x>y\ \Longleftrightarrow\ \text{x is preferred over y}\] \[x\sim y\ \Longleftrightarrow\ \text{x is indifferent to y}\] \[x<y\ \Longleftrightarrow\ \text{x is less preferred than y} \tag{4}\]
For a disorder set Ds, we divide all disorder attribute pairs into three classes. Based on this trisection, disorder ranking can be induced. This process is shown in Fig. 2:
Fig. 2. Trisection of all disorder pairs into the three classes \(\{(x,y)\mid x>y\}\), \(\{(x,y)\mid x\sim y\}\), and \(\{(x,y)\mid x<y\}\), from which a disorder ranking is induced.
Linear orders, weak orders, and semiorders are the three types of order relations that all have the properties of trichotomy and transitivity. These three order relations are employed in this article to describe the clinician's choice for CSA analysis.
### Modeling of CSA as Linear Order
Given a disorder set Ds, a linear order \(>\) enables us to arrange diseases in the form Ds = {d\({}_{1}\), d\({}_{2}\),..., d\({}_{n}\)}, such that d\({}_{\rm i}>\) d\({}_{\rm j}\) if and only if i \(<\) j; for this reason, a linear order is also called a chain.
**Definition 3.**Given a set Ds, a binary relation \(>\) is a linear order on Ds, if it satisfies for any x, y, z \(\in\) Ds:
\[\text{Asymmetric: }x>y\Rightarrow\neg(y>x),\] \[\text{Transitive: }x>y\wedge y>z\Rightarrow x>z,\] \[\text{Weakly complete: }x\neq y\Rightarrow(x>y)\vee(y>x) \tag{5}\]
The asymmetric feature precludes the circumstance in which d\({}_{\rm i}\) is better than d\({}_{\rm j}\) and d\({}_{\rm j}\) is better than d\({}_{\rm i}\) at the same time. Reasonable inference may be applied thanks to the transitive property. Weak completeness assures that all illnesses are comparable to one another.
**Example 1**. Given a set of disorders Ds = {d1, d2, d3, d4, d5}, a clinician's preference on Ds, in accordance with the assumptions about which potential disorder a patient may have, is defined by a linear order \(>\). Suppose the ordering between disorders is specified by a clinician as:
\[d1\ >\ d5,\ d1\ >\ d4,\ d1\ >\ d2,\ d3\ >\ d1,\ d3\ >\ d2,\] \[d3\ >\ d4,\ d3\ >\ d5,\ d5\ >\ d4,\ d5\ >\ d2,\ d4\ >\ d2.\]
Then, disorders are ranked as:
\[d3\ >\ d1\ >\ d5\ >\ d4\ >\ d2.\]
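The chain of Example 1 can be recovered mechanically from the ten preference pairs. The short sketch below (illustrative helper names, not part of the proposed model's implementation) counts how many disorders each disorder strictly dominates and sorts by that count.

```python
def rank_from_pairs(items, preferred_pairs):
    """Order items by the number of items each one strictly dominates.

    For a linear order every item dominates a distinct number of others
    (n-1, n-2, ..., 0), so sorting by this count recovers the chain.
    """
    wins = {d: 0 for d in items}
    for better, worse in preferred_pairs:
        wins[better] += 1
    return sorted(items, key=lambda d: wins[d], reverse=True)

disorders = ["d1", "d2", "d3", "d4", "d5"]
pairs = [("d1", "d5"), ("d1", "d4"), ("d1", "d2"), ("d3", "d1"), ("d3", "d2"),
         ("d3", "d4"), ("d3", "d5"), ("d5", "d4"), ("d5", "d2"), ("d4", "d2")]
print(" > ".join(rank_from_pairs(disorders, pairs)))   # d3 > d1 > d5 > d4 > d2
```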
### Modeling of CSA as Weak Order to Illustrate Comorbidity in Mental Disorder
Weak orders are commonly utilized in several disciplines to indicate user preference relations [26, 27, 28, and 29]. A weak order enables ties in the ranking results, as opposed to a linear order that places items in a chain, which is quite powerful in representing real-world issues. To put it another way, some properties in a collection may be regarded as indifferent.
In mental disorder classifications, comorbidity of psychiatric illnesses is a widespread issue with major consequences for health-care delivery [36]. Depression, anxiety, and drug dependency disorders are the most common comorbid mental illnesses [37]. Here, we used \(\sim\) to denote comorbidity of mental disorders and weak ordered relation \(>\) to denote ranking of clinician's preference among disorders.
**Definition 4.** A weak order \(>\) is a binary relation on set Ds, if it satisfies for any x, y \(\in\) Ds:
\[\text{Asymmetric: }x>y\Rightarrow\neg(y>x),\] \[\text{Negatively transitive: }\neg(x>y)\wedge\neg(y>z)\Rightarrow\neg(x>z) \tag{6}\]
**Example 2.** Given a set of disorders Ds = {d1, d2, d3, d4, d5}, a clinician's preference on Ds is defined by a weak order \(>\). Suppose the ordering between disorders is specified as:
\[d1\ \ >\ d3,\ d1\ \ >\ d4,\ d1\ \ >\ d5,\ d2\ \ >\ d3,\ d2\ \ >\ d4,\ d2\ \ >\ d5,\ d3\ \ >\ d4,\ d3\ \ >\ d5.\]
Because the clinician neither prefers d1 to d2 nor d2 to d1, d1 must be in a comorbid condition with, or indifferent to, d2, written d1 \(\sim\) d2. That means the clinician suspects that the particular patient has diseases d1 and d2 at the same time. Similarly, d4 \(\sim\) d5. By considering the above ordering, we can rank the disorders as:
\[d1\sim d2\ \ >\ d3\ >\ d4\sim\ d5.\]
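For the weak order of Example 2, the same dominance count groups disorders into indifference classes, which in the CSA reading flag suspected comorbidity. The brief sketch below (illustrative only) recovers d1 \(\sim\) d2 \(>\) d3 \(>\) d4 \(\sim\) d5.

```python
from collections import defaultdict

def weak_order_levels(items, preferred_pairs):
    """Group items into the indifference classes of a weak order.

    Items that dominate the same number of others are tied (x ~ y),
    which in the CSA reading marks suspected comorbidity.
    """
    wins = {d: 0 for d in items}
    for better, _ in preferred_pairs:
        wins[better] += 1
    levels = defaultdict(list)
    for d, w in wins.items():
        levels[w].append(d)
    return [sorted(levels[w]) for w in sorted(levels, reverse=True)]

pairs = [("d1", "d3"), ("d1", "d4"), ("d1", "d5"), ("d2", "d3"), ("d2", "d4"),
         ("d2", "d5"), ("d3", "d4"), ("d3", "d5")]
print(weak_order_levels(["d1", "d2", "d3", "d4", "d5"], pairs))
# [['d1', 'd2'], ['d3'], ['d4', 'd5']]  ->  d1 ~ d2 > d3 > d4 ~ d5
```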
### Modeling of CSA as Semiorder
In fact, indifference is not always transitive. After reading three novels, a reader may feel that books C and D are equally good, as are books D and E, yet still know from intuition that he prefers C to E. To put it another way, the individual's preference attitude can discriminate neither between C and D nor between D and E, but it can distinguish between C and E. To model this type of situation, Luce [31] proposed semiorders.
**Definition 5**. A semiorder \(>\) on a set Ds is a binary relation which satisfies for any x, x', x'', y, y' \(\in\) Ds:

\[\begin{array}{l}\text{Asymmetric: }x>y\Rightarrow\neg(y>x),\\ \text{Ferrers: }(x>x^{\prime})\wedge(y>y^{\prime})\Rightarrow(x>y^{\prime})\vee(y>x^{\prime}),\\ \text{Semitransitive: }(x>x^{\prime})\wedge(x^{\prime}>x^{\prime\prime})\Rightarrow(x>y)\vee(y>x^{\prime\prime}).\end{array} \tag{7}\]
**Example 3**. Given a set of disorders Ds = {d1, d2, d3, d4, d5}, a clinician's preference on Ds is defined by a semiorder \(>\). Suppose the ordering between disorders is specified as:
\[d1\ \ >\ d2,\ d1\ \ >\ d3,\ d1\ \ >\ d4,\ d1\ \ >\ d5,\ d2\ \ >\ d4,\ d2\ \ >\ d5,\ d3\ \ >\ d5,\ d4\ \ >\ d5.\]
The clinician neither prefers d2 to d3 nor d3 to d2, so d2 \(\sim\) d3; similarly, we get d3 \(\sim\) d4. However, the indifference is intransitive, because d2 \(>\) d4. Therefore, we cannot rank all disorders in a single order but in several, as below:
\[d1\ \ >\ d2\ >\ d4\ >\ d5,\] \[d1\ \ >\ d2\sim d3\ >\ d5,\] \[d1\ \ >\ d3\sim d4\ >\ d5.\]
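The defining properties of a semiorder can be verified mechanically. The Python sketch below (illustrative, not part of the original paper) checks asymmetry, the Ferrers condition, and semitransitivity of Definition 5 for the preference relation of Example 3.

```python
# Minimal sketch (not from the paper): checking the semiorder axioms of
# Definition 5 on the preference relation of Example 3.
from itertools import product

Ds = ["d1", "d2", "d3", "d4", "d5"]
P = {("d1", "d2"), ("d1", "d3"), ("d1", "d4"), ("d1", "d5"),
     ("d2", "d4"), ("d2", "d5"), ("d3", "d5"), ("d4", "d5")}

# Asymmetry: x > y implies not (y > x).
asymmetric = all((y, x) not in P for (x, y) in P)

# Ferrers condition: (x > x') and (y > y') imply (x > y') or (y > x').
ferrers = all((x, yp) in P or (y, xp) in P
              for (x, xp), (y, yp) in product(P, P))

# Semitransitivity: (x > w) and (w > z) imply, for every y, (x > y) or (y > z).
semitransitive = all((x, y) in P or (y, z) in P
                     for (x, w) in P for (w2, z) in P if w == w2
                     for y in Ds)

print(asymmetric, ferrers, semitransitive)  # True True True
```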
## 4 Three-Way Quantitative Clinicians' Subjective Approach (CSA) Analysis
Mathematically, quantitative CSA analysis can be considered as a process of mapping each disorder to a numerical value,
\[\mathrm{w}\colon\mathrm{Ds}\rightarrow\mathrm{R}, \tag{8}\]
where Ds is a set of disorders, R is a real number set, and w is a mapping function that calculates or assigns a numerical value to each disorder. For a disorder d \(\in\) Ds, w(d) represents its weight from the perspective of a clinician.
### Formulating a Three-Level Structure
This study offers two methods for calculating or assigning numerical weights to each disorder. The first calculates weights using the eigenvector approach, which is covered in Sect. 4.2. The second assigns weights directly: we first use the eigenvector method to construct an importance scale with numerical weights, and then compare each disorder against this scale to determine its weight; this methodology is detailed in Sect. 4.3. The eigenvector technique is central to both approaches; however, it is not suitable when the number of objects is greater than 9, since large errors would be introduced into the computation [25]. We use the 3WD theory to solve this problem: the problem is divided into three levels, after which the eigenvector approach is used to calculate weights from top to bottom. The three-level structure limits the number of items involved in each weight computation to no more than 9, allowing us to apply the eigenvector approach without sacrificing too much precision.
### Three-Way Quantitative Disease Weighting Based on Eigenvector Method
Figure 3 depicts the framework of the quantitative illness weighting model. Assume we have a disorder set Ds, where d\({}_{\rm ij}\) denotes a disorder at the lowest level. We create a three-level framework by categorizing illnesses into various groups based on semantic significance.
Once this three-level structure is in place, the second step is to apply the eigenvector approach from top to bottom: cluster weights are derived from the clinician's preferences, and the weights of illnesses within each cluster are then determined relative to their cluster weight.
The following is a description of how to calculate weights using the eigenvector approach. Assume that a disorder collection Ds has been divided into n clusters, n \(\leq\) 9, with no more than 9 illnesses
Figure 3: The structure of the three-level disease weighting method.
in each cluster. We establish a comparison matrix M as specified in Definition 6 to produce a weight vector w = (w\({}_{1}\), w\({}_{2}\), \(\cdots\),w\({}_{n}\)) for clusters, where element m\({}_{ij}\) reflects the relative significance of a cluster Di compared to a cluster Dj.
**Definition 6**. A comparison matrix M is a square matrix of order n whose elements are m\({}_{ij}\). M is a positive reciprocal matrix if it satisfies:
\[\begin{array}{l}\text{Positive: }\forall\,i,j\leq n,\ m_{ij}>0,\\ \text{Reciprocal: }\forall\,i,j\leq n,\ m_{ij}=1/m_{ji},\end{array} \tag{9}\]

where i, j = 1, 2, \(\ldots\), n.
M is a comparison matrix of the form below; in an ideal situation, m\({}_{ij}\) is exactly the ratio of the weight of cluster Di to that of cluster Dj.
\[\mathrm{M}=\left(\begin{array}{cccc}m_{11}&m_{12}&\cdots&m_{1n}\\ m_{21}&m_{22}&\cdots&m_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ m_{n1}&m_{n2}&\cdots&m_{nn}\end{array}\right)=\left(\begin{array}{cccc}w_{1}/w_{1}&w_{1}/w_{2}&\cdots&w_{1}/w_{n}\\ w_{2}/w_{1}&w_{2}/w_{2}&\cdots&w_{2}/w_{n}\\ \vdots&\vdots&\ddots&\vdots\\ w_{n}/w_{1}&w_{n}/w_{2}&\cdots&w_{n}/w_{n}\end{array}\right) \tag{10}\]
In practice, the values of the components of a comparison matrix are determined by the user's preference and flexibility. We use the 9-point rating scale established by Saaty [25] to determine the weight ratio w\({}_{i}\)/w\({}_{j}\) between two clusters (see Table 1).
Table 1 shows that the number 1 denotes that two clusters are equally essential. An arbitrary cluster should be equally significant to itself, hence the value mii of the major diagonal in a comparison
\begin{table}
\begin{tabular}{c|l|l} \hline Intensity of & Definition & Explanation \\ Importance & & \\ \hline
1 & Equal importance & Two activities contribute equally to the objective \\ \hline
3 & Weak importance of one over another & Experience and judgment slightly favor one activity over another \\ \hline
5 & Essential or strong importance & Experience and judgment strongly favor one activity over another \\ \hline
7 & Demonstrated importance & An activity is strongly favored and its dominance demonstrated in practice \\ \hline
9 & Absolute importance & The evidence favoring one activity over another is of the highest possible order of affirmation \\ \hline
2,4,6,8 & Intermediate values between the two adjacent judgments & When compromise is needed \\ \hline \end{tabular}
\end{table}
Table 1: The Saaty’s 9-points rating scale [25]
matrix must be 1. Furthermore, for two clusters a and b, the weight ratio wa/wb should be larger than 1 if a is favored over b; otherwise, it should be equal to or less than 1.
We may get the matrix equation as follows under ideal conditions:
\[\mathrm{Mw}=\left(\begin{array}{cccc}w_{1}/w_{1}&w_{1}/w_{2}&\cdots&w_{1}/w_{n}\\ w_{2}/w_{1}&w_{2}/w_{2}&\cdots&w_{2}/w_{n}\\ \vdots&\vdots&\ddots&\vdots\\ w_{n}/w_{1}&w_{n}/w_{2}&\cdots&w_{n}/w_{n}\end{array}\right)\left(\begin{array}{c}w_{1}\\ w_{2}\\ \vdots\\ w_{n}\end{array}\right)=n\left(\begin{array}{c}w_{1}\\ w_{2}\\ \vdots\\ w_{n}\end{array}\right)=n\mathrm{w}. \tag{11}\]

In practice, a clinician-specified comparison matrix is rarely perfectly consistent, so the weight vector w is taken as the normalized principal eigenvector of M, associated with its largest eigenvalue \(\lambda_{max}\). The consistency of M is assessed through the consistency index

\[\mathrm{C.I.}=\frac{\lambda_{max}-n}{n-1}, \tag{12}\]

and the consistency ratio

\[\mathrm{C.R.}=\mathrm{C.I.}/\mathrm{R.I.}, \tag{13}\]

where R.I. is the random index tabulated by Saaty [25]; a comparison matrix is regarded as acceptably consistent when C.R. \(\leq\) 10%.

**Example 4.** Suppose a disorder set is divided into six clusters {D1, D2, D3, D4, D5, D6} and a clinician builds the pairwise comparison matrix shown in Table 3, which yields \(\lambda_{max}\) = 6.048 and C.R. = 0.762%.
Because C.R. \(\leq 10\%\), which satisfies the consistency check, the eigenvector of the comparison matrix can be used as the weights of {D1, D2, D3, D4, D5, D6}, that is:
\[\text{w}=(0.140,\,0.041,\,0.290,\,0.038,\,0.071,\,0.420)\]
We utilize the weights of clusters as a starting point and use the same method to compute the weights of illnesses in each cluster. The weights of illnesses are then normalized by dividing them by the weights of respective clusters. Finally, we may calculate illness weights.
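The cluster-level weighting step can be illustrated numerically. The sketch below (not part of the original study) uses NumPy to compute the principal eigenvector and consistency ratio for the six-cluster comparison matrix of Table 3; the random index values R.I. are taken from commonly used tables of Saaty's method and may differ slightly from those used by the authors.

```python
# Minimal sketch (not from the paper): eigenvector weighting and consistency
# check of Sect. 4.2 for the six-cluster comparison matrix of Table 3.
import numpy as np

M = np.array([
    [1,   3, 1/2, 4, 2,   1/3],
    [1/3, 1, 1/7, 1, 1/2, 1/9],
    [2,   7, 1,   9, 5,   1/2],
    [1/4, 1, 1/9, 1, 1/2, 1/9],
    [1/2, 2, 1/5, 2, 1,   1/6],
    [3,   9, 2,   9, 6,   1],
])

eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)              # principal eigenvalue lambda_max
lam_max = eigvals[k].real                # Table 3 reports 6.048
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                          # normalized cluster weights

n = M.shape[0]
CI = (lam_max - n) / (n - 1)             # consistency index, Eq. (12)
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}  # published tables vary slightly
CR = CI / RI[n]                          # consistency ratio, Eq. (13); accept if <= 0.10

print(np.round(w, 3))                    # close to (0.140, 0.041, 0.290, 0.038, 0.071, 0.420)
print(CR <= 0.10)                        # True, so the weights can be used
```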
### A Quantitative Disease Weighting Method Using an Importance Scale
It is a simple approach for a doctor to assign numerical values to illnesses as weights depending on his or her own viewpoint. When the number of suspected illnesses is large, however, fluctuation in judgment is unavoidable, resulting in low accuracy of the conclusion. In light of this, an importance scale is employed to solve the problem [25].
The disease weighting approach employing an importance scale may be broken down into the three parts below. First, the intensities of the preference degree of disorders are grouped into several levels from a clinician's perspective, such as significantly matched, matched, moderately matched, weakly matched, and not matched. We can then calculate weights for each intensity level using the eigenvector approach described in Sect. 4.2; a three-level structure is required when the number of intensity levels exceeds 9. As a result, we create an importance scale to aid our judgment. Finally, the weights of illnesses are calculated using this scale.
**Example 5.** Suppose a clinician sets five intensities of the preferential degree of suspected disorders, which are A: significantly matched, B: matched, C: moderately matched, D: weakly matched, E: not matched. The clinician builds a comparison matrix of these intensities, and the weights of the intensities are calculated as shown in Table 4 [34]:
\begin{table}
\begin{tabular}{l|l|l|l|l|l|l} \hline & A & B & C & D & E & Weight \\ \hline A & 1 & 2 & 3 & 5 & 9 & 0.450 \\ \hline B & 1/2 & 1 & 2 & 4 & 6 & 0.277 \\ \hline C & 1/3 & 1/2 & 1 & 2 & 3 & 0.147 \\ \hline D & 1/5 & 1/4 & 1/2 & 1 & 2 & 0.081 \\ \hline E & 1/9 & 1/6 & 1/3 & 1/2 & 1 & 0.046 \\ \hline \multicolumn{7}{l}{\(\lambda_{max}\) = 5.024} \\ \multicolumn{7}{l}{C.R. = 0.533\%} \\ \hline \end{tabular}
\end{table}
Table 4: A pairwise comparison matrix of intensity levels
\begin{table}
\begin{tabular}{l|l|l|l|l|l|l|l} \hline & D1 & D2 & D3 & D4 & D5 & D6 & Weight \\ \hline D1 & 1 & 3 & 1/2 & 4 & 2 & 1/3 & 0.140 \\ \hline D2 & 1/3 & 1 & 1/7 & 1 & 1/2 & 1/9 & 0.041 \\ \hline D3 & 2 & 7 & 1 & 9 & 5 & 1/2 & 0.290 \\ \hline D4 & 1/4 & 1 & 1/9 & 1 & 1/2 & 1/9 & 0.038 \\ \hline D5 & 1/2 & 2 & 1/5 & 2 & 1 & 1/6 & 0.071 \\ \hline D6 & 3 & 9 & 2 & 9 & 6 & 1 & 0.420 \\ \hline \multicolumn{8}{l}{\(\lambda_{max}\) = 6.048} \\ \hline \multicolumn{8}{l}{C.R. = 0.762\% \(<10\%\)} \\ \hline \end{tabular}
\end{table}
Table 3: Weights calculation of six clusters
Because the consistency check is satisfied, the weights of these intensities are used to construct an importance scale. Then, using this scale, we compare each disorder one by one, assigning various weights to each disorder attribute from the clinician's perspective.
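A minimal sketch of this scale-based assignment is given below; it is not part of the original paper, and the disorder-to-intensity judgments are hypothetical. Each suspected disorder is matched to one of the five intensity levels of Example 5 and inherits the weight of that level from Table 4.

```python
# Minimal sketch (not from the paper): assigning disorder weights with the
# importance scale of Example 5. The disorder-to-intensity judgments below
# are hypothetical and only illustrate the procedure.
scale = {"A": 0.450, "B": 0.277, "C": 0.147, "D": 0.081, "E": 0.046}  # Table 4

judgments = {       # clinician matches each suspected disorder to a level
    "d1": "B",      # matched
    "d2": "A",      # significantly matched
    "d3": "D",      # weakly matched
    "d4": "C",      # moderately matched
    "d5": "E",      # not matched
}

weights = {d: scale[level] for d, level in judgments.items()}
print(weights)      # numerical weights ready for the three-way evaluation
```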
## 5 Three-Way Evaluation based CSA Analysis
The 3WD [30] is based on dividing the universe into three regions and employing different strategies in each. The result of a qualitative or quantitative CSA analysis is a ranking list or a set of numerical weights, which is significant but difficult for a physician to use directly in making a choice. In this part, these findings will be processed and classified into three pair-wise disjoint classes with varying levels of relevance, namely high importance, medium importance, and low importance. We will refer to these three classes as H, M, and L for the rest of this study. We chose three classes because human cognition and problem-solving rely on a three-way divide, which allows us to convert complexity into simplicity in a variety of scenarios [23].
### Trisecting a Disorder Set based on Thresholds
The Research Domain Criteria (RDoC) initiative, considering the possibly increasing need to construct various thresholds for different purposes, is trying to gather information consistently so that thresholds can be set in diagnostic systems of mental disorders where this is relevant, especially for particular research purposes, applications in clinical settings, or health policymaking [33]. Our current study acknowledges this concern and suggests insightful paths for formulating thresholds in mental disorder classification during the diagnostic process.
Using two percentiles is one method for trisecting a disorder set. The first step is to derive a linear order \(>\) from the qualitative or quantitative analytical result; this phase can be bypassed if the outcome of the qualitative method is already a linear order. The second step is to apply a pair of percentile-based thresholds to identify the three regions.
There are various methods for transforming qualitative and quantitative findings into a linear order. The first is topological sorting, which states that an element will not appear in a ranking list until all other items that are preferable to it have been listed [32]. We can generate a decreasing ranking list by utilizing topological sorting. Another option is to use an assessment function to convert qualitative and quantitative analytical results into a set of diseases' evaluation status values (ESVs). The ESV of disease d is defined as follows:
\[\text{v(d)}=\frac{|\{x\in Ds|d>x\}|}{|\,Ds|} \tag{14}\]
Illnesses will be sorted in decreasing order depending on their ESVs, with diseases with the same ESV being listed in any order.
Now, we have a list of ESVs of the form v\({}_{1}\), v\({}_{2}\), ..., v\({}_{n}\), where v\({}_{1}\) is the largest value and v\({}_{n}\) is the smallest value. Using the ranking lists of the above two methods, we then adopt two ESVs at the \(\alpha^{\text{th}}\) and \(\beta^{\text{th}}\) percentiles, with 0 \(<\)\(\beta\)\(<\)\(\alpha\)\(<\) 100, to calculate a pair of thresholds \(h\) and \(l\) as:
\[\begin{array}{l}h=\mathrm{v}_{\lceil\beta n/100\rceil},\\ l=\mathrm{v}_{\lfloor\alpha n/100\rfloor}.\end{array} \tag{15}\]
Where the ceiling function \(\lceil x\rceil\) gives the smallest integer that is not less than x, and the floor function \(\lfloor x\rfloor\) gives the largest integer that is not greater than x. The floor and ceiling functions are necessary because \(\beta\)n/100 and \(\alpha\)n/100 may not be integers [30].
Three regions, H, M, and L, may be created using the descending ranking list and two thresholds. Disorders in the H region are of high priority, disorders in the M region are of moderate priority, and disorders in the L zone are of low priority.
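The percentile-based trisection can be sketched as follows (illustrative code, not from the paper; the preference relation reuses Example 1 and the percentile values \(\alpha\), \(\beta\) are arbitrary choices). ESVs are computed with Eq. (14), the disorders are ranked in decreasing ESV order, and the thresholds of Eq. (15) cut the list into the H, M, and L regions.

```python
# Minimal sketch (not from the paper): percentile-based trisection of Sect. 5.1.
# The preference relation reuses Example 1; alpha and beta are arbitrary choices.
import math

Ds = ["d1", "d2", "d3", "d4", "d5"]
prefs = {("d3", "d1"), ("d3", "d5"), ("d3", "d4"), ("d3", "d2"),
         ("d1", "d5"), ("d1", "d4"), ("d1", "d2"),
         ("d5", "d4"), ("d5", "d2"), ("d4", "d2")}

# Evaluation status value, Eq. (14): fraction of disorders that d is preferred to.
esv = {d: sum(1 for (a, _) in prefs if a == d) / len(Ds) for d in Ds}
ranked = sorted(Ds, key=lambda d: esv[d], reverse=True)
values = [esv[d] for d in ranked]                  # v_1 >= v_2 >= ... >= v_n

alpha, beta, n = 80, 30, len(Ds)
h = values[math.ceil(beta * n / 100) - 1]          # high threshold, Eq. (15)
l = values[math.floor(alpha * n / 100) - 1]        # low threshold, Eq. (15)

# Boundary membership follows the same convention as Eq. (19).
H = [d for d in ranked if esv[d] >= h]
M = [d for d in ranked if l < esv[d] < h]
L = [d for d in ranked if esv[d] <= l]
print(H, M, L)                                     # ['d3', 'd1'] ['d5'] ['d4', 'd2']
```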
### Trisecting a Disease Set Based on a Statistical Method
Yao and Gao [30] examined the statistical procedure of building and evaluating the three regions. The mean and standard deviation are statistical tools for examining numerical values and may be applied to the findings of a quantitative CSA study. Suppose w(d\({}_{1}\)), w(d\({}_{2}\)), ..., w(d\({}_{n}\)) are the weights of the disorders in Ds and n is the cardinality of Ds; the mean and standard deviation are calculated by:
\[\mu=\frac{1}{n}\sum_{i=1}^{n}w(d_{i}), \tag{16}\]

\[\sigma=\left(\frac{1}{n}\sum_{i=1}^{n}\left(w(d_{i})-\mu\right)^{2}\right)^{\frac{1}{2}}. \tag{17}\]
We use two non-negative numbers k\({}_{1}\) and k\({}_{2}\) to represent the position of thresholds away from the mean, then a pair of thresholds is determined as [30]:
\[\mathrm{h}=\mu+k_{1}\sigma,\,k_{1}\geq 0,\] \[\mathrm{l}=\mu-k_{2}\sigma,\,k_{2}\geq 0. \tag{18}\]
Based on thresholds h and l, three regions of a disorder set can be constructed as:
\[\mathrm{H}_{\left(k1,\,k2\right)}(\mathrm{w}) =\left\{x\in\mathrm{Ds}\ |\mathrm{w(x)}\geq\mathrm{h}\right\}\] \[=\left\{x\in\mathrm{Ds}\ |\mathrm{w(x)}\geq\mu+k_{1}\sigma\right\},\] \[\mathrm{M}_{\left(k1,\,k2\right)}(\mathrm{w}) =\left\{x\in\mathrm{Ds}\ |\mathrm{l}<\mathrm{w(x)}<\mathrm{h}\right\}\] \[=\left\{x\in\mathrm{Ds}|\mu-k_{2}\sigma<\mathrm{w(x)}<\mu+k_{1} \sigma\right\},\] \[\mathrm{L}_{\left(k1,\,k2\right)}(\mathrm{w}) =\left\{x\in\mathrm{Ds}\ |\mathrm{w(x)}\leq\mathrm{l}\right\} \tag{19}\] \[=\left\{x\in\mathrm{Ds}\ |\mathrm{w(x)}\leq\mu-k_{2}\sigma\right\}\]
Disorders can be categorized into three regions H, M, and L considering their weights.
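A short sketch of the statistical trisection is given below (illustrative, not part of the original study; the disorder weights are borrowed from the cluster weights of Table 3 for concreteness, and k1, k2 are arbitrary choices).

```python
# Minimal sketch (not from the paper): statistical trisection of Sect. 5.2.
# Disorder weights are borrowed from the Table 3 cluster weights; k1, k2 are
# arbitrary illustrative choices.
import statistics

w = {"d1": 0.140, "d2": 0.041, "d3": 0.290, "d4": 0.038, "d5": 0.071, "d6": 0.420}
k1, k2 = 0.5, 0.5

mu = statistics.fmean(w.values())          # Eq. (16)
sigma = statistics.pstdev(w.values())      # Eq. (17), population standard deviation
h, l = mu + k1 * sigma, mu - k2 * sigma    # Eq. (18)

H = [d for d, v in w.items() if v >= h]    # Eq. (19)
M = [d for d, v in w.items() if l < v < h]
L = [d for d, v in w.items() if v <= l]
print(H, M, L)                             # ['d3', 'd6'] ['d1'] ['d2', 'd4', 'd5']
```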
## 6 Discussion and Future Directions
We have emphasized that, in the paradigm proposed in this research, mental symptoms and signs are passively observed. However, in terms of phenomenological psychopathology, clinicians can also examine patient symptoms and signs using empathic techniques [46]. Additionally, mental disorders are defined in an operational manner in mainstream diagnostic systems (such as ICD-11 and DSM-5) and are not based on biological indicators. Psychiatric diagnoses therefore correspond to practical or fuzzy kinds rather than natural kinds. Moreover, such operational definitions are related to language games in the sense of Wittgenstein's philosophy of language [47]. Specifically, instances sharing a single diagnosis might be linked by a chain of meanings rather than being supported by a single biological foundation, as disease essentialism would require. With the addition of psychiatrist-patient interaction (i.e., the psychiatrist's empathic approach) and/or its influence on the classification as future work or an extension of this study, the proposed classification model of mental disorders based on the clinicians' subjective approach and 3WD can be further refined.
Different paradigms, such as pointing graphs, can be used to develop the disorder ranking procedure. To calculate the weight value of each disorder cluster, the analytic hierarchy process can be adopted in lieu of the eigenvector method and the results compared. In future work, clinicians will need to quantify the weight values for diseases using a three-level structure when the number of intensity degrees exceeds 9. For the time being, this study offers a theoretical approach to this complex problem related to psychiatric diagnosis. Practical implications from both the qualitative and quantitative perspectives should be explored in further studies to verify that the proposed method outperforms existing methods.
## 7 Conclusions
The most widely used method of mental disorder classification is a data-driven, manual-based method, yet it has a number of drawbacks. We offer a three-part unified model for clinicians' subjective approach (CSA) analysis based on three-way decision. In the qualitative analysis, we use binary relations and the TAO model to rank mental disorders according to clinicians' criteria-based preferences. The quantitative analysis employs the three-level computing paradigm, and the eigenvector technique is utilized to assign numerical weights to mental disorders. Finally, we categorize the results of the qualitative and quantitative analyses into three groups according to their relative importance.
|
2309.11163 | Composition-dependent absorption of radiation in semiconducting MSi2Z4
Monolayers | The recent synthesis of MoSi2N4 material, along with theoretical predictions
encompassing the entire family of chemical analogs, has opened up a new array
of low-dimensional materials for a diverse range of optoelectronics and
photovoltaics applications. In this study, we conducted state-of-the-art
many-body first-principles calculations to analyze the quasi-particle
electronic structure of the material class MSi2Z4 (where M = Mo, W, and Z = N,
P, As, Sb). All monolayers display a direct band gap at the K point, with the
exception of MoSi2N4. In tungsten-based compounds, the fundamental-gap can be
adjusted over a significantly broader energy range compared to their
molybdenum-based counterparts. Additionally, increasing atomic weight of the Z,
both the band gap and exciton binding energies decrease. A noteworthy feature
is the absence of a lateral valley ({\Lambda} or Q) near the conduction band
minimum, indicating potential higher photoluminescence efficiencies compared to
conventional transition-metal dichalcogenide monolayers. The optical spectra of
these materials are predominantly characterized by tightly bound excitons,
leading to an absorption onset in the visible range (for N-based) and in the
infrared region (for others). This diversity offers promising opportunities to
incorporate these materials and their heterostructures into optoelectronic
devices, with tandem solar cells being particularly promising. | Muhammad Sufyan Ramzan, Tomasz Woźniak, Agnieszka Kuc, Caterina Cocchi | 2023-09-20T09:20:11Z | http://arxiv.org/abs/2309.11163v1 | # Composition-dependent absorption of radiation in semiconducting MSiZ\({}_{4}\) monolayers
###### Abstract
The recent synthesis of MoSi\({}_{2}\)N\({}_{4}\) material, along with theoretical predictions encompassing the entire family of chemical analogs, has opened up a new array of low-dimensional materials for a diverse range of optoelectronics and photovoltaics applications. In this study, we conducted state-of-the-art many-body first-principles calculations to analyze the quasi-particle electronic structure of the material class MSi\({}_{2}\)Z\({}_{4}\) (where M = Mo, W, and Z = N, P, As, Sb). All monolayers display a direct band gap at the K point, with the exception of MoSi\({}_{2}\)N\({}_{4}\). In tungsten-based compounds, the fundamental-gap can be adjusted over a significantly broader energy range compared to their molybdenum-based counterparts. Additionally, increasing atomic weight of the Z, both the band gap and exciton binding energies decrease. A noteworthy feature is the absence of a lateral valley (\(\Lambda\) or Q) near the conduction band minimum, indicating potential higher photoluminescence efficiencies compared to conventional transition-metal dichalcogenide monolayers. The optical spectra of these materials are predominantly characterized by tightly bound excitons, leading to an absorption onset in the visible range (for N-based) and in the infrared region (for others). This diversity offers promising opportunities to incorporate these materials and their heterostructures into optoelectronic devices, with tandem solar cells being particularly promising.
## 1 Introduction
Graphene was successfully exfoliated from its bulk form in 2004,[1] sparking a surge of interest in other two-dimensional (2D) materials, which offer a variation of electronic properties.[2, 3, 4, 5, 6, 7, 8, 9, 10, 11] The current catalogue of 2D materials offers a diverse range of insulating, semiconducting, semi-metallic, and metallic monolayers (MLs).[12, 13, 14, 15, 16, 17] Among the reported 2D materials, transition metal dichalcogenides (TMDCs) have been extensively studied due to their direct band gaps, high carrier mobilities, and stability in ambient conditions.[18, 4, 19] However, their optoelectronic properties are limited by the presence of a lateral valley, so-called \(\Lambda\) or Q, near the conduction band minimum (CBM), which provides non-radiative recombination sites for the excited carriers, thus, reducing the photoluminescence efficiency.[20, 21, 22, 23] Several methods have been proposed to suppress this non-radiative recombination channel and, thus, enhance the quantum yield of TMDC MLs,[24, 25, 22] but despite these efforts, additional research is needed to make them ready for commercial optoelectronic applications. One possible strategy to overcome current limitations of these materials is to interface them with other organic and inorganic semiconductors with smaller band gap to form so-called "tandem stacks" that lead to improved solar cell efficiency by means of photon upconversion.[26, 27] However, despite the efforts devoted to improve the performance of TMDCs, the search for other 2D semiconductors is still an active area of research.
Recently, a new 2D material, MoSi\({}_{2}\)N\({}_{4}\), with a crystal structure analogous to TMDCs, was synthesized using the chemical vapor deposition method[28]. It belongs to the \(P\overline{6}m2\) space group and has a thickness of seven atomic planes with the MoN\({}_{2}\) layer sandwiched between two layers of SiN, see **Figure 1**. MoSi\({}_{2}\)N\({}_{4}\) has an indirect bandgap of 1.94 eV and exhibits excitonic transitions at 2.21 eV (the so-called A resonance) and 2.35 eV (B resonance) originating from the spin-splitting of the valence band maximum (VBM), similar to TMDCs.[29, 30, 28] Moreover, MoSi\({}_{2}\)N\({}_{4}\) has high electron (270 cm\({}^{2}\) V\({}^{-1}\)s\({}^{-1}\)) and hole (1200 cm\({}^{2}\) V\({}^{-1}\)s\({}^{-1}\)) mobilities, resulting in a high on-off ratio of 4000 at 77 K in a field-effect transistor.[28] A recent theoretical study has demonstrated that its band edges are well protected by the local environment.[31] Moreover, density functional theory calculations predicted an entire class of 2D analogs, with a general formula of MA\({}_{2}\)Z\({}_{4}\), where M indicates Group-2 or transition metals, A stands for elements belonging to Group-13 or -14, and Z species
of Group-15 or -16. Similar to TMDCs, different structural phases of \(\mathrm{MSi_{2}Z_{4}}\) have also been proposed and investigated.[32]
So far, numerous ground-state calculations for several members of this new material class have been reported.[33, 34, 35, 28, 36] Most of these studies showed that \(\mathrm{MA_{2}Z_{4}}\) MLs have direct band gaps at the high-symmetry point K, with the exception of \(\mathrm{MoSi_{2}N_{4}}\) and \(\mathrm{WSi_{2}N_{4}}\) that both exhibit indirect band gaps with VBM at \(\Gamma\) point. It is worth highlighting that, unlike the TMDCs, \(\mathrm{MSi_{2}Z_{4}}\) do not have the \(\Lambda\) valley near the CBM, which is responsible for detrimental electron-hole recombination in TMDCs, as discussed above. The absence of the \(\Lambda\) valley between \(\Gamma\) and K suggests these materials as potential candidates for optoelectronics and photovoltaics. However, a detailed investigation of the electronic and optical characteristics of these systems based on state-of-the-art _ab initio_ methods is necessary to substantiate this intuitive claim. To date only a few reliable studies of this kind are available in the literature.[37, 38, 39, 31]
In this work, we present a systematic study of the electronic and excitonic properties of \(\mathrm{MSi_{2}Z_{4}}\) (M = Mo, W and Z = N, P, As, and Sb) family of MLs carried out in the framework of density functional theory (DFT) and many-body perturbation theory (GW approximation and Bethe-Salpeter equation). We analyze the trends obtained at varying composition for the electronic and optical band gaps as well as for the excitons and their binding energies, identifying trends that enable engineering of these materials and their properties to maximize their optical performance. We find that the optical onset of the N-based MLs is in the visible region, while for the others, it is shifted to the infra-red (IR), suggesting intriguing perspectives for efficient tandem solar cells based on vertically stacked heterostructures of these systems.
## 2 Results and Discussion
### 2.1 Structural properties
\(\mathrm{MSi_{2}Z_{4}}\) ML systems have a hexagonal crystal structure belonging to the \(\mathrm{D_{3h}}\) point group. Its unit cell includes seven atomic layers, with the \(\mathrm{MZ_{2}}\) layer sandwiched between two Si-Z layers (see **Figure 1a**, b). The optimized in-plane lattice constants \(a=b\) (see **Table 1**) are sensitive to the Z
elements and increase monotonically as their atomic mass increases. Similar to the TMDCs in the 2H phase, MLs with the same Z element, but different metal atom M, have nearly identical lattice constants, such that, e.g., MoSi\({}_{2}\)P\({}_{4}\) and WSi\({}_{2}\)P\({}_{4}\) have the same lattice constant equal to 3.454 A. This interesting feature promises the creation of Mo- and W- based heterostructures without lattice mismatch. Our calculated lattice constants agree well with earlier experimental and theoretical reports[31, 36, 38]. All the MLs are predicted to be mechanically stable[31, 36, 28]. The first Brillouin zone is hexagonal (see **Figure 1c**) with the usual high symmetry points \(\pm\)K indicating valleys with opposite polarization (**Figure 1d**).
### 2.2 Electronic properties
The band structures of the considered MSi\({}_{2}\)Z\({}_{4}\) monolayers are shown in **Figure 2**. These results, obtained at the PBE level including SOC, exhibit an underestimated band gap, due to the known shortcomings of this semi-local functional. Yet, the qualitative picture provided by these results is consistent with the one delivered by more sophisticated methods such as GW, see Figure S7
Figure 1: (a, b) Top and side views of the MSi\({}_{2}\)Z\({}_{4}\) ML atomic structure with the unit cell boundaries marked by dashed lines. (c) Brillouin zone with the path connecting high symmetry points indicated in light blue. (d) Schematic depiction of \(\pm\)K valley polarization.
and S8. The VBM and CBM of all monolayers are situated at the K point, thus giving rise to direct band gaps, except for MoSi\({}_{2}\)N\({}_{4}\) and WSi\({}_{2}\)N\({}_{4}\) which have an indirect band gap with the VBM at the \(\Gamma\) point. In these materials, the highest valence band has a maximum at the K point 234 meV (for MoSi\({}_{2}\)N\({}_{4}\)) and 50 meV (for WSi\({}_{2}\)N\({}_{4}\)) below the VBM. In all other monolayers, the K valley is always higher than the \(\Gamma\) valley and their difference increases for heavier Z elements. We also notice a dependence of the SOC splitting of the valence and conduction bands on the composition of the MLs. In particular, heavier Z elements increase the VBM splitting, leading to an overall decrease of the band gap. The conduction-band splitting is only noticeable for MSi\({}_{2}\)As\({}_{4}\) and MSi\({}_{2}\)Sb\({}_{4}\), but it is still below 35 meV even for the latter system including the heavier element Sb. The corresponding values of band gaps and spin splitting are given in **Table 1**. The geometrical and electronic parameters and their trends agree with previous studies[31, 32, 36].
\begin{table}
\begin{tabular}{c c c c c c c c} & & & SOC- & \multicolumn{3}{c}{Band gap (eV)} & B.E. \\ System & \(d\) (Å) & \(a=b\) (Å) & splitting of & & & & (eV) \\ & & & v/c (meV) & PBE & GW & Optical & \\ MoSi\({}_{2}\)N\({}_{4}\) & 7.0 & 2.900 & 130/3 & 1.781 (2.015) & 2.720 & 2.470 & 0.389 \\ MoSi\({}_{2}\)P\({}_{4}\) & 9.4 & 3.454 & 138/4 & 0.620 & 1.067 & 0.859 & 0.208 \\ MoSi\({}_{2}\)As\({}_{4}\) & 9.9 & 3.597 & 181/16 & 0.528 & 0.881 & 0.693 & 0.188 \\ MoSi\({}_{2}\)Sb\({}_{4}\) & 10.9 & 3.879 & 226/25 & 0.263 & 0.495 & 0.380 & 0.115 \\ WSi\({}_{2}\)N\({}_{4}\) & 7.0 & 2.889 & 400/10 & 2.110 (2.160) & 3.047 & 2.624 & 0.423 \\ WSi\({}_{2}\)P\({}_{4}\) & 9.4 & 3.454 & 439/6 & 0.300 & 0.652 & 0.452 & 0.200 \\ WSi\({}_{2}\)As\({}_{4}\) & 9.9 & 3.599 & 503/25 & 0.211 & 0.467 & 0.291 & 0.176 \\ WSi\({}_{2}\)Sb\({}_{4}\) & 10.9 & 3.884 & 510/19 & 0.031 & 0.178 & 0.019 & 0.159 \\ \end{tabular}
\end{table}
Table 1: Optimized lattice constants of the hexagonal unit cells of the MSi\({}_{2}\)Z\({}_{4}\) MLs, \(a\) = \(b\), layer thickness \(d\), SOC-induced splitting of the highest valence band (v) and lowest conduction band (c) at the K point, PBE and GW gap (direct gap in parenthesis if the fundamental gap is indirect), as well as optical band gap corresponding to the lowest bright excitation predicted by the BSE, and its binding energy (B.E.).
The most striking feature in the electronic structures of MSi\({}_{2}\)Z\({}_{4}\) is the absence of the \(\Lambda\) valley between the high-symmetry points \(\Gamma\) and K near the CBM, in contrast to conventional TMDCs, [18, 19, 40, 41] see Figure 2 and S1. It is worth noting that a feature analogous to the \(\Lambda\) valley is still present in the band structure of the MSi\({}_{2}\)N\({}_{4}\) (see Figure 2), but it is energetically much higher than the CBM at K compared to the \(\Lambda\)-valley in the TMDCs. The residual presence of this band minimum in the conduction region of the MSi\({}_{2}\)N\({}_{4}\) can be explained by the largely different electronegativities of Si (1.8) and N (3.0) compared to Si and the heavier Z elements. For the same reason, an additional valley also appears at the M point, which is dominated by the Si and N p orbitals (see Fig. S5 in ref. [36]). For W-based MLs, these two valleys are closer to the CBM than in the Mo-based ones. In all other MLs, with heavier Z elements than N, the \(\Lambda\) and M valleys disappear and a new valley appears (CBM+1) above the CBM, composed of \(p\) and \(d\) orbitals of the Z and M atoms, respectively.[36] Furthermore, with heavier Z elements, the spin-splitting of CBM+1 valley increases and the energy difference between CBM and CBM+1 valley at K decreases. The \(\Lambda\) valley, when energetically close to the K valley, provides nonradiative recombination channels, and hence, suppresses the photoluminescence quantum yield.[42, 43, 41, 25] Due to the absence of this feature in monolayers MSi\({}_{2}\)Z\({}_{4}\), excellent optical performance of these systems can be anticipated.
It is worth noting that for the Mo- based monolayers, the dispersion of valence valleys at the K point changes insignificantly when going towards heavier Z atoms, but the dispersion of conduction valleys at K drastically increases from N to Sb. As mentioned above, for all Z except N, the CBM+1 valley at K moves downward towards CBM for heavier Z atoms. This shift is accommodated by increasing the dispersion of CBM valley. As a result, the larger is the shift of CBM+1 valley, the larger is the dispersion of CBM at K. However, for W-based MLs, the change in the dispersion of CBM valley is only prominent in case of ML WSi\({}_{2}\)Sb\({}_{4}\). This discrepancy can be understood by observing the splitting of CBM valley between K-\(\Gamma\) and K-M. This splitting increases with heavier Z elements and protects the dispersion CBM valley by deforming dispersion of CBM+1 valley. For ML WSi\({}_{2}\)Sb\({}_{4}\), due to large splitting of both CBM and CBM+1 valleys, the dispersion of both valleys changes significantly.
Next, the elemental composition of the frontier bands was analyzed for an exemplary system (ML WSi\({}_{2}\)P\({}_{4}\)), which revealed that the VBM and CBM are mainly composed of different orbitals of the W (M) atoms, see **Figure S4**. The isosurface of the second highest valley (VBM-1) at the \(\Gamma\) point is mainly composed of dz\({}^{2}\) orbitals of the W atoms, with a considerable portion of the isosurface on the A-Z bond oriented along the layer thickness. Changing the Z atom increases the layer thickness and results in a different overlap of these two isosurfaces. This different extent of overlap moves the \(\Gamma\) valley on the energy scale[44, 45], which controls the size and nature of the band gap. Surprisingly, the outer layers have no contribution to the frontier states, which may be the reason why the band edges are well protected by the local environment.[31]
Figure 2: Band structures of MSi\({}_{2}\)Z\({}_{4}\) MLs calculated at the PBE level of theory including spin-orbit coupling for Mo (top panel) and W (bottom panel) based systems. The Fermi level, set to zero in the mid-gap, is marked by a dashed line. The fundamental gap is indicated by a blue arrow. In the upper leftmost panel, the \(\Lambda\) valley is marked.
The trends in the band gap size, the frontier band dispersion, and the spin-splitting remain unchanged within the GW approximation, and thus WSi\({}_{2}\)N\({}_{4}\) (WSi\({}_{2}\)Sb\({}_{4}\)) has the largest (smallest) QP gap, see Table 1. The only qualitative change occurs in the electronic structure of WSi\({}_{2}\)N\({}_{4}\), where the VBM shifts from the \(\Gamma\) to the K point, hence turning it into a direct gap semiconductor with a band gap of 3.05 eV. A similar indirect-direct transition for ML WSi\({}_{2}\)N\({}_{4}\) was also reported when the band structure was corrected with hybrid (HSE06) functionals [36]. However, ML MoSi\({}_{2}\)N\({}_{4}\) remains indirect within the GW approximation, but the energy difference between the valence band maxima at the K and \(\Gamma\) points reduces to 139 meV. Since these values are sensitive to the convergence parameters of the GW calculations, we checked them carefully, as reported in the SI.
### 2.3 Optical absorption spectrum and exciton wave functions
To assess the optical performance of MSi\({}_{2}\)Z\({}_{4}\) MLs, the optical absorption spectra were computed by solving the BSE, see results in **Figure 3** and **Table 1**. The N-containing MLs are characterized by the first excitation in the visible range, see **Figure 3**a and **e**, while the other ones have their onset in the IR region, see **Figure 3**. The lowest energy excitons in all materials stem from the transition between the VBM and the CBM, see detailed analysis in the SI, **Figure S7** and **Figure S8**.
Above the absorption onset, all materials exhibit large absorption with pronounced maxima at the upper boundary of the visible region (see Figure 3). Interestingly, these features overlap well with the lowest-energy peaks in the spectra of the N-containing MLs, suggesting the possibility of stacking them with other, heavier members of this material family (e.g., containing P or As) to form tandem solar cells. To check the accuracy of our calculations, we compared the calculated energies of the 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) excitations with the available experimental data [28]. Hong et al. [28], reported the 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) peaks of MoSi\({}_{2}\)N\({}_{4}\) at 2.210 and 2.350 eV, respectively, corresponding to the two direct excitations originating from the VBM splitting of 140 meV. In our calculations, the 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) excitation peaks of MoSi\({}_{2}\)N\({}_{4}\) are at 2.470 and 2.618 eV which are slightly blue-shifted and exhibit a similar splitting to the experimental one. For reference, the VBM splitting of MoSi\({}_{2}\)N\({}_{4}\) at the PBE level is 130 meV.
The energies of the first excitation and the binding energies decrease with increasing mass of Z, while the oscillator strength follows the opposite trend. In general, the W-based MLs exhibit excitations with larger intensities than their Mo-based counterparts. The lowest-energy excitons of WSi\({}_{2}\)N\({}_{4}\) and MoSi\({}_{2}\)N\({}_{4}\) (WSi\({}_{2}\)Sb\({}_{4}\) and MoSi\({}_{2}\)Sb\({}_{4}\)) have the largest (smallest) binding energies, equal to 450 and 390 meV (160 and 120 meV), respectively. Despite the larger thickness of MA\({}_{2}\)Z\({}_{4}\), their binding energies and band gaps follow the linear scaling law previously reported for conventional 2D materials, see **Figure 4a**[46]. It should be noted that the values of the binding energies are highly sensitive to the convergence parameters employed in the GW and BSE calculations. For a thorough discussion in this regard, see **Figure S2-3** and **Figure S5-6** for the convergence of the GW band gap and binding energy.
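The linear relation between binding energy and quasi-particle gap can be checked directly from the values in Table 1. The short NumPy script below (illustrative post-processing, not part of the original calculations) fits the binding energies of the eight monolayers against their GW gaps; for the direct-gap systems the binding energy is simply the difference between the GW and optical gaps.

```python
# Minimal sketch (not part of the original calculations): linear fit of the
# exciton binding energy versus the GW gap, using the Table 1 values.
import numpy as np

# (GW gap, binding energy) in eV for the eight MSi2Z4 monolayers of Table 1.
gw_gap = np.array([2.720, 1.067, 0.881, 0.495, 3.047, 0.652, 0.467, 0.178])
e_bind = np.array([0.389, 0.208, 0.188, 0.115, 0.423, 0.200, 0.176, 0.159])

slope, intercept = np.polyfit(gw_gap, e_bind, 1)
print(f"E_b ~ {slope:.3f} * E_gap + {intercept:.3f} eV")

# For the direct-gap monolayers E_b is just the GW gap minus the optical gap,
# e.g. MoSi2P4: 1.067 - 0.859 = 0.208 eV.
```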
In addition to the usual analysis of the exciton binding energies _versus_ the QP gap, we inspected the trends for E\({}_{\text{b}}\) with respect to the static dielectric screening calculated from the random-phase approximation entering the solution of the BSE.[47, 48] From the results plotted in **Figure 4b**, we notice that, contrary to the intuition based on the number of electrons in the metal atoms, W-containing MLs feature a lower screening than their Mo-based counterparts. We assign this behavior to the structural characteristics of the MSi\({}_{2}\)Z\({}_{4}\). As shown in **Table 1**, the thickness of the MLs ranges from 7.0 to 10.9 A, for going towards heavier Z elements, but it does not depend on the M atoms, which are buried in the inner layer. Due to the same thickness of MLs with same
Figure 3: Optical absorption spectra of the considered MSi\({}_{2}\)Z\({}_{4}\) MLs. The vertical dashed lines indicate the GW band gaps and the solid blue bars the first excitation. For reference, the visible range, together with the adjacent IR and UV regions, is marked in the middle.
non-metallic element, MoSi\({}_{2}\)Z\({}_{4}\) are thus more compact than WSi\({}_{2}\)Z\({}_{4}\) and, as such, feature a larger polarizability.
Examining now the results shown in **Figure 4**b, we find that the binding energies decrease with increasing size of the Z element, as the layer thickness becomes larger and the QP gap smaller. The approximately linear behavior identified with respect to the gap in **Figure 4**a is not reproduced for the static screening. However, the trend shown in **Figure 4**b suggests closer similarities among the same non-metallic compositions (MoSi\({}_{2}\)Z\({}_{4}\) and WSi\({}_{2}\)Z\({}_{4}\)) at varying Z than among materials with the same metal element. Again, this finding is promising in view of constructing heterostructures based on these systems: choosing equal non-metallic compositions, which also exhibit negligible lattice mismatch as discussed above, leads to a bilayer in which approximately the same amount of energy is required to dissociate the excitons on either side of the interface. This prediction holds particularly for P- and As-containing materials.
Figure 4: Binding energies of the lowest-energy exciton plotted with respect to a) the QP gap and b) the static screening. The linear fitting is indicated by the red dashed line.
## 3 Summary and Conclusions
In summary, we have investigated with first-principles many-body calculations the quasi-particle electronic structure and the optical properties of MSi\({}_{2}\)Z\({}_{4}\) monolayers with M = Mo, W and Z = N, P, As, Sb. All systems, with the exception of MoSi\({}_{2}\)N\({}_{4}\), have a fundamental direct band gap with the valence band maximum (VBM) and conduction band minimum (CBM) at the K point, ranging from about 3 eV in WSi\({}_{2}\)N\({}_{4}\) down to about 0.15 eV in WSi\({}_{2}\)Sb\({}_{4}\). MoSi\({}_{2}\)N\({}_{4}\) features an indirect QP gap of approximately 2.7 eV with the VBM at \(\Gamma\): it differs by about 200 meV from the optical gap. Upon incrementing the mass of Z, the spin-orbit splitting increases, leading to a decrease of the band gap. WSi\({}_{2}\)N\({}_{4}\) has the largest band gap and WSi\({}_{2}\)Sb\({}_{4}\) offers the smallest band gap. Unlike conventional TMDCs, MSi\({}_{2}\)Z\({}_{4}\) do not exhibit the \(\Lambda\) valley in the conduction region, which is known as a channel for nonradiative recombination that quenches the photoluminescence efficiency. The considered materials have an intriguing composition-dependent absorption. The spectra of MoSi\({}_{2}\)N\({}_{4}\) and WSi\({}_{2}\)N\({}_{4}\) have their absorption onset in the visible region, while all the other materials absorb IR radiation. All monolayers exhibit intense excitations at the onset, indicating that they are good light absorbers. The first exciton stems from the transition between the VBM and CBM, meaning that the electron and hole dissociated from the exciton will be localized. Exciton binding energies range from 0.42 eV in WSi\({}_{2}\)N\({}_{4}\) to 0.12 eV in MoSi\({}_{2}\)Sb\({}_{4}\), much lower than those of the corresponding TMDC MLs. In general, N- (Sb-) based monolayers have the largest (smallest) binding energy, which decreases linearly when changing the Z element from N to Sb, following a linear scaling law between the band gaps and binding energies in 2D materials[46]. By contrasting binding energies with the calculated value of the static screening, one does not notice an equally clear trend. Nonetheless, important findings include the larger dielectric screening of the Mo-based MLs compared to the W-based ones, due to the more compact structure of the former, as well as the larger similarity exhibited by materials differing only by the metal atom.
The results of this study indicate MSi\({}_{2}\)Z\({}_{4}\) as a family of favorable light absorbers at energies that vary with their composition. The systems with Z = P, As, Sb, absorbing IR radiation, can be favorably combined with other low-dimensional semiconductors, such as conventional TMDCs or even MSi\({}_{2}\)N\({}_{4}\), that absorb in the visible range to form tandem solar cells. The combination of
several MSi\({}_{2}\)Z\({}_{4}\) layers, also with different compositions, is particularly attractive because the lattice parameter is independent of the metal atom for a given combination of A and Z atoms. Dedicated studies exploring these heterostructures are needed to assess their behavior, but our results provide a suitable basis to conduct this analysis.
## 4 Computational details
All calculations presented in this work were performed within the framework of DFT [49] and many-body perturbation theory[47] as implemented in the Vienna ab-initio simulation (VASP) code [50]. The interactions between electrons and nuclei in the Kohn-Sham (KS) equations [51] were treated with the projector augmented wave (PAW) method [52]. In all calculations, the generalized gradient approximation for the exchange correlational potential as proposed by Perdew, Burke, and Ernzerhof (PBE) [53] was employed, along with Grimme's DFT-D3 dispersion correction [54] to account for the contributions of van der Waals (vdW) interactions. Spin-orbit coupling (SOC) was included in all calculations. To minimize spurious interactions between periodic images, a vacuum of ~15 A was inserted along the non-periodic lattice vector. The unit-cell parameters and the atomic positions were optimized using a \(\Gamma\)-centered 12x12x1 **k**-point mesh with a plane wave energy cutoff of 500 eV. The structures were optimized until the residual interatomic forces were less than 10 meV A-1, with an electronic self-consistency convergence threshold of 10\({}^{-8}\) eV. Crystal structures were visualized using VESTA [55].
The optical properties were calculated from the GW approximation and the solution of the Bethe-Salpeter equation (BSE). The quasiparticle eigenvalues were obtained starting from the PBE energies and wave functions by solving the quasiparticle (QP) equation[29]:
\[[T+V_{ext}+V_{H}+\sum(E_{m}^{QP})]\psi_{m}^{QP}=E_{m}^{QP}\psi_{m}^{QP}\,, \tag{1}\]
where the self-energy operator \(\sum\) is calculated in the single-shot flavor (GoW\({}_{0}\)) of the GW approximation[56]. The other terms in Eq. (1) are the kinetic energy (\(T\)), the external potential accounting for the electron-nuclear attraction (\(V_{ext}\)), and the Hartree potential (\(V_{h}\)); \(E_{m}^{QP}\) and \(\psi_{m}^{QP}\) are the single-particle energies and the wave functions corrected with the self-energy contribution. Optical absorption spectra were obtained by solving the BSE, the equation of
motion of the two-particle correlation function [57], on top of the QP electronic structure. In practice, the problem is cast into a Schrodinger-like equation with the form:
\[\big{(}E_{ck}^{QP}-E_{vk}^{QP}\big{)}A_{vck}^{S}+\sum_{c^{\prime}v^{\prime}}K_{( e-h)vck,v^{\prime}c^{\prime}k}(\Omega^{s})A_{c^{\prime}v^{\prime}k}^{S}= \Omega^{s}A_{vck}^{s}\, \tag{2}\]
where \(A_{vck}^{S}\) are exciton amplitudes, \(K_{(e-h)}\) is the kernel describing the electron-hole (e-h) interactions, and \(\Omega^{s}\) are the excitation energies. The coefficients, \(A_{vck}^{S}\), provide information about the single-particle transitions contributing to the exciton. They can be visualized in the reciprocal space using the so-called exciton weights defined as: [58]
\[w_{vk}^{S}=\sum_{c}\big{|}A_{vck}^{S}\big{|}\,\qquad w_{ck}^{S}=\sum_{v}\big{|}A_{ vck}^{S}\big{|}\]
where \(S\) is the index of the exciton.
In the GW calculations, we chose a total of 240 bands, a cutoff energy of 100 eV, 100 frequency points, and a Gaussian smearing of 50 meV. The **k**-point mesh to sample the Brillouin zone was doubled with respect to the choice adopted for the DFT calculations (24\(\times\)24\(\times\)1), except for WSi\({}_{2}\)Sb\({}_{4}\) where an 18\(\times\)18\(\times\)1 mesh was used. A plane wave energy cutoff of 400 eV was adopted in these calculations. A set of 4 valence bands and 8 conduction bands was used to construct and solve the BSE.
## 5 Data Availability
The data to reproduce the plots and findings within this paper are available from the corresponding author(s) upon reasonable request.
## 6 Conflict of Interests
The authors declare no competing financial or non-financial interests.
## Acknowledgements
M.S.R and C.C. thank the funding by the Lower Saxony Ministry of Science and Culture (programs Professorinnen fur Niedersachsen and "Digitalization in the natural sciences", project SMART), by the Quanter ERA II European Union's Horizon 2020 research and innovation programme under the EQUAISE project, Grant Agreement No. 101017733, and by the Federal Ministry for Education and Research (Professorinnenprogramm III). The computational resources were provided by the high-performance computing center of ZIH Dresden and by the by the North German Computer Alliance (project nip00063). M.S.R., T.W., and A.K. thank the Deutsche Forschungsgemeinschaft (project GRK 2247/1 (QM3) and project CRC1415, number 417590517) for financial support and the high-performance computing center of ZIH Dresden for computational resources. A.K. also acknowledges association with priority program (project SPP2244 (2DMP)). T.W. also acknowledges the financial support of National Science Centre, Poland within Project No. 2021/41/N/ST3/04516
|